Samsung and Naver's AI Chip Claims 8x Power Efficiency Over NVIDIA


TL;DR: Samsung and Naver credit their chip's claimed eightfold power-efficiency gain over NVIDIA's AI GPUs to the integration of LPDDR memory into the chip's architecture.

Samsung Electronics and Naver have joined forces to develop a specialized semiconductor solution aimed at powering large-scale artificial intelligence (AI) models. This collaboration, initiated in late 2022, brings together Samsung’s renowned chip production capabilities and cutting-edge memory technologies with Naver’s deep expertise in AI algorithms and software solutions.

The first fruit of this collaborative effort is an AI semiconductor solution based on a field-programmable gate array (FPGA), designed for inference tasks and tailored to Naver's HyperCLOVA X large language model. The headline claim is an eightfold increase in power efficiency compared to NVIDIA's AI GPUs, according to data presented by the companies.

While the specific details of the chip's design remain closely guarded, Samsung and Naver attribute the leap in performance to the integration of Low-Power Double Data Rate (LPDDR) DRAM into the chip's architecture. This approach to memory is, by their account, the main driver of the solution's power efficiency.

Naver's HyperCLOVA large language model is a pivotal element in achieving these results. Naver says it is committed to ongoing improvements to HyperCLOVA, with a focus on refining compression algorithms and streamlining the model for greater efficiency. The current iteration of HyperCLOVA has a parameter count exceeding 200 billion, underlining its scale among large language models.
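To give a sense of why compression matters at this scale, here is a back-of-envelope sketch of the raw weight storage a ~200-billion-parameter model needs at different numeric precisions. The 200 billion figure comes from the article; the precision choices are illustrative assumptions, not disclosed details of HyperCLOVA.

```python
# Rough weight-storage estimate for a ~200B-parameter model.
# Illustrative only; HyperCLOVA's actual formats are not public.

PARAMS = 200e9  # parameter count cited for HyperCLOVA


def model_size_gb(params: float, bytes_per_param: float) -> float:
    """Raw weight storage in gigabytes at a given numeric precision."""
    return params * bytes_per_param / 1e9


for label, nbytes in [("FP32", 4), ("FP16/BF16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{label:10s} ~{model_size_gb(PARAMS, nbytes):,.0f} GB")
```

Even halving precision from FP16 to INT8 cuts the footprint from roughly 400 GB to 200 GB, which is why compression work directly affects how much memory the inference hardware must provide.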

Suk Geun Chung, Head of Naver CLOVA CIC, expressed optimism about the collaboration, stating, “Combining our acquired knowledge and know-how from HyperCLOVA with Samsung’s semiconductor manufacturing prowess, we believe we can create an entirely new class of solutions that can better tackle the challenges of today’s AI technologies.”

The collaboration between Samsung and Naver transcends the mere development of a high-performance AI chip. It encompasses the utilization of Samsung’s advanced process technologies and integrates high-tech memory solutions such as computational storage, processing-in-memory (PIM), processing-near-memory (PNM), and Compute Express Link (CXL). Naver’s expertise in software and AI algorithms complements Samsung’s hardware prowess, creating a synergy aimed at addressing memory bottlenecks in large-scale AI systems.
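The "memory bottleneck" mentioned above can be illustrated with a simple estimate: autoregressive decoding must stream essentially every weight through the processor for each generated token, so memory bandwidth, not compute, often sets the ceiling. The bandwidth numbers below are illustrative assumptions, not figures from Samsung or Naver.

```python
# Why memory bandwidth often limits LLM inference: each generated token
# requires reading roughly all model weights once, so decode speed is
# bounded by bandwidth / model size. All numbers are assumptions.


def tokens_per_second(model_bytes: float, bandwidth_bytes_s: float) -> float:
    """Upper bound on autoregressive decode speed if weight reads dominate."""
    return bandwidth_bytes_s / model_bytes


model_bytes = 200e9 * 2    # 200B params at 2 bytes each (FP16), assumed
hbm_bw = 2.0e12            # ~2 TB/s, typical of a recent HBM stack (assumed)
lpddr_bw = 0.25e12         # ~0.25 TB/s aggregate LPDDR bandwidth (assumed)

print(f"HBM-bound decode:   ~{tokens_per_second(model_bytes, hbm_bw):.1f} tok/s")
print(f"LPDDR-bound decode: ~{tokens_per_second(model_bytes, lpddr_bw):.2f} tok/s")
```

Under these assumptions the bound is bandwidth divided by model size, which is exactly why techniques like PIM, PNM, and CXL that move compute closer to memory, or widen the path to it, are the focus of the collaboration.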

Jinman Han, Executive Vice President of Memory Global Sales & Marketing at Samsung Electronics, highlighted the strategic significance of this collaboration. “Through our collaboration with Naver, we will develop cutting-edge semiconductor solutions to solve the memory bottleneck in large-scale AI systems,” stated Han. He emphasized Samsung’s commitment to expanding its market-leading memory lineup, including computational storage, PIM, and more, to meet the escalating demands of an ever-expanding data landscape.

As NVIDIA maintains its stronghold on the AI chip market, the entry of Samsung and Naver promises to inject diversity and innovation. Developers seeking robust solutions for AI applications now have a compelling alternative to explore, marking a notable milestone in the evolution of AI chip technologies.
