SEOUL, Feb. 17 (Yonhap) — Samsung Electronics Co. said Wednesday it has developed an artificial intelligence (AI) processor-embedded high bandwidth memory (HBM) chip that boasts low energy consumption and enhanced performance.
Using processing-in-memory (PIM) technology, the South Korean tech giant said it integrated AI engines onto its HBM2 Aquabolt, becoming the first in the industry to develop an HBM-PIM semiconductor.
PIM integrates logic-based processing units within the memory itself. The HBM2 Aquabolt, Samsung's second-generation HBM DRAM chip, has been on the market since January 2018 and is mainly used for high-performance computing and AI applications.
Samsung, the world’s largest memory chip producer, said its HBM-PIM solution more than doubles the performance of an AI system and reduces its energy consumption by 70 percent compared with the existing HBM2.
The latest product also supports the existing HBM interface, meaning customers can set up an AI accelerator system without changing hardware or software.
An AI accelerator is computer hardware that is designed specifically to handle AI requirements.
In the standard computer architecture, also known as the von Neumann architecture, the processor and memory are separate and data are exchanged between the two. In such a configuration, latency occurs, especially when large amounts of data are moved.
To overcome latency issues, Samsung said it installed AI engines into each memory bank, maximizing parallel processing to boost performance.
HBM-PIM also minimizes data movement between the processor and memory, thereby enhancing energy efficiency for an AI accelerator system.
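As a rough intuition for the two configurations described above, the toy Python model below compares a conventional setup, where every data element must cross the processor-memory bus before it can be processed, with a PIM-style setup, where each memory bank processes its own slice in parallel and only small per-bank results cross the bus. All of the cost constants and the bank count are invented for illustration and do not reflect Samsung's actual figures.

```python
# Illustrative-only cost model of the von Neumann bottleneck vs. PIM.
# All constants (transfer_cost, compute_cost, n_banks) are hypothetical.

def von_neumann_cost(n_elements, transfer_cost=10, compute_cost=1):
    """Every element crosses the processor-memory bus before compute."""
    return n_elements * (transfer_cost + compute_cost)

def pim_cost(n_elements, n_banks=16, compute_cost=1, result_transfer=10):
    """Each bank computes over its own slice in parallel; only one
    small result per bank crosses the bus afterward."""
    per_bank_work = (n_elements // n_banks) * compute_cost  # banks run concurrently
    return per_bank_work + n_banks * result_transfer

n = 1_000_000
print(von_neumann_cost(n))  # 11000000 cost units
print(pim_cost(n))          # 62660 cost units
```

Even in this crude model, the dominant term in the conventional case is data transfer, while the PIM case is dominated by (parallelized) compute, which is the latency argument the article makes.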
Samsung said it will install its HBM-PIM solution in customers' AI accelerator systems for verification within the first half of the year, and that it will actively cooperate with clients to standardize the PIM platform and its ecosystem.
Samsung said its PIM technology is further explained in a paper submitted to the International Solid-State Circuits Conference (ISSCC).