Samsung Develops Industry’s First High Bandwidth Memory with AI Processing Power

Korea on February 17, 2021

The new architecture will deliver over twice the system performance
and reduce energy consumption by more than 70%

Samsung Electronics, the world leader in advanced memory technology, today announced that it has developed the industry's first High Bandwidth Memory (HBM) integrated with artificial intelligence (AI) processing power: the HBM-PIM. The new processing-in-memory (PIM) architecture brings powerful AI computing capabilities inside high-performance memory to accelerate large-scale processing in data centers, high-performance computing (HPC) systems and AI-enabled mobile applications.


Kwangil Park, senior vice president of Memory Product Planning at Samsung Electronics, stated, "Our groundbreaking HBM-PIM is the industry's first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training and inference. We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications."


Rick Stevens, Argonne's Associate Laboratory Director for Computing, Environment and Life Sciences, commented, "I'm delighted to see that Samsung is addressing the memory bandwidth/power challenges for HPC and AI computing. HBM-PIM design has demonstrated impressive performance and power gains on important classes of AI applications, so we look forward to working together to evaluate its performance on additional problems of interest to Argonne National Laboratory."


Most of today's computing systems are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of intricate data processing tasks. This sequential processing approach requires data to constantly move back and forth, resulting in a system-slowing bottleneck, especially when handling ever-increasing volumes of data.


Instead, the HBM-PIM brings processing power directly to where the data is stored by placing a DRAM-optimized AI engine inside each memory bank (a storage sub-unit), enabling parallel processing and minimizing data movement. When applied to Samsung's existing HBM2 Aquabolt solution, the new architecture is able to deliver over twice the system performance while reducing energy consumption by more than 70%. The HBM-PIM also does not require any hardware or software changes, allowing faster integration into existing systems.
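The data-movement saving described above can be sketched with a toy model. This is purely an illustration of the general processing-in-memory idea, not Samsung's actual hardware design; the bank sizes, the dot-product workload and the "elements moved" metric are all assumptions chosen for the example.

```python
# Toy model of the PIM idea (illustrative only, not Samsung's design):
# in the conventional path every element of every memory bank must cross
# the bus to the processor, while in the PIM path each bank computes its
# partial result in place and ships back only a single scalar.

def conventional_sum_of_dots(banks, weights):
    """Von Neumann style: move all bank data to the central processor."""
    moved = 0          # count of elements shipped across the memory bus
    total = 0.0
    for bank in banks:
        moved += len(bank)  # the entire bank travels to the processor
        total += sum(x * w for x, w in zip(bank, weights))
    return total, moved

def pim_sum_of_dots(banks, weights):
    """PIM style: an in-bank engine reduces each bank locally."""
    moved = 0
    total = 0.0
    for bank in banks:
        partial = sum(x * w for x, w in zip(bank, weights))  # computed in-bank
        moved += 1      # only the scalar partial result moves
        total += partial
    return total, moved

banks = [[1.0, 2.0], [3.0, 4.0]]   # hypothetical per-bank data
weights = [0.5, 0.5]

print(conventional_sum_of_dots(banks, weights))  # (5.0, 4)
print(pim_sum_of_dots(banks, weights))           # (5.0, 2)
```

Both paths produce the same result; the PIM path simply moves far fewer elements, which is the source of the bandwidth and energy savings the announcement describes.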


Samsung's paper on the HBM-PIM has been selected for presentation at the renowned International Solid-State Circuits Conference (ISSCC), held virtually through Feb. 22. Samsung's HBM-PIM is now being tested inside AI accelerators by leading AI solution partners, with all validations expected to be completed within the first half of this year.