AMD Unveils World’s First 7nm Datacenter GPUs

– Powering the Next Era of Artificial Intelligence, Cloud Computing and High Performance Computing (HPC)

AMD Radeon Instinct™ MI60 and MI50 accelerators with supercharged compute performance, high-speed connectivity, fast memory bandwidth and updated ROCm open software platform power the most demanding deep learning, HPC, cloud and rendering applications

San Francisco, Calif.

AMD (NASDAQ: AMD) today announced the AMD Radeon Instinct™ MI60 and MI50 accelerators, the world’s first 7nm datacenter GPUs, designed to deliver the compute performance required for next-generation deep learning, HPC, cloud computing and rendering applications. Researchers, scientists and developers will use AMD Radeon Instinct™ accelerators to solve tough and interesting challenges, including large-scale simulations, climate change, computational biology, disease prevention and more.

“Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads,” said David Wang, senior vice president of engineering, Radeon Technologies Group at AMD. “Combining world-class performance and a flexible architecture with a robust software platform and the industry’s leading-edge ROCm open software ecosystem, the new AMD Radeon Instinct™ accelerators provide the critical components needed to solve the most difficult cloud computing challenges today and into the future.”

The AMD Radeon Instinct™ MI60 and MI50 accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the types of workloads these accelerators can address, including a range of HPC and deep learning applications. The new AMD Radeon Instinct™ MI60 and MI50 accelerators were designed to efficiently process workloads such as rapidly training complex neural networks, delivering higher levels of floating-point performance, greater efficiencies and new features for datacenter and departmental deployments¹.
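
Mixed precision in this sense means running the compute-heavy forward and backward passes in FP16 while keeping an FP32 copy of the weights for the optimizer update. The sketch below is a minimal, hedged illustration of that pattern in PyTorch; the model, tensor shapes and learning rate are placeholders, it assumes a ROCm- or CUDA-capable GPU is present, and it is framework-level only rather than anything specific to the Radeon Instinct software stack.

```python
# Minimal mixed-precision training sketch (illustrative only; model, shapes
# and hyperparameters are made up, and a GPU is assumed to be available).
import torch

device = torch.device("cuda")  # ROCm builds of PyTorch expose GPUs via the "cuda" device type

model_fp32 = torch.nn.Linear(512, 10).to(device)        # FP32 "master" weights used by the optimizer
model_fp16 = torch.nn.Linear(512, 10).to(device).half() # FP16 working copy used for compute
model_fp16.load_state_dict({k: v.half() for k, v in model_fp32.state_dict().items()})

optimizer = torch.optim.SGD(model_fp32.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(64, 512, device=device).half()          # inputs and activations in FP16
y = torch.randint(0, 10, (64,), device=device)

logits = model_fp16(x)                                   # FP16 forward pass
loss = loss_fn(logits.float(), y)                        # compute the loss in FP32
loss.backward()                                          # FP16 gradients on the working copy

# Accumulate gradients into the FP32 master weights, step, then refresh the FP16 copy.
for p32, p16 in zip(model_fp32.parameters(), model_fp16.parameters()):
    p32.grad = p16.grad.float()
optimizer.step()
with torch.no_grad():
    for p32, p16 in zip(model_fp32.parameters(), model_fp16.parameters()):
        p16.copy_(p32.half())
```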

The AMD Radeon Instinct™ MI60 and MI50 accelerators provide ultra-fast floating-point performance and hyper-fast HBM2 (second-generation High-Bandwidth Memory) with up to 1 TB/s of memory bandwidth. They are also the first GPUs capable of supporting next-generation PCIe® 4.0² interconnect, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies³, and feature AMD Infinity Fabric™ Link GPU interconnect technology that enables GPU-to-GPU communications up to 6X faster than PCIe® Gen 3 interconnect speeds⁴.
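
As a hedged sanity check on the 2X and 6X comparisons above, the small calculation below uses commonly quoted x16 link rates (roughly 32 GB/s bidirectional for PCIe 3.0 and 64 GB/s for PCIe 4.0) together with the 200 GB/s Infinity Fabric figure cited in the feature list below; the PCIe numbers are general assumptions rather than values stated in this announcement.

```python
# Rough bandwidth-ratio check (PCIe x16 rates are assumed, not from this release).
pcie3_x16_gbs = 32.0           # ~16 GB/s per direction, ~32 GB/s bidirectional
pcie4_x16_gbs = 64.0           # PCIe 4.0 doubles the per-lane signaling rate
infinity_fabric_gbs = 200.0    # two Infinity Fabric Links per GPU, per the feature list below

print(f"PCIe 4.0 vs PCIe 3.0:        {pcie4_x16_gbs / pcie3_x16_gbs:.1f}x")        # ~2.0x
print(f"Infinity Fabric vs PCIe 3.0: {infinity_fabric_gbs / pcie3_x16_gbs:.1f}x")  # ~6.3x
```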

AMD also announced a new version of the ROCm open software platform for accelerated computing that supports the architectural features of the new accelerators, including optimized deep learning operations (DLOPS) and the AMD Infinity Fabric™ Link GPU interconnect technology. Designed for scale, ROCm allows customers to deploy high-performance, energy-efficient heterogeneous computing systems in an open environment.

“Google believes that open source is good for everyone,” said Rajat Monga, engineering director, TensorFlow, Google. “We’ve seen how helpful it can be to open source machine learning technology, and we’re glad to see AMD embracing it. With the ROCm open software platform, TensorFlow users will benefit from GPU acceleration and a more robust open source machine learning ecosystem.”

Key features of the AMD Radeon Instinct™ MI60 and MI50 accelerators include:

  • Optimized Deep Learning Operations: Provides flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities to meet growing demand for dynamic and ever-changing workloads, from training complex neural networks to running inference against those trained networks.
  • World’s Fastest Double Precision PCIe®² Accelerator⁵: The AMD Radeon Instinct™ MI60 is the world’s fastest double precision PCIe 4.0 capable accelerator, delivering up to 7.4 TFLOPS peak FP64 performance⁵ (a rough reconstruction of this figure follows the list below), allowing scientists and researchers to more efficiently process HPC applications across a range of industries including life sciences, energy, finance, automotive, aerospace, academics, government, defense and more. The AMD Radeon Instinct™ MI50 delivers up to 6.7 TFLOPS peak FP64 performance¹, while providing an efficient, cost-effective solution for a variety of deep learning workloads, as well as enabling high reuse in Virtual Desktop Infrastructure (VDI), Desktop-as-a-Service (DaaS) and cloud environments.
  • Up to 6X Faster Data Transfer: Two Infinity Fabric™ Links per GPU deliver up to 200 GB/s of peer-to-peer bandwidth – up to 6X faster than PCIe 3.0 alone⁴ – and enable the connection of up to 4 GPUs in a hive ring configuration (2 hives in 8 GPU servers).
  • Ultra-Fast HBM2 Memory: The AMD Radeon Instinct™ MI60 provides 32GB of HBM2 error-correcting code (ECC) memory⁶, and the Radeon Instinct™ MI50 provides 16GB of HBM2 ECC memory. Both GPUs provide full-chip ECC and Reliability, Availability and Serviceability (RAS)⁷ technologies, which are critical for delivering more accurate compute results in large-scale HPC deployments.
  • Secure Virtualized Workload Support: AMD MxGPU Technology, the industry’s only hardware-based GPU virtualization solution, based on the industry-standard SR-IOV (Single Root I/O Virtualization) technology, makes it difficult for hackers to attack at the hardware level, helping provide security for virtualized cloud deployments.
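
As promised in the double-precision item above, here is a rough reconstruction of the 7.4 TFLOPS figure. It is only a back-of-the-envelope sketch: the stream-processor count (4,096) and the roughly 1.8 GHz peak engine clock are assumed Vega-generation figures that do not appear in this release, and FP64 is assumed to run at half the FP32 rate.

```python
# Back-of-the-envelope peak-FLOPS arithmetic for the MI60 (assumed figures:
# 4,096 stream processors and a ~1.8 GHz peak clock are not stated in this release).
stream_processors = 4096       # 64 compute units x 64 stream processors each (assumption)
peak_clock_ghz = 1.8           # assumed peak engine clock
flops_per_sp_per_cycle = 2     # one fused multiply-add counts as 2 FLOPs

peak_fp32_tflops = stream_processors * flops_per_sp_per_cycle * peak_clock_ghz / 1000
peak_fp64_tflops = peak_fp32_tflops / 2   # assumes a 1:2 FP64:FP32 rate

print(f"Peak FP32: {peak_fp32_tflops:.1f} TFLOPS")  # ~14.7 TFLOPS
print(f"Peak FP64: {peak_fp64_tflops:.1f} TFLOPS")  # ~7.4 TFLOPS, matching the headline figure
```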

Updated ROCm Open Software Platform

AMD today also announced a new version of its ROCm open software platform designed to speed development of high-performance, energy-efficient heterogeneous computing systems. In addition to support for the new Radeon Instinct™ accelerators, ROCm software version 2.0 provides updated math libraries for the new DLOPS; support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu; optimizations of existing components; and support for the latest versions of the most popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others. Learn more about ROCm 2.0 software here.
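
For readers who want to confirm that a ROCm-enabled framework build actually sees the accelerator, the hedged snippet below uses standard TensorFlow 1.x device queries; it assumes the ROCm port of TensorFlow 1.11 mentioned above is installed, and the device names it prints are environment-specific.

```python
# Quick device-visibility check on a TensorFlow 1.x build
# (assumes the ROCm port of TensorFlow 1.11 is installed).
import tensorflow as tf
from tensorflow.python.client import device_lib

# A working GPU install should report one "GPU" entry per accelerator.
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type)

print("GPU available:", tf.test.is_gpu_available())

# Run a small op with device placement logging to confirm it lands on the GPU.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))
```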

Availability

The AMD Radeon Instinct™ MI60 accelerator is expected to ship to datacenter customers by the end of 2018. The AMD Radeon Instinct™ MI50 accelerator is expected to begin shipping to datacenter customers by the end of Q1 2019. The ROCm 2.0 open software platform is expected to be available by the end of 2018.

Supporting Resources

  • Visit the AMD Next Horizon event webpage to get the event materials
  • Learn more about AMD Radeon Instinct™ MI60 and MI50 accelerators
  • Learn more about AMD 7nm technology here
  • Learn more about the ROCm 2.0 open software platform here
  • Learn more about ROCm & MIOpen Docker Hub here
  • Become a fan of AMD on Facebook
  • Follow AMD Radeon Instinct on Twitter

About AMD

For more than 45 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.