AMD Processors Accelerating Performance of Top Supercomputers Worldwide

Growing preference for EPYC™ processors drove a 3.5x year-over-year increase in the number of AMD-powered supercomputers

SANTA CLARA, Calif., Nov. 16, 2021 (GLOBE NEWSWIRE) — During this year’s Supercomputing Conference 2021 (SC21), AMD (NASDAQ: AMD) is showcasing its expanded presence and growing preference in the high performance computing (HPC) industry with the exceptional innovation and adoption of AMD data center processors and accelerators. Customers across the industry continue to expand their use of AMD EPYC™ processors and AMD Instinct™ accelerators to power cutting-edge research needed to address some of the world’s biggest challenges in climate, life sciences, medicine, and more.

Growing preference for AMD is showcased in the latest Top500 list. AMD now powers 73 supercomputers, compared to 21 in the November 2020 list, a more than 3x year-over-year increase. Additionally, AMD powers four of the top ten most powerful supercomputers in the world, as well as the most powerful supercomputer in EMEA. Finally, AMD EPYC 7003 series processors, which launched eight months ago, are utilized by 17 of the 73 AMD-powered supercomputers on the list, demonstrating the rapid adoption of the latest generation of EPYC processors.
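For readers who want to sanity-check the headline multiple, the figures quoted above are enough on their own; the short Python sketch below simply redoes that arithmetic and is an illustration, not an official AMD calculation:

    # Top500 system counts quoted in this release
    amd_systems_nov_2021 = 73
    amd_systems_nov_2020 = 21
    epyc_7003_systems = 17  # AMD-powered systems already using 3rd Gen EPYC (7003 series)

    growth = amd_systems_nov_2021 / amd_systems_nov_2020
    print(f"Year-over-year multiple: {growth:.2f}x")  # ~3.48x, i.e. roughly 3.5x
    print(f"EPYC 7003 share of AMD systems: {epyc_7003_systems / amd_systems_nov_2021:.0%}")  # ~23%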

“The demands of supercomputing users have increased exponentially as the world seeks to accelerate research, reducing the time to discovery of valuable information,” said Forrest Norrod, senior vice president and general manager, Data Center and Embedded Solutions Business Group, AMD. “With AMD EPYC CPUs and Instinct accelerators, we continue to evolve our product offering to push the boundaries of data center technologies, enabling faster research, better outcomes and more impact on the world.”

AMD has also been recognized in the annual HPCwire Readers’ and Editors’ Choice Awards at SC21. The company won ten awards, including Best Sustainability Innovation in HPC, Best HPC Server Product and the Outstanding Leadership in HPC award presented to President and CEO Dr. Lisa Su.

Expanding Customer Base
AMD is engaged broadly across the HPC industry to deliver the performance and efficiency of AMD EPYC and AMD Instinct products, along with the ROCm™ open ecosystem, to speed research. Through high-profile installations like the ongoing deployment of Oak Ridge National Laboratory’s “Frontier” supercomputer, AMD is bringing the compute technologies and performance needed to support developments in current and future research across the world. Highlights of “Frontier” and other new HPC systems in the industry include:

A Year of Breakthrough Products and Research
This year AMD launched its AMD EPYC 7003 series processor, the world’s highest-performing server processor.1 Since then, there has been overwhelming adoption from partners across the industry who are driving discoveries in biomedicine, natural disaster prediction, clean energy, semiconductors, microelectronics and more.

Expanding on the features of the EPYC 7003 series processor, AMD recently previewed the 3rd Gen EPYC processor with AMD 3D V-Cache. By utilizing innovative packaging technology, which layers additional L3 cache onto EPYC 7003 series processors, AMD 3D V-Cache technology offers enhanced performance for the technical computing workloads prevalent in HPC. Microsoft Azure HPC virtual machines featuring 3rd Gen EPYC with AMD 3D V-Cache are currently available in Private Preview and will be available globally soon.

AMD also unveiled the world’s fastest HPC and AI accelerator2, the AMD Instinct MI250X. Designed with the AMD CDNA™ 2 architecture, the AMD Instinct MI200 series accelerators deliver up to 4.9x the peak FP64 performance versus competitive data center accelerators, which is critical for HPC applications like weather modeling2. The AMD Instinct MI200 series accelerators are also the first with over 100GB of high-bandwidth memory capacity, delivering up to 3.2 terabytes per second, the industry’s best aggregate bandwidth3.
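The 4.9x figure can be reproduced directly from the peak FP64 values listed in endnote 2 for the MI250X and the published A100 80GB results; the sketch below uses only those quoted numbers and is illustrative rather than an official benchmark:

    # Peak theoretical FP64 throughput in TFLOPS, as quoted in endnote 2 (MI200-01)
    mi250x_fp64_vector = 47.9   # AMD Instinct MI250X, FP64
    mi250x_fp64_matrix = 95.7   # AMD Instinct MI250X, FP64 Matrix
    a100_fp64          = 9.7    # NVIDIA A100 80GB, FP64
    a100_fp64_tensor   = 19.5   # NVIDIA A100 80GB, FP64 Tensor Core

    print(f"Vector FP64 ratio: {mi250x_fp64_vector / a100_fp64:.1f}x")                        # ~4.9x
    print(f"Matrix vs Tensor Core FP64 ratio: {mi250x_fp64_matrix / a100_fp64_tensor:.1f}x")  # ~4.9x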

Supporting Resources

About AMD
For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.

AMD, the AMD Arrow logo, AMD CDNA, EPYC, AMD Instinct and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

_____________________________

1 MLN-016: Results as of 01/28/2021 using SPECrate®2017_int_base. The AMD EPYC 7763 measured estimated score of 798 is higher than the current highest 2P server with an AMD EPYC 7H12 and a score of 717, https://spec.org/cpu2017/results/res2020q2/cpu2017-20200525-22554.pdf. OEM published score(s) for 3rd Gen EPYC may vary. SPEC®, SPECrate® and SPEC CPU® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information.
2 MI200-01: World’s fastest data center GPU is the AMD Instinct™ MI250X. Calculations conducted by AMD Performance Labs as of Sep 15, 2021, for the AMD Instinct™ MI250X (128GB HBM2e OAM module) accelerator at 1,700 MHz peak boost engine clock resulted in 95.7 TFLOPS peak theoretical double precision (FP64 Matrix), 47.9 TFLOPS peak theoretical double precision (FP64), 95.7 TFLOPS peak theoretical single precision matrix (FP32 Matrix), 47.9 TFLOPS peak theoretical single precision (FP32), 383.0 TFLOPS peak theoretical half precision (FP16), and 383.0 TFLOPS peak theoretical Bfloat16 format precision (BF16) floating-point performance. Calculations conducted by AMD Performance Labs as of Sep 18, 2020 for the AMD Instinct™ MI100 (32GB HBM2 PCIe® card) accelerator at 1,502 MHz peak boost engine clock resulted in 11.54 TFLOPS peak theoretical double precision (FP64), 46.1 TFLOPS peak theoretical single precision matrix (FP32), 23.1 TFLOPS peak theoretical single precision (FP32), 184.6 TFLOPS peak theoretical half precision (FP16) floating-point performance. Published results on the NVIDIA Ampere A100 (80GB) GPU accelerator, boost engine clock of 1410 MHz, resulted in 19.5 TFLOPS peak double precision tensor cores (FP64 Tensor Core), 9.7 TFLOPS peak double precision (FP64), 19.5 TFLOPS peak single precision (FP32), 78 TFLOPS peak half precision (FP16), 312 TFLOPS peak half precision (FP16 Tensor Flow), 39 TFLOPS peak Bfloat16 (BF16), 312 TFLOPS peak Bfloat16 format precision (BF16 Tensor Flow), theoretical floating-point performance. The TF32 data format is not IEEE compliant and not included in this comparison. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf, page 15, Table 1.
3 MI200-07: Calculations conducted by AMD Performance Labs as of Sep 21, 2021, for the AMD Instinct™ MI250X and MI250 (128GB HBM2e) OAM accelerators designed with AMD CDNA™ 2 6nm FinFET process technology at 1,600 MHz peak memory clock resulted in 128GB HBM2e memory capacity and 3.2768 TB/s peak theoretical memory bandwidth performance. MI250/MI250X memory bus interface is 4,096 bits times 2 die and memory data rate is 3.20 Gbps for total memory bandwidth of 3.2768 TB/s ((3.20 Gbps*(4,096 bits*2))/8). The highest published results on the NVIDIA Ampere A100 (80GB) SXM GPU accelerator resulted in 80GB HBM2e memory capacity and 2.039 TB/s GPU memory bandwidth performance. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf
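The 3.2768 TB/s aggregate bandwidth in this endnote follows directly from the stated bus width and data rate; as a minimal illustration of that arithmetic:

    # Figures stated in endnote 3 (MI200-07)
    data_rate_gbps = 3.20    # memory data rate, gigabits per second per pin
    bus_width_bits = 4096    # HBM2e interface width per die
    dies_per_module = 2      # MI250/MI250X OAM carries two dies

    bandwidth_gb_s = data_rate_gbps * bus_width_bits * dies_per_module / 8  # bits -> bytes
    print(f"Peak memory bandwidth: {bandwidth_gb_s / 1000:.4f} TB/s")       # 3.2768 TB/s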