AMD Announces World’s Fastest HPC Accelerator for Scientific Research¹

– AMD Instinct™ MI100 accelerators revolutionize high-performance computing (HPC) and AI with industry-leading compute performance –

– First GPU accelerator with new AMD CDNA architecture engineered for the exascale era –

SANTA CLARA, Calif. — November 16, 2020 — AMD (NASDAQ: AMD) today announced the new AMD Instinct™ MI100 accelerator – the world’s fastest HPC GPU and the first x86 server GPU to surpass the 10 teraflops (FP64) performance barrier.1 Supported by new accelerated compute platforms from Dell, Gigabyte, HPE, and Supermicro, the MI100, combined with AMD EPYC™ CPUs and the ROCm™ 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era.

Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads.2 With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in FP16 theoretical peak floating point performance for AI training workloads compared to AMD’s prior generation accelerators.3

“Today AMD takes a major step forward in the journey toward exascale computing as we unveil the AMD Instinct MI100 – the world’s fastest HPC GPU,” said Brad McCredie, corporate vice president, Data Center GPU and Accelerated Processing, AMD. “Squarely targeted toward the workloads that matter in scientific computing, our latest accelerator, when combined with the AMD ROCm open software platform, is designed to provide scientists and researchers a superior foundation for their work in HPC.”

Open Software Platform for the Exascale Era

The AMD ROCm developer software provides the foundation for exascale computing. An open-source toolset consisting of compilers, programming APIs and libraries, ROCm is used by exascale software developers to create high-performance applications. ROCm 4.0 has been optimized to deliver performance at scale for MI100-based systems, and its compiler has been upgraded to be open source and unified to support both OpenMP® 5.0 and HIP. The PyTorch and TensorFlow frameworks, which have been optimized with ROCm 4.0, can now achieve higher performance with the MI100.7,8 ROCm 4.0 is the latest offering for HPC, ML and AI application developers, allowing them to create performance-portable software.

We’ve recei­ved ear­ly access to the MI100 acce­le­ra­tor, and the preli­mi­na­ry results are very encou­ra­ging. We’ve typi­cal­ly seen signi­fi­cant per­for­mance boosts, up to 2–3x com­pa­red to other GPUs,” said Bron­son Mes­ser, direc­tor of sci­ence, Oak Ridge Lea­ders­hip Com­pu­ting Faci­li­ty. “What’s also important to reco­gni­ze is the impact soft­ware has on per­for­mance. The fact that the ROCm open soft­ware plat­form and HIP deve­lo­per tool are open source and work on a varie­ty of plat­forms, it is some­thing that we have been abso­lute­ly almost obses­sed with sin­ce we fiel­ded the very first hybrid CPU/GPU system.”

 

Key capabilities and features of the AMD Instinct MI100 accelerator include:

  • All-New AMD CDNA Architecture – Engineered to power AMD GPUs for the exascale era, the AMD CDNA architecture at the heart of the MI100 accelerator offers exceptional performance and power efficiency.
  • Leading FP64 and FP32 Performance for HPC Workloads – Delivers industry-leading 11.5 TFLOPS peak FP64 performance and 23.1 TFLOPS peak FP32 performance, enabling scientists and researchers across the globe to accelerate discoveries in industries including life sciences, energy, finance, academics, government, defense and more.1
  • All-New Matrix Core Technology for HPC and AI – Supercharged performance for a full range of single and mixed precision matrix operations, such as FP32, FP16, bFloat16, Int8 and Int4, engineered to boost the convergence of HPC and AI.
  • 2nd Gen AMD Infinity Fabric™ Technology – Instinct MI100 provides ~2x the peer-to-peer (P2P) peak I/O bandwidth over PCIe® 4.0, with up to 340 GB/s of aggregate bandwidth per card with three AMD Infinity Fabric™ links.4 In a server, MI100 GPUs can be configured with up to two fully connected quad-GPU hives, each providing up to 552 GB/s of P2P I/O bandwidth for fast data sharing.4
  • Ultra-Fast HBM2 Memory – Features 32GB of high-bandwidth HBM2 memory at a clock rate of 1.2 GHz, delivering an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory.5
  • Support for Industry’s Latest PCIe® Gen 4.0 – Designed with the latest PCIe Gen 4.0 technology support, providing up to 64 GB/s peak theoretical transport data bandwidth from CPU to GPU.6
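The Infinity Fabric and PCIe bandwidth figures in the list above follow from simple arithmetic spelled out in footnote 4; here is a minimal sketch of that math, taking the per-link rate (92 GB/s) and link count (three per card) from the footnote:

```python
# Peak-bandwidth arithmetic behind the Infinity Fabric figures above.
# Per-link rate and link count are taken from footnote 4 of this release.

IF_LINK_GBPS = 92   # peak theoretical bandwidth per Infinity Fabric link, GB/s
IF_LINKS = 3        # Infinity Fabric links per MI100 card
PCIE4_GBPS = 64     # peak theoretical PCIe Gen 4.0 CPU-to-GPU bandwidth, GB/s

p2p_per_card = IF_LINKS * IF_LINK_GBPS          # 276 GB/s GPU-to-GPU per card
aggregate_per_card = p2p_per_card + PCIE4_GBPS  # 340 GB/s total card I/O
# A fully connected 4-GPU hive has 4 * 3 link endpoints, each link shared by two GPUs:
quad_hive_p2p = 4 * IF_LINKS * IF_LINK_GBPS // 2  # 552 GB/s per hive
per_server_p2p = 2 * quad_hive_p2p                # 1104 GB/s (~1.1 TB/s) for dual hives

print(p2p_per_card, aggregate_per_card, quad_hive_p2p, per_server_p2p)
# 276 340 552 1104
```

These totals match footnote 4 exactly; the "~2x over PCIe 4.0" claim compares 276 GB/s of link bandwidth against the 64 GB/s PCIe baseline with some headroom.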

 

Available Server Solutions

The AMD Instinct MI100 accelerators are expected by the end of the year in systems from major OEM and ODM partners in the enterprise markets, including:

Dell

“Dell EMC PowerEdge servers will support the new AMD Instinct MI100, which will enable faster insights from data. This would help our customers achieve more robust and efficient HPC and AI results rapidly,” said Ravi Pendekanti, senior vice president, PowerEdge Servers, Dell Technologies. “AMD has been a valued partner in our support for advancing innovation in the data center. The high-performance capabilities of AMD Instinct accelerators are a natural fit for our PowerEdge server AI & HPC portfolio.”

Gigabyte

We’re plea­sed to again work with AMD as a stra­te­gic part­ner offe­ring cus­to­mers ser­ver hard­ware for high per­for­mance com­pu­ting,” said Alan Chen, assi­stant vice pre­si­dent in NCBU, GIGABYTE. “AMD Instinct MI100 acce­le­ra­tors repre­sent the next level of high-per­for­mance com­pu­ting in the data cen­ter, brin­ging grea­ter con­nec­ti­vi­ty and data band­width for ener­gy rese­arch, mole­cu­lar dyna­mics, and deep lear­ning trai­ning. As a new acce­le­ra­tor in the GIGABYTE port­fo­lio, our cus­to­mers can look to bene­fit from impro­ved per­for­mance across a ran­ge of sci­en­ti­fic and indus­tri­al HPC workloads.”

Hewlett Packard Enterprise (HPE)

“Customers use HPE Apollo systems for purpose-built capabilities and performance to tackle a range of complex, data-intensive workloads across high-performance computing (HPC), deep learning and analytics,” said Bill Mannel, vice president and general manager, HPC at HPE. “With the introduction of the new HPE Apollo 6500 Gen10 Plus system, we are further advancing our portfolio to improve workload performance by supporting the new AMD Instinct MI100 accelerator, which enables greater connectivity and data processing, alongside the 2nd Gen AMD EPYC™ processor. We look forward to continuing our collaboration with AMD to expand our offerings with its latest CPUs and accelerators.”

Supermicro

We’re exci­ted that AMD is making a big impact in high-per­for­mance com­pu­ting with AMD Instinct MI100 GPU acce­le­ra­tors,” said Vik Malya­la, seni­or vice pre­si­dent, field app­li­ca­ti­on engi­nee­ring and busi­ness deve­lo­p­ment, Super­mi­cro. “With the com­bi­na­ti­on of the com­pu­te power gai­ned with the new CDNA archi­tec­tu­re, along with the high memo­ry and GPU peer-to-peer band­width the MI100 brings, our cus­to­mers will get access to gre­at solu­ti­ons that will meet their acce­le­ra­ted com­pu­te requi­re­ments and cri­ti­cal enter­pri­se workloads. The AMD Instinct MI100 will be a gre­at addi­ti­on for our mul­ti-GPU ser­vers and our exten­si­ve port­fo­lio of high-per­for­mance sys­tems and ser­ver buil­ding block solutions.”

 

MI100 Specifications

Compute Units: 120
Stream Processors: 7,680
FP64 TFLOPS (Peak): Up to 11.5
FP32 TFLOPS (Peak): Up to 23.1
FP32 Matrix TFLOPS (Peak): Up to 46.1
FP16/FP16 Matrix TFLOPS (Peak): Up to 184.6
INT4 | INT8 TOPS (Peak): Up to 184.6
bFloat16 TFLOPS (Peak): Up to 92.3
HBM2 ECC Memory: 32GB
Memory Bandwidth: Up to 1.23 TB/s
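The peak-TFLOPS figures above are consistent with the shader count and boost clock given in footnote 1 (7,680 stream processors at 1,502 MHz). A sketch of that arithmetic follows; note the rate multipliers for FP64, FP32 Matrix, and FP16 Matrix are inferred from the ratios in the table, not stated by AMD here:

```python
# Sketch: deriving the peak-TFLOPS entries from 7,680 stream processors at a
# 1,502 MHz peak boost clock (footnote 1). The FP64/Matrix rate multipliers
# are assumptions inferred from the table's ratios.

STREAM_PROCESSORS = 7680
BOOST_CLOCK_GHZ = 1.502

# One FMA (2 floating-point ops) per stream processor per clock at FP32:
fp32 = STREAM_PROCESSORS * 2 * BOOST_CLOCK_GHZ / 1000  # TFLOPS
fp64 = fp32 / 2          # FP64 vector rate at half the FP32 rate
fp32_matrix = fp32 * 2   # Matrix Core FP32 at 2x the vector rate
fp16_matrix = fp32 * 8   # Matrix Core FP16 at 8x the vector rate

print(round(fp32, 1), round(fp64, 1), round(fp32_matrix, 1), round(fp16_matrix, 1))
# 23.1 11.5 46.1 184.6
```

The computed values line up with the table and with the 11.54/23.1/46.1/184.6 TFLOPS figures in footnote 1.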

 

Supporting Resources

 

About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms and the data center. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.

 

CAUTIONARY STATEMENT
This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including the AMD Instinct™ MI100 accelerator, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as “would,” “may,” “expects,” “believes,” “plans,” “intends,” “projects” and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD’s control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; the ability of third party manufacturers to manufacture AMD’s products on a timely basis in sufficient quantities and using competitive technologies; expected manufacturing yields for AMD’s products; the availability of essential equipment, materials or manufacturing processes; AMD’s ability to introduce products on a timely basis with features and performance levels that provide value to its customers; global economic uncertainty; the loss of a significant customer; AMD’s ability to generate revenue from its semi-custom SoC products; the impact of the COVID-19 pandemic on AMD’s business, financial condition and results of operations; political, legal, economic risks and natural disasters; the impact of government actions and regulations such as export administration regulations, tariffs and trade protection measures; the impact of acquisitions, joint ventures and/or investments on AMD’s business, including the announced acquisition of Xilinx, and the failure to integrate acquired businesses; AMD’s ability to complete the Xilinx merger; the impact of the announcement and pendency of the Xilinx merger on AMD’s business; potential security vulnerabilities; potential IT outages, data loss, data breaches and cyber-attacks; uncertainties involving the ordering and shipment of AMD’s products; quarterly and seasonal sales patterns; the restrictions imposed by agreements governing AMD’s notes and the revolving credit facility; the competitive markets in which AMD’s products are sold; market conditions of the industries in which AMD products are sold; AMD’s reliance on third-party intellectual property to design and introduce new products in a timely manner; AMD’s reliance on third-party companies for the design, manufacture and supply of motherboards, software and other computer platform components; AMD’s reliance on Microsoft Corporation and other software vendors’ support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; the potential dilutive effect if the 2.125% Convertible Senior Notes due 2026 are converted; future impairments of goodwill and technology license purchases; AMD’s ability to attract and retain qualified personnel; AMD’s ability to generate sufficient revenue and operating cash flow or obtain external financing for research and development or other strategic investments; AMD’s indebtedness; AMD’s ability to generate sufficient cash to service its debt obligations or meet its working capital requirements; AMD’s ability to repurchase its outstanding debt in the event of a change of control; the cyclical nature of the semiconductor industry; the impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all industry-standard software and hardware; costs related to defective products; the efficiency of AMD’s supply chain; AMD’s ability to rely on third party supply-chain logistics functions; AMD’s stock price volatility; worldwide political conditions; unfavorable currency exchange rate fluctuations; AMD’s ability to effectively control the sales of its products on the gray market; AMD’s ability to adequately protect its technology or other intellectual property; current and future claims and litigation; potential tax liabilities; and the impact of environmental laws, conflict minerals-related provisions and other laws or regulations. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s Quarterly Report on Form 10-Q for the quarter ended September 26, 2020.

 

©2020 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, EPYC, AMD Instinct, Infinity Fabric, ROCm and combinations thereof are trademarks of Advanced Micro Devices, Inc. The OpenMP name and the OpenMP logos are registered trademarks of the OpenMP Architecture Review Board. PCIe is a registered trademark of PCI-SIG Corporation. Python is a trademark of the Python Software Foundation. PyTorch is a trademark or registered trademark of PyTorch. TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.

  1. Calculations conducted by AMD Performance Labs as of Sep 18, 2020 for the AMD Instinct™ MI100 (32GB HBM2 PCIe® card) accelerator at 1,502 MHz peak boost engine clock resulted in 11.54 TFLOPS peak double precision (FP64), 46.1 TFLOPS peak single precision matrix (FP32), 23.1 TFLOPS peak single precision (FP32), and 184.6 TFLOPS peak half precision (FP16) peak theoretical floating-point performance. Published results on the Nvidia Ampere A100 (40GB) GPU accelerator resulted in 9.7 TFLOPS peak double precision (FP64), 19.5 TFLOPS peak single precision (FP32), and 78 TFLOPS peak half precision (FP16) theoretical floating-point performance. Server manufacturers may vary configuration offerings yielding different results. MI100-03
  2. Calculations performed by AMD Performance Labs as of Sep 3, 2020 on the AMD Instinct™ MI100 (32GB HBM2 PCIe® card) accelerator at 1,502 MHz peak engine clock resulted in 46.1 TFLOPS peak theoretical single precision (FP32 Matrix) floating-point performance. The Nvidia Ampere A100 (40GB) GPU accelerator published results are 19.5 TFLOPS peak single precision (FP32) floating-point performance. Nvidia results found at: https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-ampere-architecture-whitepaper.pdf. Server manufacturers may vary configuration offerings yielding different results. MI100-01
  3. Calculations performed by AMD Performance Labs as of Sep 18, 2020 for the AMD Instinct™ MI100 accelerator at 1,502 MHz peak boost engine clock resulted in 184.57 TFLOPS peak theoretical half precision (FP16) and 46.14 TFLOPS peak theoretical single precision (FP32 Matrix) floating-point performance. The results calculated for the Radeon Instinct™ MI50 GPU at 1,725 MHz peak engine clock resulted in 26.5 TFLOPS peak theoretical half precision (FP16) and 13.25 TFLOPS peak theoretical single precision (FP32 Matrix) floating-point performance. Server manufacturers may vary configuration offerings yielding different results. MI100-04
  4. Calculations as of Sep 18, 2020. AMD Instinct™ MI100 accelerators built on AMD CDNA technology support PCIe® Gen4, providing up to 64 GB/s peak theoretical transport data bandwidth from CPU to GPU per card. AMD Instinct™ MI100 accelerators include three Infinity Fabric™ links, providing up to 276 GB/s peak theoretical GPU-to-GPU or peer-to-peer (P2P) transport rate bandwidth performance per GPU card (92 GB/s per link × 3 links per GPU = 276 GB/s). Combined with PCIe Gen4 support, this provides an aggregate GPU card I/O peak bandwidth of up to 340 GB/s. Four-GPU hives provide up to 552 GB/s peak theoretical P2P performance, and dual four-GPU hives in a server provide up to 1.1 TB/s total peak theoretical direct P2P performance per server. With AMD Infinity Fabric link technology not enabled, four-GPU hives provide up to 256 GB/s peak theoretical P2P performance with PCIe® 4.0. Server manufacturers may vary configuration offerings yielding different results. MI100-07
  5. Calculations by AMD Performance Labs as of Oct 5, 2020 for the AMD Instinct™ MI100 accelerator designed with AMD CDNA 7nm FinFET process technology at 1,200 MHz peak memory clock resulted in 1.2288 TB/s peak theoretical memory bandwidth performance. The results calculated for the Radeon Instinct™ MI50 GPU designed with “Vega” 7nm FinFET process technology with 1,000 MHz peak memory clock resulted in 1.024 TB/s peak theoretical memory bandwidth performance. CDNA-04
  6. Works with PCIe® Gen 4.0 and Gen 3.0 compliant motherboards. Performance may vary from motherboard to motherboard. Refer to system or motherboard provider for individual product performance and features.
  7. Testing conducted by AMD Performance Labs as of October 30, 2020, on three platforms and software versions typical for the launch dates of the Radeon Instinct MI25 (2018), MI50 (2019) and AMD Instinct MI100 GPU (2020) running the benchmark application Quicksilver. MI100 platform (2020): Gigabyte G482-Z51-00 system comprised of Dual Socket AMD EPYC™ 7702 64-Core Processor, AMD Instinct™ MI100 GPU, ROCm™ 3.10 driver, 512GB DDR4, RHEL 8.2. MI50 platform (2019): Supermicro® SYS-4029GP-TRT2 system comprised of Dual Socket Intel Xeon® Gold® 6132, Radeon Instinct™ MI50 GPU, ROCm 2.10 driver, 256GB DDR4, SLES15SP1. MI25 platform (2018): Supermicro SYS-4028GR-TR2 system comprised of Dual Socket Intel Xeon CPU E5-2690, Radeon Instinct™ MI25 GPU, ROCm 2.0.89 driver, 246GB DDR4 system memory, Ubuntu 16.04.5 LTS. MI100-14
  8. Testing conducted by AMD Performance Labs as of October 30, 2020, on three platforms and software versions typical for the launch dates of the Radeon Instinct MI25 (2018), MI50 (2019) and AMD Instinct MI100 GPU (2020) running the benchmark application TensorFlow ResNet 50 FP16 batch size 128. MI100 platform (2020): Gigabyte G482-Z51-00 system comprised of Dual Socket AMD EPYC™ 7702 64-Core Processor, AMD Instinct™ MI100 GPU, ROCm™ 3.10 driver, 512GB DDR4, RHEL 8.2. MI50 platform (2019): Supermicro® SYS-4029GP-TRT2 system comprised of Dual Socket Intel Xeon® Gold® 6254, Radeon Instinct™ MI50 GPU, ROCm 3.0.6 driver, 338GB DDR4, Ubuntu® 16.04.6 LTS. MI25 platform (2018): Supermicro SYS-4028GR-TR2 system comprised of Dual Socket Intel Xeon CPU E5-2690, Radeon Instinct™ MI25 GPU, ROCm 2.0.89 driver, 246GB DDR4 system memory, Ubuntu 16.04.5 LTS. MI100-15
