
Still waiting for Exascale: Japan’s Fugaku outperforms all competition once again

FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.— The 58th edition of the twice-yearly TOP500 list saw little change in the Top10. The Microsoft Azure system called Voyager-EUS2 was the only machine to shake up the top spots, claiming No. 10. Based on AMD EPYC processors with 48 cores running at 2.45 GHz, working together with NVIDIA A100 GPUs with 80 GB of memory, Voyager-EUS2 also utilizes a Mellanox HDR InfiniBand network for data transfer.

While there were no other changes to the positions of the systems in the Top10, Perlmutter at NERSC improved its performance to 70.9 Pflop/s. Housed at the Lawrence Berkeley National Laboratory, Perlmutter's performance gain was not enough to move it from its previously held No. 5 spot.

Fugaku continues to hold the No. 1 position that it first earned in June 2020. Its HPL benchmark score is 442 Pflop/s, three times the performance of the No. 2 system, Summit. Installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, it was co-developed by RIKEN and Fujitsu and is based on Fujitsu's custom Arm-based A64FX processor. Fugaku also uses Fujitsu's Tofu D interconnect to transfer data between nodes.

In single or further-reduced precision, which are often used in machine learning and AI applications, Fugaku has a peak performance above 1,000 Pflop/s (1 Exaflop/s). As a result, Fugaku is often introduced as the first "Exascale" supercomputer.