Qualcomm Cloud AI 100, AMD EPYC 7003 Series Processor, and Gigabyte server solutions break the Peta Operations Per Second barrier for AI Inferencing
There is no doubt that AI is the driving force behind next-generation consumer experiences. Virtually every experience on a mobile device involves AI in some way, whether it’s scrolling through your favorite social apps or online shopping, where recommendations can be driven by tens of thousands of AI inferences. So what happens when these platforms serve millions of users in a given day? It takes racks upon racks of powerful servers to deliver the AI inferencing performance required to keep these platforms humming along.
Today, Qualcomm Technologies is enabling a powerful server rack that meets these high-performance requirements by pairing with the latest AMD EPYC 7003 Series processors and Gigabyte’s latest G292-Z43 server solutions. This combination of hardware expertise delivers incredible performance and raises the bar for the modern data center. Qualcomm Technologies’ cutting-edge Qualcomm Cloud AI 100 fits perfectly into Gigabyte’s server system and is capable of driving demanding AI use cases in fields such as high-speed data analysis, personalized recommendations, smart cities, 5G communications, and more.
The Gigabyte G292-Z43 server pairs two AMD EPYC 7003 Series processors with multiple Qualcomm Cloud AI 100 cards for computationally intensive AI inferencing workloads. A single Qualcomm Cloud AI 100 Inference Accelerator delivers up to 400 TOPS with breakthrough performance per watt. A Gigabyte server can host up to 16 Qualcomm Cloud AI 100 inferencing cards, which cumulatively deliver up to 6.4 Peta OPS (400 TOPS x 16; one Peta OPS is 1,000 TOPS). This marks the first time a Qualcomm Technologies AI-based solution has broken the Peta OPS barrier. And it gets even better: a server rack can host 19 or more of these server units, easily exceeding 100 Peta OPS. That is a lot of Qualcomm Technologies AI muscle. See the infographic below for how it is configured.
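As a quick sanity check, the peak-compute math above can be sketched as a few lines of Python. These are the article’s peak figures, not measured results:

```python
# Back-of-the-envelope check of the peak compute figures above.
TOPS_PER_CARD = 400        # peak TOPS per Qualcomm Cloud AI 100 card
CARDS_PER_SERVER = 16      # max inferencing cards per Gigabyte server
SERVERS_PER_RACK = 19      # "19 or more" server units per rack

server_peta_ops = TOPS_PER_CARD * CARDS_PER_SERVER / 1000  # 1 Peta OPS = 1,000 TOPS
rack_peta_ops = server_peta_ops * SERVERS_PER_RACK

print(f"Per server: {server_peta_ops:.1f} Peta OPS")  # 6.4 Peta OPS
print(f"Per rack:   {rack_peta_ops:.1f} Peta OPS")    # 121.6 Peta OPS
```

At 19 servers the rack already lands well past the 100 Peta OPS mark, with headroom as more units are added.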
To put this into context, a single 400 TOPS HHHL Qualcomm Cloud AI 100 inference card can drive around 19,000 ResNet-50 images per second. That translates to more than 6 million images per second on one server rack. This kind of AI performance can enhance, extend, and scale AI experiences around the world. We want to thank AMD and Gigabyte for this amazing achievement.
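The rack-level throughput follows the same arithmetic, assuming near-linear scaling across cards and servers; real-world throughput depends on batch size, precision, and model configuration:

```python
# Rough ResNet-50 throughput scaling across a server and a rack.
IMAGES_PER_SEC_PER_CARD = 19_000  # per-card ResNet-50 figure from the article
CARDS_PER_SERVER = 16
SERVERS_PER_RACK = 19             # "19 or more" server units per rack

per_server = IMAGES_PER_SEC_PER_CARD * CARDS_PER_SERVER  # 304,000 images/sec
per_rack = per_server * SERVERS_PER_RACK                 # 5,776,000 images/sec

print(f"Per server: {per_server:,} images/sec")
print(f"Per rack:   {per_rack:,} images/sec")
```

At exactly 19 servers this sketch gives roughly 5.8 million images per second; with 20 servers, consistent with the "19 or more" figure above, a rack passes the 6 million mark.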
Check out these photos of the Qualcomm Cloud AI 100 cards with the AMD EPYC 7003 Series processor-powered Gigabyte servers ready to rock and roll.