Qualcomm still has the most efficient AI accelerator for image processing

The company’s recent design wins should help Qualcomm turn MLPerf results into sales. AI energy efficiency matters!

Since Qualcomm announced its first-generation cloud-to-edge AI processor, the Qualcomm Cloud AI 100, the company has sat atop the leaderboard for energy efficiency, a key customer requirement as the edge becomes part of a connected, intelligent network. The latest benchmarks from MLCommons, an industry consortium of more than 100 companies, show that the Qualcomm platform remains the industry’s most energy-efficient AI accelerator for image processing.

In the latest results, Qualcomm partners Foxconn, Thundercomm, Inventec, Dell, HPE and Lenovo all submitted leadership results using the Qualcomm Cloud AI 100 “Standard” chip, which delivers 350 trillion operations per second (TOPS). The partners are targeting, at least for now, edge image processing, where power consumption is critical, while Qualcomm Technologies submitted updated results for the “Pro” SKU, which targets inference processing in edge clouds.

The fact that Foxconn (Gloria) and Inventec (Heimdall), suppliers to cloud services companies, submitted results to MLCommons tells us that Qualcomm could benefit in the Asian cloud market, while support from Dell, Lenovo and HPE indicates worldwide interest in the Qualcomm part for data centers and “edge clouds.”

The results

The current Cloud AI 100 demonstrates significantly higher efficiency for image processing compared to cloud and edge competitors. This makes sense because the accelerator’s legacy is the Qualcomm AI engine of the high-end Snapdragon mobile processor, which provides AI for mobile handsets where imaging is the primary application. Nevertheless, the Qualcomm platform offers best-in-class performance efficiency for the BERT-99 model, which is used in natural language processing.

Figure 3 shows the performance of the Qualcomm Cloud AI 100 in edge image processing (ResNet-50) compared to the NVIDIA Jetson Orin processor. We believe Qualcomm will extend the next-generation Cloud AI design beyond image processing, as edge clouds begin to require language processing and recommendation engines.

Unsurprisingly, Qualcomm’s power efficiency doesn’t come at the expense of high performance, unlike many startups targeting this emerging market. As shown in Figure 4, a server with five cards, each consuming 75 watts, delivers nearly 50% more performance than two NVIDIA A100s, each consuming 300 watts. In an analysis of the potential economic benefits of this energy efficiency, we estimate that a large data center could save tens of millions of dollars annually in energy and capital costs by deploying the Qualcomm Cloud AI 100.
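To see why the Figure 4 comparison favors Qualcomm so strongly, it helps to work the perf-per-watt arithmetic. The sketch below uses the card counts and wattages cited above; the absolute throughput number is a placeholder, not an official MLPerf result, since only the ~50% relative advantage is stated.

```python
# Illustrative perf-per-watt comparison using the figures cited above.
# The A100 throughput value is a placeholder, not an official MLPerf result;
# only the card counts, wattages, and ~50% performance gap come from the text.

def perf_per_watt(throughput_ips, num_cards, watts_per_card):
    """Inferences per second per watt for a multi-card server."""
    return throughput_ips / (num_cards * watts_per_card)

a100_throughput = 100_000               # hypothetical inf/s for 2x A100
qc_throughput = a100_throughput * 1.5   # ~50% more, per Figure 4

qc_eff = perf_per_watt(qc_throughput, 5, 75)       # 5 x 75 W  = 375 W total
a100_eff = perf_per_watt(a100_throughput, 2, 300)  # 2 x 300 W = 600 W total

print(f"Cloud AI 100: {qc_eff:.0f} inf/s/W, A100: {a100_eff:.0f} inf/s/W, "
      f"advantage: {qc_eff / a100_eff:.1f}x")
```

Whatever the absolute throughput, the ratio is fixed by the cited figures: 1.5x the performance at 375 W versus 600 W works out to roughly a 2.4x perf-per-watt advantage, which is what drives the energy-cost savings estimated above.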


Qualcomm is still the champion when it comes to energy efficiency, but the company faces two potential issues. First, while NVIDIA’s Hopper-based H100 showed the fastest inference performance submitted to MLCommons, that platform is not yet generally available, and NVIDIA has not submitted any power measurements. We suspect the H100 may eclipse Qualcomm’s efficiency crown, but we also suspect it would be a fleeting claim to fame, as we will likely see a second-generation Qualcomm part in a similar timeframe, or perhaps a few months later. Second, while the Qualcomm Cloud AI 100 has outstanding efficiency and performance for image processing, it doesn’t blow past NVIDIA’s A100 for NLP, and we have not yet seen performance data for other models such as recommendation engines. Therefore, while an edge AI processor typically only needs image analysis, a large data center may choose to wait for the more uniform model coverage that the next generation might provide.

Disclosures: This article expresses the opinions of the authors and should not be taken as advice on buying or investing in the companies mentioned. Cambrian AI Research is fortunate to have many, if not most, semiconductor companies as customers, including Blaize, Cerebras, D-Matrix, Esperanto, FuriosaAI, Graphcore, GML, IBM, Intel, Mythic, NVIDIA, Qualcomm Technologies, SiFive, SiMa.ai, Synopsys and Tenstorrent. We have no investment position in any of the companies mentioned in this article and do not plan to initiate any in the near future. For more information, please visit our website at https://cambrian-AI.com.
