Qualcomm announces AI chips to compete with AMD and Nvidia

Qualcomm announced on Monday that it will release new artificial intelligence accelerator chips, marking fresh competition for Nvidia, which has so far dominated the market for AI semiconductors.
The AI chips are a shift for Qualcomm, which until now has focused on semiconductors for wireless connectivity and mobile devices, not big data centers.
Qualcomm said that both the AI200, which will go on sale in 2026, and the AI250, planned for 2027, can come in full, liquid-cooled, rack-scale server systems.
Qualcomm is matching Nvidia and AMD, which offer their graphics processing units, or GPUs, in full-rack systems that allow up to 72 chips to function as a single computer. AI labs need that computing power to run the most advanced models.
Qualcomm’s data center chips are based on the AI components in its smartphone chips, called Hexagon Neural Processing Units, or NPUs.
“We wanted to prove ourselves in other domains first, and once we built our strength there, it was very easy for us to move to the data center level,” Durga Malladi, Qualcomm’s general manager for data center and edge, said on a call with reporters last week.
Qualcomm’s entry into the data center world marks new competition in one of the technology’s fastest-growing markets: equipment for new AI-centric server farms.
McKinsey estimates that nearly $6.7 trillion in capital expenditures will be spent on data centers through 2030, with most going to systems based around AI chips.
The industry is dominated by Nvidia, whose GPUs currently hold more than 90% of the market and whose sales give the company a market cap of more than $4.5 trillion. Nvidia’s chips were used to train the large language models behind OpenAI’s ChatGPT.
But companies like OpenAI are looking for alternatives, and earlier this month the startup announced plans to buy chips from AMD, the No. 2 GPU maker, and potentially take a stake in the company. Other companies, such as Google, Amazon and Microsoft, are also developing their own AI accelerators for their cloud services.
Qualcomm said its chips are focused on inference, or running AI models, rather than training, which is how labs such as OpenAI create new AI capabilities by processing terabytes of data.
The chipmaker said its rack-scale system would ultimately cost less to operate for customers such as cloud service providers. A rack consumes 160 kilowatts, comparable to the power draw of some Nvidia GPU racks.
Malladi said Qualcomm will sell its AI chips and other parts separately, especially to clients like hyperscalers who prefer to design their own racks. He said other AI chip companies, such as Nvidia or AMD, could become customers for some of Qualcomm’s data center parts, such as its central processing unit, or CPU.
“What we’ve tried to do is make sure our customers are in a position to either take all of it or say, ‘I’m going to mix and match,’” Malladi said.
The company declined to comment on the cost of the chips, cards or racks, or how many NPUs can be installed in a single rack. In May, Qualcomm announced a partnership with Saudi Arabia’s Humain to supply data centers in the region with AI inference chips; Humain will be a customer, committing to deploy as many systems as can use 200 megawatts of power.
Qualcomm said its AI chips have advantages over other accelerators in terms of power consumption, cost of ownership and a new approach to handling memory. Its AI cards support 768 gigabytes of memory, which it says is more than Nvidia’s and AMD’s offerings.
Qualcomm’s design for an AI server, the AI200. (Image: Qualcomm)