Qualcomm Unveils AI200 and AI250 Racks for the Data Center

Qualcomm Technologies announced the launch of its next-generation AI inference-optimized solutions for data centers: the Qualcomm AI200 and AI250 chip-based accelerator cards and racks. Building on the company's NPU technology leadership, these solutions offer rack-scale performance and superior memory capacity for fast generative AI inference at high performance per dollar per watt, marking a major leap forward in enabling scalable, efficient, and flexible generative AI across industries.

Qualcomm AI200 introduces a purpose-built, rack-level AI inference solution designed to deliver low total cost of ownership (TCO) and optimized performance for large language model (LLM) and large multimodal model (LMM) inference and other AI workloads. It supports 768 GB of LPDDR per card, providing higher memory capacity at lower cost and enabling exceptional scale and flexibility for AI inference.
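To put 768 GB per card in perspective, a rough back-of-envelope sketch follows. The model size, layer count, context length, batch size, and precisions below are illustrative assumptions chosen for the example, not Qualcomm specifications.

```python
# Back-of-envelope check: does a given LLM fit in 768 GB of card memory?
# All model parameters below are illustrative assumptions, not Qualcomm specs.

CARD_MEMORY_GB = 768  # per-card LPDDR capacity from the announcement

def weights_gb(num_params_b: float, bytes_per_param: float) -> float:
    """Memory for model weights, e.g. 2 bytes/param for FP16, 1 for INT8."""
    return num_params_b * bytes_per_param

def kv_cache_gb(layers: int, hidden: int, seq_len: int, batch: int,
                bytes_per_elem: float = 2.0) -> float:
    """KV cache: two tensors (K and V) of [batch, seq_len, hidden] per layer.
    Models using grouped-query attention need only a fraction of this."""
    return 2 * layers * batch * seq_len * hidden * bytes_per_elem / 1e9

# Hypothetical 70B-parameter model served in INT8 with an FP16 KV cache.
w = weights_gb(num_params_b=70, bytes_per_param=1)           # ~70 GB
kv = kv_cache_gb(layers=80, hidden=8192, seq_len=32_768, batch=8)

print(f"weights: {w:.0f} GB, KV cache: {kv:.0f} GB, "
      f"fits: {w + kv <= CARD_MEMORY_GB}")
# -> weights: 70 GB, KV cache: 687 GB, fits: True
```

Under these assumptions, the large per-card capacity is what makes long contexts and larger batches feasible without sharding the KV cache across many devices.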

The Qualcomm AI250 solution will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads by delivering greater than 10x higher effective memory bandwidth at much lower power consumption. This enables disaggregated AI inference, in which different phases of serving are split across hardware so that each resource is utilized efficiently while meeting customer performance and cost requirements.
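The announcement does not detail Qualcomm's disaggregation scheme. As a minimal conceptual sketch of the general technique, the compute-bound prefill phase and the memory-bandwidth-bound decode phase run on separate worker pools and hand off the KV cache between them; every name and the dummy "model" step here are hypothetical.

```python
# Conceptual sketch of disaggregated LLM serving: prefill and decode run on
# separate worker pools so each can be provisioned for its own bottleneck
# (prefill is compute-bound; decode is memory-bandwidth-bound).
# Illustrative only; this is not Qualcomm's implementation.

from dataclasses import dataclass

@dataclass
class KVCache:                 # stand-in for per-request attention state
    tokens: list[int]

class PrefillWorker:
    def prefill(self, prompt_tokens: list[int]) -> KVCache:
        # Runs the full prompt through the model once, producing the KV cache.
        return KVCache(tokens=list(prompt_tokens))

class DecodeWorker:
    def decode(self, cache: KVCache, max_new: int) -> list[int]:
        out = []
        for _ in range(max_new):
            # Dummy stand-in for one model step that reads the whole KV cache.
            nxt = (cache.tokens[-1] + 1) % 50_000
            cache.tokens.append(nxt)
            out.append(nxt)
        return out

# A router hands the KV cache from the prefill pool to the decode pool; in a
# real rack that transfer would ride the scale-up/scale-out fabric.
cache = PrefillWorker().prefill([101, 2023, 2003])
print(DecodeWorker().decode(cache, max_new=5))
```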


Both rack solutions feature direct liquid cooling for thermal efficiency, PCIe for scale-up, Ethernet for scale-out, confidential computing for secure AI workloads, and a rack-level power consumption of 160 kW.

“With Qualcomm AI200 and AI250, we’re redefining what’s possible for rack-scale AI inference. These innovative new AI infrastructure solutions empower customers to deploy generative AI at unprecedented TCO, while maintaining the flexibility and security modern data centers demand,” said Durga Malladi, SVP & GM, Technology Planning, Edge Solutions & Data Center, Qualcomm Technologies, Inc. “Our rich software stack and open ecosystem support make it easier than ever for developers and enterprises to integrate, manage, and scale already trained AI models on our optimized AI inference solutions. With seamless compatibility for leading AI frameworks and one-click model deployment, Qualcomm AI200 and AI250 are designed for frictionless adoption and rapid innovation.”

Qualcomm's hyperscaler-grade AI software stack spans end to end, from the application layer to the system-software layer, and is optimized for AI inference. The stack supports leading machine learning (ML) frameworks, inference engines, generative AI frameworks, and LLM/LMM inference optimization techniques such as disaggregated serving. Developers benefit from seamless model onboarding and one-click deployment of Hugging Face models via Qualcomm Technologies' Efficient Transformers Library and the Qualcomm AI Inference Suite. The software provides ready-to-use AI applications and agents, comprehensive tools, libraries, APIs, and services for operationalizing AI.
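The announcement does not show the Qualcomm API itself, so the sketch below uses the plain Hugging Face transformers flow that such "one-click" onboarding would wrap; the model name is only an example, and the Qualcomm-specific compile/deploy step is deliberately left out rather than guessed at.

```python
# Standard Hugging Face onboarding flow that a one-click deployment path would
# wrap. This uses only the public transformers API; the model is an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # any already-trained Hugging Face causal-LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Generative AI inference on a rack of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# In Qualcomm's flow, a library such as its Efficient Transformers Library
# would take a checkpoint like this and compile it for the accelerator; that
# step is not reproduced here because its API is not in the announcement.
```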

Qualcomm AI200 and AI250 are expected to be commercially available in 2026 and 2027, respectively.
