NVIDIA launches world’s only petascale workgroup server

NVIDIA recently announced the NVIDIA DGX Station A100, the world’s only petascale workgroup server. The second generation of the groundbreaking AI system, DGX Station A100 accelerates demanding machine learning and data science workloads for teams working in corporate offices, research facilities, labs or home offices everywhere.

Delivering 2.5 petaflops of AI performance, DGX Station A100 is the only workgroup server with four of the latest NVIDIA A100 Tensor Core GPUs fully interconnected with NVIDIA NVLink, providing up to 320GB of GPU memory to speed breakthroughs in enterprise data science and AI.
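Those headline figures line up with the per-GPU specifications NVIDIA publishes for the A100 (an inference from the A100 datasheet, not a claim made in this announcement): each A100 80GB delivers up to 624 TFLOPS of FP16 Tensor Core throughput with structural sparsity and carries 80GB of high-bandwidth memory, so

$$
4 \times 624\ \text{TFLOPS} \approx 2.5\ \text{PFLOPS},
\qquad
4 \times 80\ \text{GB} = 320\ \text{GB}.
$$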

DGX Station A100 is also the only workgroup server that supports NVIDIA’s Multi-Instance GPU (MIG) technology. With MIG, a single DGX Station A100 provides up to 28 separate GPU instances to run parallel jobs and support multiple users without impacting system performance.
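Each A100 can be partitioned into as many as seven MIG instances, which is where the figure of 28 comes from: 4 GPUs × 7 instances per GPU. As a rough sketch of how one user's job would target a single slice, the Python example below assumes a MIG-enabled system with PyTorch installed; the MIG UUID is a placeholder for the identifiers a real system reports via nvidia-smi -L.

    import os

    # Placeholder MIG identifier -- on a real system, list the actual MIG
    # device UUIDs with: nvidia-smi -L
    os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    import torch  # imported after CUDA_VISIBLE_DEVICES is set so the restriction applies

    if torch.cuda.is_available():
        # The single MIG slice visible to this process shows up as cuda:0.
        device = torch.device("cuda:0")
        print("Running on:", torch.cuda.get_device_name(device))
        x = torch.randn(2048, 2048, device=device)
        print("Matmul checksum:", (x @ x).sum().item())
    else:
        print("No CUDA device visible to this process.")

Because each MIG instance gets its own memory and compute slice, several such jobs from different users can run side by side on one DGX Station A100 without contending for the same resources.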

“DGX Station A100 brings AI out of the data center with a server-class system that can plug in anywhere,” said Charlie Boyle, vice president and general manager of DGX systems at NVIDIA. “Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment.”

While DGX Station A100 does not require data-center-grade power or cooling, it is a server-class system that features the same remote management capabilities as NVIDIA DGX A100 data center systems. System administrators can perform management tasks over a remote connection even when data scientists and researchers are working at home or in labs.

DGX Station A100 is available with four 80GB or 40GB NVIDIA A100 Tensor Core GPUs, providing options for data science and AI research teams to select a system according to their unique workloads and budgets.

For complex conversational AI models such as BERT Large, DGX Station A100 delivers more than 4x the inference performance of the previous-generation DGX Station and nearly a 3x boost in training performance.

For advanced data center workloads, DGX A100 systems will be available with the new NVIDIA A100 80GB GPUs, doubling GPU memory capacity to 640GB per system to enable AI teams to boost accuracy with larger datasets and models.

The new NVIDIA DGX A100 640GB systems can also be integrated into the NVIDIA DGX SuperPOD Solution for Enterprise, allowing organizations to build, train and deploy massive AI models on turnkey AI supercomputers available in units of 20 DGX A100 systems.

NVIDIA DGX Station A100 and NVIDIA DGX A100 640GB systems will be available this quarter through NVIDIA Partner Network resellers worldwide. An upgrade option is available for NVIDIA DGX A100 320GB customers.
