Purpose-built for the unique demands of AI.
NVIDIA DGX SuperPOD™ provides leadership-class AI infrastructure with agile, scalable performance for the most challenging AI training and inference workloads. Available with a choice of NVIDIA Blackwell-powered compute options in the NVIDIA DGX™ platform, DGX SuperPOD isn’t just a collection of hardware, but a full-stack data center platform that includes industry-leading computing, storage, networking, software, and infrastructure management optimized to work together and provide maximum performance at scale.
A ready-to-run, turnkey AI supercomputer, design-optimized with high-performance compute, networking, storage, and software integration.
Scaling to tens of thousands of NVIDIA GPUs, NVIDIA DGX SuperPOD tackles training and inference for state-of-the-art trillion-parameter generative AI models.
Includes enterprise-grade cluster and workload management, libraries that accelerate compute, storage, and network infrastructure, and an operating system optimized for AI workloads.
Extensively tested and pushed to its limits with real-world enterprise AI workloads, so you don’t have to worry about application performance.
Guidance and support throughout the infrastructure lifecycle, with access to experts covering the full stack to keep AI workloads running at peak performance.
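To make the cluster and workload management layer concrete: DGX SuperPOD-class clusters are commonly operated with a batch scheduler such as Slurm, so a multi-node training job might be submitted roughly as sketched below. This is an illustrative example only, not official documentation; the partition name, script name, and resource counts are hypothetical placeholders.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a multi-node training job on a
# DGX SuperPOD-style cluster. Partition, job, and script names are
# placeholders -- consult your cluster's documentation for real values.
#SBATCH --job-name=llm-pretrain
#SBATCH --nodes=4                 # four DGX nodes
#SBATCH --ntasks-per-node=8       # one task per GPU
#SBATCH --gpus-per-node=8         # eight GPUs per DGX node
#SBATCH --time=04:00:00
#SBATCH --partition=batch         # hypothetical partition name

# Distributed launchers typically rendezvous via a master node address
# and port; derive the master from the first allocated node.
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=29500

# Launch one training process per GPU across all allocated nodes.
srun python train.py --config config.yaml   # train.py is a placeholder
```

In practice, the cluster management software provisions the nodes, monitors their health, and schedules jobs like this one across the fabric; the script above only shows the user-facing submission step.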
NVIDIA DGX SuperPOD offers a turnkey AI data center solution for organizations building AI factories, seamlessly delivering world-class computing, software tools, expertise, and continuous innovation. With a choice of compute options, NVIDIA DGX SuperPOD enables every enterprise to integrate AI into their business and create innovative applications rather than struggling with platform complexity.
Always featuring the best of NVIDIA AI innovation, NVIDIA DGX SuperPOD is offered with the full range of NVIDIA Blackwell-powered compute options from the NVIDIA DGX platform.
DGX SuperPOD with NVIDIA DGX B200 Systems is ideal for scaled infrastructure supporting enterprise teams of any size with complex, diverse AI workloads, such as building large language models, optimizing supply chains, or extracting intelligence from mountains of data.
DGX SuperPOD with NVIDIA DGX H200 Systems is best for scaled infrastructure to support the largest, most complex or transformer-based AI workloads, such as LLMs with the NVIDIA NeMo framework and deep learning recommender systems.
NVIDIA Mission Control streamlines AI factory operations, delivering instant agility, infrastructure resiliency, and hyperscale efficiency, accelerating AI experimentation for enterprises with full-stack software intelligence.
We are pioneering homegrown LLMs for the Japanese language, aiming for 390 billion parameters. This empowers businesses with finely tuned AI solutions tailored to their culture and practices, utilizing DGX SuperPOD and the NVIDIA AI Enterprise software stack for seamless development and deployment.
— Ashiq Khan, Vice President and Head of the Unified Cloud and Platform Division, SoftBank Corp.
We trained our LLMs more effectively with NVIDIA DGX SuperPOD’s powerful performance — as well as NeMo’s optimized algorithms and 3D parallelism techniques. We considered using other platforms, but it was difficult to find an alternative that provides full-stack environments — from the hardware level to the inference level.
— Hwijung Ryu, LLM Development Team Lead, KT Corporation
The 210 petaFLOPS Param Siddhi AI [supercomputer] equipped with DGX SuperPOD and indigenously developed HPC-AI engine, HPC-AI software frameworks, and cloud platform by C-DAC will accelerate experiments for solving India-specific grand challenges using science and engineering.
— Dr. Hemant Darbari, Director General, Centre for Development of Advanced Computing (C-DAC)
This will allow researchers to perform quantum-accurate molecular simulations of proteins to help find cures to diseases like COVID-19. What would’ve taken more than 6,000 years will now only take a day.
— Adrian Roitberg, Professor of Chemistry, University of Florida
The DGX SuperPOD is helping NAVER CLOVA to build state-of-the-art language models for Korean and Japanese markets and evolve into a strong AI platform player in the global market.
— Suk Geun SG Chung, Head of CLOVA CIC, NAVER Corporation
Looking for a turnkey, ready-to-run AI development platform? Equinix Private AI with NVIDIA DGX leverages Equinix colocation data centers, network connectivity, and managed services to host and operate NVIDIA DGX BasePOD™ and DGX SuperPOD.
NVIDIA Eos, ranked #9 on the TOP500 list, is a large-scale NVIDIA DGX SuperPOD that enables AI innovation at NVIDIA, helping researchers take on challenges that were previously impossible.
From landing top spots on supercomputing lists to outperforming all other AI infrastructure options at scale in MLPerf benchmarks, the NVIDIA DGX platform is at the forefront of innovation. Learn why customers choose NVIDIA DGX for their AI projects.
DGX SuperPOD with DGX GB200 systems is liquid-cooled, rack-scale AI infrastructure with intelligent predictive management capabilities for training and inferencing trillion-parameter generative AI models, powered by NVIDIA GB200 Grace Blackwell Superchips.
The fastest way to get started using the DGX platform is with NVIDIA DGX Cloud, a serverless AI-training-as-a-service platform purpose-built for enterprises developing generative AI.
NVIDIA DGX SuperPOD delivers an integrated AI infrastructure solution with high-performance storage that has been rigorously tested and certified by NVIDIA to handle the most demanding AI workloads, ensuring optimal performance.
NVIDIA Enterprise Services provides support, education, and professional services for your DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.
Achieve cutting-edge AI breakthroughs faster with specialized technical training, offered exclusively to DGX customers by the AI experts at NVIDIA’s Deep Learning Institute (DLI).
NVIDIA DGX™ systems with DDN A3I are the definitive path to production AI for customers worldwide, across generative AI, autonomous vehicles, government, life sciences, financial services, and more. Our integrated solution provides unlimited scaling and improved performance as clusters grow, for faster iteration and, most importantly, faster business innovation. The combined expertise gives customers the fastest path to a high-performance AI data center, with 10X the performance at a fraction of the power of competing solutions.
IBM Storage Scale System is an NVIDIA-certified ultra-performance solution that drives AI innovation and scales seamlessly from NVIDIA DGX BasePOD™ to the largest DGX SuperPOD™ installations. Deployed by thousands of organizations for GPU acceleration and AI, IBM Storage Scale System delivers six nines of data reliability, cyber resiliency, and multi-protocol data pipelines for the most demanding enterprises. Software-defined IBM Storage integrates and tiers your data, so you can leverage a global data platform to bring value to your organization and transform data-intensive AI workloads into actionable insights.
Achieve limitless scale and performance with the VAST Data Platform, making large-scale AI simpler, faster, and easier to manage. VAST is deployed at some of the world's largest supercomputing centers and leading research institutions. VAST’s unique combination of massively parallel architecture, enterprise-grade security, ease of use, and revolutionary data reduction is enabling more organizations to become AI-driven enterprises. VAST’s deep integration with NVIDIA technologies including NVIDIA® BlueField® and GPUDirect® Storage eliminates complexity and streamlines AI pipelines to accelerate insights.
NetApp, the intelligent data infrastructure company, delivers enterprise-grade storage hardware and services that meet the demanding requirements of NVIDIA DGX SuperPOD™, NVIDIA DGX BasePOD™, and other NVIDIA accelerated architectures. NetApp ONTAP® allows customers to build AI factories with seamless, silo-free data access across hybrid multicloud environments. For organizations accelerating their AI infrastructure, NVIDIA DGX SuperPOD with all the benefits of NetApp ONTAP storage offers an enterprise-grade option alongside the NetApp EF600 with BeeGFS certified storage for NVIDIA DGX SuperPOD.
Dell PowerScale delivers an AI-ready data platform that accelerates data processing and AI training—now validated on NVIDIA DGX SuperPOD™. PowerScale's scalable architecture enables effortless expansion, empowering organizations to refine generative AI models and safeguard data through robust security features. With high-speed Ethernet connectivity, PowerScale accelerates data access to NVIDIA DGX™ systems, minimizing transfer times and maximizing storage throughput. Smart scale-out capabilities, including the Multipath Client Driver and NVIDIA® GPUDirect®, ensure organizations can meet high-performance thresholds for accelerated AI model training and inference.
Optimize your data infrastructure investments and push the boundaries of AI innovation with the WEKApod Data Platform Appliance certified for NVIDIA DGX SuperPOD™. Pairing NVIDIA DGX™ infrastructure and networking technologies with the WEKA® Data Platform delivers enhanced performance for diverse AI workloads and fosters faster model training and deployment. The advanced, scale-out architecture transforms stagnant data storage silos into dynamic data pipelines that fuel GPUs more efficiently and power AI workloads seamlessly and sustainably, on premises and in the cloud.
Pure Storage and NVIDIA are partnering to bring the latest technologies to every enterprise seeking to infuse its business with AI. Developed in collaboration with NVIDIA, AIRI is powered by NVIDIA DGX BasePOD™ and FlashBlade//S storage. Additionally, FlashBlade//S storage is now certified for NVIDIA DGX SuperPOD™, a certified turnkey AI data center solution for enterprises. Pure Storage is an Elite member of the NVIDIA Partner Network (NPN) and works closely with NVIDIA and mutual channel partners to ensure solution integration support.