The foundation for your AI factory.
NVIDIA DGX™ B200 is a unified AI platform for develop-to-deploy pipelines for businesses of any size at any stage in their AI journey. Equipped with eight NVIDIA Blackwell GPUs interconnected with fifth-generation NVIDIA® NVLink™, NVIDIA DGX B200 delivers 3X the training performance and 15X the inference performance of previous-generation systems. Leveraging the NVIDIA Blackwell architecture, DGX B200 can handle diverse workloads—including large language models, recommender systems, and chatbots—making it ideal for businesses looking to accelerate their AI transformation.
Benefits
Performance
Projected performance subject to change. Token-to-token latency (TTL) = 50ms real time, first token latency (FTL) = 5s, input sequence length = 32,768, output sequence length = 1,028; 8x eight-way NVIDIA DGX H100 systems (air-cooled) vs. 1x eight-way NVIDIA DGX B200 system (air-cooled); per-GPU performance comparison.
Projected performance subject to change. 32,768-GPU scale: 4,096x eight-way NVIDIA DGX H100 air-cooled cluster with 400G InfiniBand network vs. 4,096x eight-way NVIDIA DGX B200 air-cooled cluster with 400G InfiniBand network.
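As a rough illustration of what the latency targets above imply, the benchmark parameters can be combined into an end-to-end generation-time estimate. This is a back-of-envelope sketch only; the simple additive model (FTL plus one TTL per subsequent token) is an assumption, not NVIDIA's benchmark methodology.

```python
# Back-of-envelope generation-time estimate from the benchmark
# parameters quoted above. The additive latency model is an
# illustrative assumption, not NVIDIA's methodology.

FTL_S = 5.0           # first token latency (seconds)
TTL_S = 0.050         # token-to-token latency (seconds)
OUTPUT_TOKENS = 1028  # output sequence length

def generation_time_s(output_tokens: int, ftl: float = FTL_S, ttl: float = TTL_S) -> float:
    """Time to emit the full output: first token, then one TTL per remaining token."""
    return ftl + (output_tokens - 1) * ttl

print(f"{generation_time_s(OUTPUT_TOKENS):.2f}s")  # ~56.35s for a 1,028-token response
```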
Specifications
| Specification | Details |
| --- | --- |
| GPU | 8x NVIDIA Blackwell GPUs |
| GPU Memory | 1,440GB total GPU memory |
| Performance | 72 petaFLOPS training and 144 petaFLOPS inference |
| Power Consumption | ~14.3kW max |
| CPU | 2x Intel® Xeon® Platinum 8570 processors; 112 cores total; 2.1GHz (base), 4GHz (max boost) |
| System Memory | Up to 4TB |
| Networking | 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI |
| Management Network | 10Gb/s onboard NIC with RJ45; 100Gb/s dual-port Ethernet NIC; host baseboard management controller (BMC) with RJ45 |
| Storage | OS: 2x 1.9TB NVMe M.2; internal storage: 8x 3.84TB NVMe U.2 |
| Software | NVIDIA AI Enterprise: optimized AI software; NVIDIA Base Command™: orchestration, scheduling, and cluster management; NVIDIA DGX OS / Ubuntu: operating system |
| Rack Units (RU) | 10 RU |
| System Dimensions | Height: 17.5in (444mm); width: 19.0in (482.2mm); length: 35.3in (897.1mm) |
| Operating Temperature | 5–30°C (41–86°F) |
| Enterprise Support | Three-year Enterprise Business-Standard Support for hardware and software; 24/7 Enterprise Support portal access; live agent support during local business hours |
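The system-level aggregates in the table can be broken down per GPU with simple arithmetic. The per-GPU figures below are derived from the table's totals (8 GPUs, 1,440GB total GPU memory, 72/144 petaFLOPS), not separately published values; treat them as an illustrative sketch.

```python
# Derive per-GPU figures from the system-level specs above.
# Inputs are the table's published totals; outputs are derived.

NUM_GPUS = 8
TOTAL_GPU_MEMORY_GB = 1440
TRAINING_PFLOPS = 72
INFERENCE_PFLOPS = 144

per_gpu_memory_gb = TOTAL_GPU_MEMORY_GB / NUM_GPUS  # 180.0 GB per GPU
per_gpu_training = TRAINING_PFLOPS / NUM_GPUS       # 9.0 petaFLOPS per GPU
per_gpu_inference = INFERENCE_PFLOPS / NUM_GPUS     # 18.0 petaFLOPS per GPU

print(per_gpu_memory_gb, per_gpu_training, per_gpu_inference)  # 180.0 9.0 18.0
```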
Resources
NVIDIA DGX SuperPOD is a turnkey AI data center infrastructure solution that delivers uncompromising performance for every user and workload. Configurable with any NVIDIA DGX system, DGX SuperPOD provides leadership-class accelerated infrastructure with scalable performance for the most demanding AI training and inference workloads, with industry-proven results, allowing IT to deliver performance without compromise.
NVIDIA Mission Control streamlines AI factory operations, from workloads to infrastructure, with world-class expertise delivered as software. It powers NVIDIA Blackwell data centers, bringing instant agility for inference and training while providing full-stack intelligence for infrastructure resilience. Every enterprise can run AI with hyperscale efficiency, simplifying and accelerating AI experimentation.
NVIDIA Enterprise Services provide support, education, and professional services for your NVIDIA DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.
Learn how to achieve cutting-edge AI breakthroughs faster with special technical training offered exclusively to NVIDIA DGX customers by the AI experts at NVIDIA's Deep Learning Institute (DLI).
Get Started