Built for the age of AI reasoning.
Sign up to be notified when NVIDIA GB300 NVL72 becomes available.
Overview
The NVIDIA GB300 NVL72 features a fully liquid-cooled, rack-scale design that unifies 72 NVIDIA Blackwell Ultra GPUs and 36 Arm®-based NVIDIA Grace™ CPUs in a single platform optimized for test-time scaling inference. AI factories powered by the GB300 NVL72, using NVIDIA Quantum-X800 InfiniBand or Spectrum™-X Ethernet paired with ConnectX®-8 SuperNICs, deliver 50x higher output for reasoning model inference compared to the NVIDIA Hopper™ platform.
DeepSeek R1 with an input sequence length (ISL) of 32K and an output sequence length (OSL) of 8K; GB300 NVL72 with FP4 precision and Dynamo disaggregated serving; H100 with FP8 in-flight batching. Projected performance, subject to change.
Experience next-level AI reasoning performance with the NVIDIA GB300 NVL72 platform. Compared to Hopper, the GB300 NVL72 delivers an impressive 10x boost in user responsiveness, measured in tokens per second (TPS) per user, and a 5x improvement in throughput, measured in TPS per megawatt (MW). Together, these advancements translate into a remarkable 50x leap in overall AI factory output.
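As a rough sanity check (illustrative arithmetic only, not NVIDIA's benchmarking methodology), the 50x figure is consistent with multiplying the two stated per-axis gains, since factory output depends on both responsiveness and throughput per unit of power:

# Illustrative arithmetic only; a sketch, not NVIDIA's measurement methodology.
tps_per_user_gain = 10        # user responsiveness gain vs. Hopper (from the text)
tps_per_mw_gain = 5           # throughput-per-megawatt gain vs. Hopper (from the text)

# Assuming overall AI factory output scales with the product of the two gains:
factory_output_gain = tps_per_user_gain * tps_per_mw_gain
print(factory_output_gain)    # 50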
Features
Test-time scaling and AI reasoning increase the compute necessary to achieve quality of service and maximum throughput. NVIDIA Blackwell Ultra’s Tensor Cores are supercharged with 2x the attention-layer acceleration and 1.5x more AI compute floating-point operations per second (FLOPS) compared to NVIDIA Blackwell GPUs.
Larger memory capacity allows for larger batch sizes and maximum throughput performance. NVIDIA Blackwell Ultra GPUs offer 1.5x larger HBM3e memory that, in combination with added AI compute, boosts AI reasoning throughput for the largest context lengths.
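To see why more HBM translates into larger batches at long context, here is a hypothetical KV-cache sizing sketch; the model dimensions, precision, and memory budget below are illustrative assumptions, not GB300 or DeepSeek R1 specifics:

# Hypothetical KV-cache sizing sketch; all model parameters are assumptions.
def kv_cache_bytes_per_token(num_layers, num_kv_heads, head_dim, bytes_per_elem):
    # Each token stores one key and one value vector per layer per KV head.
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem

per_token = kv_cache_bytes_per_token(num_layers=64, num_kv_heads=8,
                                     head_dim=128, bytes_per_elem=1)  # 1 byte ~ FP8
context_len = 32_768                       # e.g., a 32K-token context window
per_sequence = per_token * context_len     # KV-cache bytes held for one sequence

hbm_budget = 1.0e12                        # assume ~1 TB of HBM reserved for KV cache
max_batch = int(hbm_budget // per_sequence)
print(per_sequence / 1e9, max_batch)       # ~4.3 GB per sequence, ~232 sequences
# With 1.5x more memory available for the KV cache, roughly 1.5x more sequences
# fit concurrently, which is what enables larger batches at long context lengths.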
The NVIDIA Blackwell architecture delivers groundbreaking advancements in accelerated computing, powering a new era of unparalleled performance, efficiency, and scale.
The NVIDIA ConnectX-8 SuperNIC’s input/output (IO) module hosts two ConnectX-8 devices, providing 800 gigabits per second (Gb/s) of network connectivity for each GPU in the NVIDIA GB300 NVL72. This delivers best-in-class remote direct-memory access (RDMA) capabilities with either NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet networking platforms, enabling peak AI workload efficiency.
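As a back-of-the-envelope view of the scale-out fabric (a sketch assuming one 800 Gb/s ConnectX-8 port per GPU, as described above, and not an official figure):

# Aggregate scale-out bandwidth implied by 800 Gb/s of connectivity per GPU.
num_gpus = 72
gbps_per_gpu = 800
aggregate_tbps = num_gpus * gbps_per_gpu / 1000   # terabits per second
print(aggregate_tbps)                              # 57.6 Tb/s across the rack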
The NVIDIA Grace CPU is a breakthrough processor designed for modern data center workloads. It provides outstanding performance and memory bandwidth with 2x the energy efficiency of today’s leading server processors.
Unlocking the full potential of accelerated computing requires seamless communication between every GPU. The fifth generation of NVIDIA NVLink™ is a scale-up interconnect that unleashes accelerated performance for AI reasoning models.
As a building block for the NVIDIA GB300 NVL72 rack-scale solution, the NVIDIA GB300 Grace Blackwell Ultra Superchip features four NVIDIA Blackwell Ultra GPUs, two Grace CPUs, and four ConnectX-8 SuperNICs. Through NVIDIA NVLink Switch technology and NVIDIA BlueField®-3 DPUs, 18 superchips combine into one giant GPU, purpose-built for the age of AI reasoning.
Specifications
Configuration | 72 NVIDIA Blackwell Ultra GPUs, 36 NVIDIA Grace CPUs |
NVLink Bandwidth | 130 TB/s |
Fast Memory | Up to 40 TB |
GPU Memory | Up to 21 TB |
GPU Memory Bandwidth | Up to 576 TB/s |
CPU Memory | Up to 18 TB SOCAMM with LPDDR5X |
CPU Memory Bandwidth | Up to 14.3 TB/s |
CPU Core Count | 2,592 Arm Neoverse V2 cores |
FP4 Tensor Core | 1,400 PFLOPS (with sparsity) | 1,100 PFLOPS (dense) |
FP8/FP6 Tensor Core | 720 PFLOPS |
INT8 Tensor Core | 23 PFLOPS |
FP16/BF16 Tensor Core | 360 PFLOPS |
TF32 Tensor Core | 180 PFLOPS |
FP32 | 6 PFLOPS |
FP64 / FP64 Tensor Core | 100 TFLOPS |
1. Preliminary specifications. May be subject to change. All Tensor Core specifications are with sparsity unless otherwise noted.
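For intuition, the rack-scale numbers in the table above can be divided across the 72 GPUs; this is a rough derivation from the rounded rack totals, not an official per-GPU specification:

# Approximate per-GPU figures derived from the rack-scale table above.
num_gpus = 72
print(round(130 / num_gpus, 1))        # ~1.8 TB/s NVLink bandwidth per GPU
print(round(576 / num_gpus, 1))        # 8.0 TB/s HBM bandwidth per GPU
print(round(21 * 1024 / num_gpus))     # ~299 GB HBM3e per GPU (rack total is rounded)
print(round(1100 / num_gpus, 1))       # ~15.3 PFLOPS dense FP4 per GPU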