The NVIDIA AI platform achieves world-class performance and versatility in MLPerf Training, Inference, and HPC benchmarks for the most demanding, real-world AI workloads.
MLPerf™ benchmarks—developed by MLCommons, a consortium of AI leaders from academia, research labs, and industry—are designed to provide unbiased evaluations of training and inference performance for hardware, software, and services, all conducted under prescribed conditions. To stay on the cutting edge of industry trends, MLPerf continues to evolve, holding new rounds at regular intervals and adding new workloads that represent the state of the art in AI.
Chalmers University is one of the leading research institutions in Sweden, specializing in multiple areas from nanotechnology to climate studies. As we incorporate AI to advance our research endeavors, we find that the MLPerf benchmark provides a transparent apples-to-apples comparison across multiple AI platforms to showcase actual performance in diverse real-world use cases.
— Chalmers University of Technology, Sweden
TSMC is driving the cutting edge of global semiconductor manufacturing, like our latest 5nm node, which leads the market in process technology. Innovations like machine-learning-based lithography and etch modeling dramatically improve our optical proximity correction (OPC) and etch simulation accuracy. To fully realize the potential of machine learning in model training and inference, we are working with the NVIDIA engineering team to port our Maxwell simulation and inverse lithography technology (ILT) engine to GPUs and see very significant speedups. The MLPerf benchmark is an important factor in our decision-making.
— Dr. Danping Peng, Director, OPC Department, TSMC, San Jose, CA, USA
Computer vision and imaging are at the core of AI research, driving scientific discovery and readily representing core components of medical care. We've worked closely with NVIDIA to bring innovations like 3DUNet to the healthcare market. Industry-standard MLPerf benchmarks provide relevant performance data to the benefit of IT organizations and developers to get the right solution to accelerate their specific projects and applications.
— Prof. Dr. Klaus Maier-Hein, Head of Medical Image Computing, Deutsches Krebsforschungszentrum (DKFZ, German Cancer Research Center)
As the preeminent leader in research and manufacturing, Samsung uses AI to dramatically boost product performance and manufacturing productivity. Productizing these AI advances requires us to have the best computing platform available. The MLPerf benchmark streamlines our selection process by providing us with an open, direct evaluation method to assess uniformly across platforms.
— Samsung Electronics
MLPerf Inference v4.1 measures inference performance on nine different benchmarks, including several large language models (LLMs), text-to-image, natural language processing, recommenders, computer vision, and medical image segmentation.
MLPerf Training v4.1 measures the time to train on seven different benchmarks, including LLM pre-training, LLM fine-tuning, text-to-image, graph neural network (GNN), computer vision, recommendation, and natural language processing.
MLPerf HPC v3.0 measures training performance across four different scientific computing use cases, including climate atmospheric river identification, cosmology parameter prediction, quantum molecular modeling, and protein structure prediction.
The NVIDIA HGX™ B200 platform, powered by NVIDIA Blackwell GPUs, fifth-generation NVLink™, and the latest NVLink Switch, delivered yet another giant leap for LLM training in MLPerf Training v4.1. Through relentless full-stack engineering at data center scale, NVIDIA continues to push the boundaries of generative AI training performance, accelerating the creation and customization of increasingly capable AI models.
NVIDIA Blackwell Supercharges LLM Training
MLPerf™ Training v4.1 results retrieved from https://mlcommons.org on November 13, 2024, from the following entries: 4.1-0060 (HGX H100, 2024, 512 GPUs) in the Available category, 4.1-0082 (HGX B200, 2024, 64 GPUs) in the Preview category. MLPerf™ Training v3.0 results, used for HGX H100 (2023, 512 GPUs), retrieved from entry 3.0-2069. HGX A100 result, using 512 GPUs, not verified by MLCommons Association. Normalized per-GPU performance is not a primary metric of MLPerf™ Training. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See https://mlcommons.org for more information.
The NVIDIA platform, powered by NVIDIA Hopper™ GPUs, fourth-generation NVLink with third-generation NVSwitch™, and Quantum-2 InfiniBand, continued to demonstrate unmatched performance and versatility in MLPerf Training v4.1. NVIDIA delivered the highest performance at scale on all seven benchmarks.
| Benchmark | Time to Train | Number of GPUs |
|---|---|---|
| LLM (GPT-3 175B) | 3.4 minutes | 11,616 |
| LLM Fine-Tuning (Llama 2 70B-LoRA) | 1.2 minutes | 1,024 |
| Text-to-Image (Stable Diffusion v2) | 1.4 minutes | 1,024 |
| Graph Neural Network (R-GAT) | 0.9 minutes | 512 |
| Recommender (DLRM-DCNv2) | 1.0 minutes | 128 |
| Natural Language Processing (BERT) | 0.1 minutes | 3,472 |
| Object Detection (RetinaNet) | 0.8 minutes | 2,528 |
MLPerf™ Training v4.1 results retrieved from https://mlcommons.org on November 13, 2024, from the following entries: 4.1-0012, 4.1-0054, 4.1-0053, 4.1-0059, 4.1-0055, 4.1-0058, 4.1-0056. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See https://mlcommons.org for more information.
In its MLPerf Inference debut, the NVIDIA Blackwell platform with the NVIDIA Quasar Quantization System delivered up to 4X higher LLM performance than the prior-generation H100 Tensor Core GPU. Among available solutions, the NVIDIA H200 Tensor Core GPU, based on the NVIDIA Hopper architecture, delivered the highest performance per GPU for generative AI, including on all three LLM benchmarks, Llama 2 70B, GPT-J, and the newly added mixture-of-experts LLM Mixtral 8x7B, as well as on the Stable Diffusion XL text-to-image benchmark. Through relentless software optimization, H200 performance increased by up to 27 percent in less than six months. For generative AI at the edge, NVIDIA Jetson Orin™ delivered outstanding results, boosting GPT-J throughput by more than 6X and reducing latency by 2.4X in just one round.
MLPerf Inference v4.1 Closed, Data Center. Results retrieved from https://mlcommons.org on August 28, 2024. Blackwell results measured on a single GPU and retrieved from entry 4.1-0074 in the Closed, Preview category. H100 results from entry 4.1-0043 in the Closed, Available category on an 8x H100 system and divided by GPU count for a per-GPU comparison. Per-GPU throughput is not a primary metric of MLPerf Inference. The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See https://mlcommons.org for more information.
| Benchmark | Offline | Server |
|---|---|---|
| Llama 2 70B | 34,864 tokens/second | 32,790 tokens/second |
| Mixtral 8x7B | 59,022 tokens/second | 57,177 tokens/second |
| GPT-J | 20,086 tokens/second | 19,243 tokens/second |
| Stable Diffusion XL | 17.42 samples/second | 16.78 queries/second |
| DLRMv2 99% | 637,342 samples/second | 585,202 queries/second |
| DLRMv2 99.9% | 390,953 samples/second | 370,083 queries/second |
| BERT 99% | 73,310 samples/second | 57,609 queries/second |
| BERT 99.9% | 63,950 samples/second | 51,212 queries/second |
| RetinaNet | 14,439 samples/second | 13,604 queries/second |
| ResNet-50 v1.5 | 756,960 samples/second | 632,229 queries/second |
| 3D U-Net | 54.71 samples/second | Not part of benchmark |
MLPerf Inference v4.1 Closed, Data Center. Results retrieved from https://mlcommons.org on August 28, 2024. All results using eight GPUs and retrieved from the following entries: 4.1-0046, 4.1-0048, 4.1-0050. The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See https://mlcommons.org for more information.
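In the table above, Offline and Server are MLPerf Inference's two data center scenarios: Offline measures raw throughput with all samples available up front, while Server measures sustained queries per second while a required percentile of query latencies stays under a benchmark-specific bound. The sketch below illustrates the distinction in plain Python; it is a simplified illustration rather than the official LoadGen methodology, and `run_inference`, along with all timings and bounds, is a hypothetical stand-in.

```python
import time
import statistics

def run_inference(batch):
    """Hypothetical model stand-in: pretend each sample costs ~2 ms."""
    time.sleep(0.002 * len(batch))
    return [None] * len(batch)

def offline_throughput(num_samples=1000, batch_size=100):
    """Offline scenario: all samples are available up front, so the metric
    is raw samples/second with no per-query latency constraint."""
    start = time.perf_counter()
    for _ in range(0, num_samples, batch_size):
        run_inference(range(batch_size))
    return num_samples / (time.perf_counter() - start)

def server_throughput(num_queries=500, latency_bound_s=0.010):
    """Server scenario (simplified): queries arrive individually, and a high
    percentile of latencies must stay under the bound; the metric is
    queries/second achieved while that constraint holds."""
    latencies = []
    start = time.perf_counter()
    for _ in range(num_queries):
        t0 = time.perf_counter()
        run_inference([0])
        latencies.append(time.perf_counter() - t0)
    qps = num_queries / (time.perf_counter() - start)
    p99 = statistics.quantiles(latencies, n=100)[98]  # 99th percentile
    return qps if p99 <= latency_bound_s else None    # None: bound violated

print(f"Offline: {offline_throughput():.0f} samples/s")
print(f"Server: {server_throughput()} queries/s (None if p99 exceeds bound)")
```

This also suggests why a system's Server number typically trails its Offline number, as in every row above: the latency constraint prevents batching as aggressively as when all samples are available at once.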
The NVIDIA H100 Tensor Core GPU supercharged the NVIDIA platform for HPC and AI in its MLPerf HPC v3.0 debut, delivering up to 16X faster time to train in just three years and the highest performance on all workloads across both the time-to-train and throughput metrics. The NVIDIA platform was also the only one to submit results for every MLPerf HPC workload, which span climate segmentation, cosmology parameter prediction, quantum molecular modeling, and the latest addition, protein structure prediction. The unmatched performance and versatility of the NVIDIA platform make it the instrument of choice to power the next wave of AI-powered scientific discovery.
NVIDIA Full-Stack Innovation Fuels Performance Gains
MLPerf™ HPC v3.0 results retrieved from https://mlcommons.org on November 8, 2023. Results retrieved from entries 0.7-406, 0.7-407, 1.0-1115, 1.0-1120, 1.0-1122, 2.0-8005, 2.0-8006, 3.0-8006, 3.0-8007, 3.0-8008. CosmoFlow score in v1.0 is normalized to new RCPs introduced in MLPerf HPC v2.0. Scores for v0.7, v1.0, and v2.0 are adjusted to remove data staging time from the benchmark, consistent with new rules adopted for v3.0 to enable fair comparisons between the submission rounds. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See https://mlcommons.org for more information.
MLPerf™ HPC v3.0 results retrieved from https://mlcommons.org on November 8, 2023. Results retrieved from entries 3.0-8004, 3.0-8009, and 3.0-8010. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited. See https://mlcommons.org for more information.
The complexity of AI demands a tight integration between all aspects of the platform. As demonstrated in MLPerf’s benchmarks, the NVIDIA AI platform delivers leadership performance with the world’s most advanced GPU, powerful and scalable interconnect technologies, and cutting-edge software—an end-to-end solution that can be deployed in the data center, in the cloud, or at the edge with amazing results.
An essential component of NVIDIA’s platform and MLPerf training and inference results, the NGC™ catalog is a hub for GPU-optimized AI, HPC, and data analytics software that simplifies and accelerates end-to-end workflows. With over 150 enterprise-grade containers—including workloads for generative AI, conversational AI, and recommender systems; hundreds of AI models; and industry-specific SDKs that can be deployed on premises, in the cloud, or at the edge—NGC enables data scientists, researchers, and developers to build best-in-class solutions, gather insights, and deliver business value faster than ever.
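As a concrete illustration, the snippet below pulls one of those containers from the NGC catalog and checks GPU visibility with the Docker SDK for Python. This is a minimal sketch, not NVIDIA's tooling: it assumes a local Docker daemon with the NVIDIA container runtime installed, and the `pytorch:24.05-py3` tag is only an example, so browse the catalog for a current one.

```python
import docker  # pip install docker

# Assumption: a local Docker daemon with the NVIDIA container runtime.
client = docker.from_env()

# Pull a GPU-optimized PyTorch container from the NGC catalog
# (tag is an example; see https://catalog.ngc.nvidia.com for current tags).
image = "nvcr.io/nvidia/pytorch:24.05-py3"
client.images.pull(image)

# Run nvidia-smi inside the container to list the GPUs it can see.
output = client.containers.run(
    image,
    command="nvidia-smi -L",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())
```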
Achieving world-leading results across training and inference requires infrastructure that’s purpose-built for the world’s most complex AI challenges. The NVIDIA AI platform delivered leading performance powered by the NVIDIA Blackwell platform, the Hopper platform, NVLink™, NVSwitch™, and Quantum InfiniBand. These are at the heart of the NVIDIA data center platform, the engine behind our benchmark performance.
In addition, NVIDIA DGX™ systems offer the scalability, rapid deployment, and incredible compute power that enable every enterprise to build leadership-class AI infrastructure.
NVIDIA Jetson Orin offers unparalleled AI compute, large unified memory, and comprehensive software stacks, delivering superior energy efficiency to drive the latest generative AI applications. It's capable of fast inference for any generative AI model powered by the transformer architecture, providing superior edge performance on MLPerf.
Learn more about our data center training and inference performance.
MLPerf Training uses the GPT-3 generative language model with 175 billion parameters and a sequence length of 2,048 on the C4 dataset for the LLM pre-training workload. The LLM fine-tuning test uses the Llama 2 70B model with the GovReport dataset and a sequence length of 8,192.
MLPerf Inference uses the Llama 2 70B model with the OpenORCA dataset; the Mixtral 8x7B model with the OpenORCA, GSM8K, and MBXP datasets; and the GPT-J model with the CNN-DailyMail dataset.
MLPerf Training uses the Stable Diffusion v2 text-to-image model trained on the LAION-400M-filtered dataset.
MLPerf Inference uses the Stable Diffusion XL (SDXL) text-to-image model with a subset of 5,000 prompts from the coco-val-2014 dataset.
MLPerf Training and Inference use the Deep Learning Recommendation Model v2 (DLRMv2), which employs DCNv2 cross layers and a multi-hot dataset synthesized from the Criteo dataset (a sketch of the cross layer follows these notes).
MLPerf Training uses RetinaNet, a single-shot object detector with a ResNeXt50 backbone, on a subset of the Google OpenImages dataset.
MLPerf Training uses R-GAT with the Illinois Graph Benchmark (IGB) - Heterogeneous dataset.
MLPerf Inference uses ResNet-50 v1.5 with the ImageNet dataset.
MLPerf Training uses Bidirectional Encoder Representations from Transformers (BERT) on the Wikipedia 2020/01/01 dataset.
MLPerf Inference uses BERT with the SQuAD v1.1 dataset.
MLPerf Inference uses 3D U-Net with the KiTS19 dataset.
MLPerf HPC uses the DeepCAM model with the CAM5 + TECA simulation dataset.
MLPerf HPC uses the CosmoFlow model with the CosmoFlow N-body simulation dataset.
MLPerf HPC uses the DimeNet++ model with the Open Catalyst 2020 (OC20) dataset.
MLPerf HPC uses the OpenFold model trained on the OpenProteinSet dataset.
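Because the DLRMv2 note above calls out the DCNv2 cross layer, here is a minimal sketch of that layer, assuming PyTorch; the dimensions and layer count are illustrative choices, not the MLPerf reference implementation. Each layer computes x_next = x0 * (W @ x_l + b) + x_l, so the network learns explicit feature crosses whose polynomial degree grows with depth.

```python
import torch
from torch import nn

class CrossLayerV2(nn.Module):
    """One DCNv2 cross layer: x_next = x0 * (W @ x_l + b) + x_l."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)  # full-rank W and bias b

    def forward(self, x0: torch.Tensor, xl: torch.Tensor) -> torch.Tensor:
        # Element-wise product with the original input realizes the cross.
        return x0 * self.linear(xl) + xl

class CrossNetworkV2(nn.Module):
    """Stack of cross layers, as used in DCNv2-style recommenders."""
    def __init__(self, dim: int, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList([CrossLayerV2(dim) for _ in range(num_layers)])

    def forward(self, x0: torch.Tensor) -> torch.Tensor:
        xl = x0
        for layer in self.layers:
            xl = layer(x0, xl)
        return xl

# Example: cross a batch of 16 feature vectors of dimension 128
# (in DLRM these would be concatenated dense and embedding features).
x = torch.randn(16, 128)
print(CrossNetworkV2(128)(x).shape)  # torch.Size([16, 128])
```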