This year, NVIDIA joined Hot Chips 2024 at Stanford University to showcase how the NVIDIA accelerated computing platform is reimagining the data center for the age of AI. Featuring three powerful architectures—GPU, DPU, and CPU—and a rich software stack, it’s built to take on the modern data center’s toughest challenges.
For more information on this year’s sessions, visit the Hot Chips webpage.
A technology conference for processor and system architects from industry and academia has become a key forum for the trillion-dollar data center computing market.
At Hot Chips 2024, senior NVIDIA engineers presented the latest advancements powering the NVIDIA Blackwell platform, plus research on liquid cooling for data centers and AI agents for chip design.
Explore the groundbreaking advancements the NVIDIA Blackwell architecture brings to generative AI and accelerated computing. Building upon generations of NVIDIA technologies, Blackwell defines the next chapter in generative AI with unparalleled performance, efficiency, and scale.
The NVIDIA Grace™ CPU delivers high performance, power efficiency, and high-bandwidth connectivity for HPC and AI applications. The NVIDIA Grace Hopper Superchip is a breakthrough integrated CPU+GPU for giant-scale AI and HPC applications. For CPU-only HPC applications, the NVIDIA Grace CPU Superchip delivers higher performance, memory bandwidth, and energy efficiency than today’s leading server chips.
Modern AI workloads operate at data center scale, relying heavily on fast, efficient connectivity between GPU servers. NVIDIA accelerated networking solutions create the secure, robust, and high-performance infrastructure necessary for the next wave of accelerated computing and AI.
NVIDIA AI is the world’s most advanced platform for generative AI. Trusted by global organizations at the forefront of innovation, it’s continuously updated and enables enterprises to confidently deploy production-grade generative AI at scale anywhere.
NVIDIA GB200 NVL72 connects 36 Grace™ CPUs and 72 Blackwell GPUs in a liquid-cooled, rack-scale design. Its 72-GPU NVIDIA NVLink™ domain acts as a single massive GPU, delivering 30X faster real-time trillion-parameter LLM inference.
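As a rough sketch of the arithmetic behind the rack design, using only the counts stated above (36 Grace CPUs, 72 Blackwell GPUs, one 72-GPU NVLink domain per rack) — the one-CPU-to-two-GPU superchip split is inferred here from that ratio, not quoted from this article:

```python
# Back-of-the-envelope composition of a GB200 NVL72 rack.
# Counts come from the article; the per-superchip split is inferred.
GRACE_CPUS_PER_RACK = 36
BLACKWELL_GPUS_PER_RACK = 72

# Each GB200 superchip pairs one Grace CPU with its share of Blackwell GPUs.
gpus_per_superchip = BLACKWELL_GPUS_PER_RACK // GRACE_CPUS_PER_RACK   # 2
superchips_per_rack = GRACE_CPUS_PER_RACK                             # 36
nvlink_domain_size = superchips_per_rack * gpus_per_superchip         # 72

print(f"{superchips_per_rack} GB200 superchips per rack, "
      f"{gpus_per_superchip} GPUs each -> "
      f"{nvlink_domain_size}-GPU NVLink domain")
```

The point of the single 72-GPU NVLink domain is that all GPUs in the rack share one coherent high-bandwidth fabric, so a trillion-parameter model can be served as if by one very large accelerator.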
NVIDIA Inception provides more than 15,000 members worldwide with access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.
Our expert-led courses and workshops provide learners with the knowledge and hands-on experience necessary to unlock the full potential of NVIDIA solutions. Our customized training plans are designed to bridge technical skill gaps and provide relevant, timely, and cost-effective solutions for an organization's growth and development.
Learn how to administer the NVIDIA DGX platform for all clusters and systems. Unique courses for DGX H100 and A100, DGX BasePOD, DGX SuperPOD, and even DGX Cloud offer attendees the knowledge to administer and deploy the platform successfully.
Register now to join NVIDIA at Hot Chips 2024.