NVIDIA at OCP Summit 2024

San Jose McEnery Convention Center
October 15–17

This year, NVIDIA is participating in the Open Compute Project (OCP) Global Summit at the San Jose McEnery Convention Center from October 15–17 to showcase the latest advancements and innovations in NVIDIA’s accelerated computing platform. We’ll feature a keynote and groundbreaking sessions from NVIDIA speakers on the latest in AI, networking, energy efficiency, security, and more.

NVIDIA Contributes Blackwell Platform Design to Open Hardware Ecosystem, Accelerating AI Infrastructure Innovation

At this year’s OCP Global Summit, NVIDIA will be sharing key portions of the NVIDIA GB200 NVL72 system’s electro-mechanical design with the OCP community — including the rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVIDIA NVLink™ cable cartridge volumetrics — to support higher compute density and networking bandwidth.

Schedule at a Glance

Explore a wide range of groundbreaking sessions in the field of AI, accelerated computing, networking, and more. Take a closer look at the scheduled NVIDIA sessions that are part of this year’s program.

Tuesday 10/15 | Wednesday 10/16 | Thursday 10/17
Keynote: Fostering Collaboration: Designing Data Centers for Tomorrow's AI Workloads
9:24–9:36 a.m. PT
Grand Ballroom 220
The Exponential Demands AI Places on the Rack and Data Center
8:05–8:25 a.m. PT
Concourse Level - 220B
SIOV R2: Next-Gen I/O Virtualization Framework for Next-Gen Data Centers
9:10–9:30 a.m. PT
Concourse Level - 210CG
Empowering AI Networking With SONiC
1:10–1:30 p.m. PT
Lower Level - LL20BC
OCP Data Formats for Deep Learning
8:25–8:45 a.m. PT
Concourse Level - 220B
OCP Cooling Environments and ASHRAE TC9.9: Roadmap for Future Collaboration
9:45–10:15 a.m. PT
Concourse Level - 210AE
Exploring the Wilderness: Optimizing Ethernet Fabrics for AI Workloads
1:30–1:50 p.m. PT
Lower Level - LL20BC
Standardizing Hyperscaler Requirements for Accelerators
9:00–9:30 a.m. PT
Concourse Level - 210CG
Chiplet Interoperability and Breakthroughs in AI
10:45–11:05 a.m. PT
Concourse Level - 220B
Mature at Scale Memory Fabrics for all Performance and Price Points
1:40–2:00 p.m. PT
Lower Level - LL20A
Fabric Resiliency at Scale
10:25–10:45 a.m. PT
Lower Level - LL20A
NVIDIA MGX: Faster Time to Market for the Accelerated Data Center
10:50–11:10 a.m. PT
Concourse Level - 210CG
Building the First SONiC Cloud AI-Benchmarked Cluster in the World
2:50–3:10 p.m. PT
Lower Level - LL20BC
New Approaches to Network Telemetry: Essential for AI Performance
10:45–11:00 a.m. PT
Lower Level - LL20A
Roadmap for a Durable Chip Coolant Temperature
12:30–12:50 p.m. PT
Concourse Level - 210AE

Deploying the AI Factory
10:45–11:00 a.m. PT
Lower Level - LL20D
Cold Plate Panel Discussion on DLC for AI TTM
2:00–3:00 p.m. PT
Lower Level - LL20BC

Compute Resilience: Industry Update
11:00–11:30 a.m. PT
Concourse Level - 210CG
Study of Fluid Velocity Limits in Cold Plate Liquid Cooling Loops
3:20–3:40 p.m. PT
Concourse Level - 210AE

High Performance Data Center Storage Using DPUs
1:10–1:30 p.m. PT
Concourse Level - 210DH


AI for Quantum Computing and NVIDIA’s QC Strategy
1:10–1:30 p.m. PT
Lower Level - LL20BC


Introducing the SPDM Authorization Specification
1:30–1:50 p.m. PT
Concourse Level - 220C


Caliptra Journey: What Are We Doing Next?
2:10–2:30 p.m. PT
Concourse Level - 220C


CTAM: Compliance Tool for Accelerator Manageability
4:05–4:25 p.m. PT
Concourse Level - 210CG


OCP Streaming Boot Implementation Update
4:10–4:30 p.m. PT
Concourse Level - 220C


Protecting AI Workloads From Noisy Neighbors in Cloud Networks With NVIDIA Spectrum-X
4:10–4:30 p.m. PT
Concourse Level - 220B


Towards an Open System for AI
4:30–5:00 p.m. PT
Concourse Level - 22S

Architectures for the Modern Data Center

Blackwell GPU Architecture

Explore the groundbreaking advancements the NVIDIA Blackwell architecture—now in full production—brings to generative AI and accelerated computing. Building upon generations of NVIDIA technologies, Blackwell defines the next chapter in generative AI with unparalleled performance, efficiency, and scale.

Grace CPU Architecture

The NVIDIA Grace™ CPU delivers high performance, power efficiency, and high-bandwidth connectivity that can be used for high-performance computing (HPC) and AI applications. NVIDIA Grace Hopper™ Superchip is a breakthrough integrated CPU+GPU for giant-scale AI and HPC applications. For CPU-only HPC applications, the NVIDIA Grace CPU Superchip provides the highest performance, memory bandwidth, and energy efficiency compared to today’s leading server chips.

Networking

Modern AI workloads operate at data center scale, relying heavily on fast, efficient connectivity between GPU servers. NVIDIA accelerated networking solutions create the secure, robust, and high-performance infrastructure necessary for the next wave of accelerated computing and AI.


Explore NVIDIA Solutions

NVIDIA Generative AI

NVIDIA AI is the world’s most advanced platform for generative AI. Trusted by global organizations at the forefront of innovation, it’s continuously updated and enables enterprises to confidently deploy production-grade generative AI at scale anywhere.

NVIDIA Grace Blackwell

NVIDIA GB200 NVL72 connects 36 Grace CPUs and 72 Blackwell GPUs in a liquid-cooled, rack-scale design. Its 72-GPU NVIDIA NVLink™ domain acts as a single massive GPU, delivering 30X faster real-time trillion-parameter LLM inference.
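The rack-level totals above follow from the publicly described GB200 superchip composition — one Grace CPU paired with two Blackwell GPUs — replicated across the rack. A minimal sketch of that arithmetic (tray and superchip counts here are assumptions for illustration, matching the 36-CPU/72-GPU figures in the text):

```python
# Sketch: how the GB200 NVL72 rack-level totals compose from superchips.
# Assumed layout for illustration: 18 compute trays, 2 GB200 superchips per tray.
GRACE_CPUS_PER_SUPERCHIP = 1      # one Grace CPU per GB200 superchip
BLACKWELL_GPUS_PER_SUPERCHIP = 2  # two Blackwell GPUs per GB200 superchip
SUPERCHIPS_PER_TRAY = 2
COMPUTE_TRAYS_PER_RACK = 18

superchips = COMPUTE_TRAYS_PER_RACK * SUPERCHIPS_PER_TRAY
total_cpus = superchips * GRACE_CPUS_PER_SUPERCHIP
total_gpus = superchips * BLACKWELL_GPUS_PER_SUPERCHIP

print(f"Superchips per rack: {superchips}")   # 36
print(f"Grace CPUs per rack: {total_cpus}")   # 36
print(f"Blackwell GPUs per rack: {total_gpus}")  # 72 — the NVLink domain size
```

All 72 GPUs sit in one NVLink domain, which is why the rack can present itself to software as a single large accelerator.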

Programs and Technical Training

NVIDIA Program for Startups

NVIDIA Inception provides more than 15,000 members worldwide with access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.

NVIDIA Training

Our expert-led courses and workshops provide learners with the knowledge and hands-on experience necessary to unlock the full potential of NVIDIA solutions. Our customized training plans are designed to bridge technical skill gaps and provide relevant, timely, and cost-effective solutions for an organization's growth and development.

DGX Administrator Training

Learn how to administer the NVIDIA DGX™ platform for all clusters and systems. Unique courses for DGX H100 and A100, DGX BasePOD, DGX SuperPOD, and even DGX Cloud offer attendees the knowledge to administer and deploy the platform successfully.

Meet Our Partners

Read this blog on NVIDIA’s contribution of NVIDIA GB200 NVL72 designs to OCP.