NVIDIA at KubeCon and CloudNativeCon NA 2024

November 12–15
Salt Palace Convention Center
Salt Lake City, Utah

This year, NVIDIA is a proud sponsor of KubeCon and CloudNativeCon North America 2024 at the Salt Palace Convention Center from November 12–15. The event will feature a keynote from NVIDIA’s Chris Lamb, Vice President, Computing Software Platforms, and eighteen groundbreaking sessions from NVIDIA speakers highlighting our many contributions to the cloud-native computing ecosystem.

The Many Facets of Building and Delivering AI in the Cloud-Native Ecosystem

Join us to learn how NVIDIA uses and contributes to a wide variety of CNCF projects to build, deliver, and advance one of technology’s most rapidly advancing fields.

Wednesday, November 13, 10:05 a.m. – 10:20 a.m. MT

Salt Palace Level 1, Hall DE

Chris Lamb, Vice President, Computing Software Platforms, NVIDIA

Schedule at a Glance

Explore a wide range of groundbreaking sessions in the fields of AI, accelerated computing, networking, and more. Take a closer look at the scheduled NVIDIA sessions that are part of this year’s program.

Tues, 10:40 a.m. - 11:05 a.m. MT

Salt Palace Level 2, 251 AD

Building a Cutting-Edge Kubernetes Internal Developer Platform at NVIDIA

Feng Zhou, NVIDIA

Carlos Santana, AWS

Tues, 5:45 p.m. - 5:50 p.m. MT

Hyatt Regency Level 4, Regency Ballroom B

Lightning Talk: Evaluating Scheduler Efficiency for AI/ML Jobs Using Custom Resource Metrics

Dmitry Shmulevich, NVIDIA

Tues, 6:10 p.m. - 6:15 p.m. MT

Hyatt Regency Level 4, Regency Ballroom B

Lightning Talk: Running Kind Clusters With GPU Support Using nvkind

Evan Lezar, NVIDIA

Wed, 10:05 a.m. - 10:20 a.m. MT

Salt Palace Level 1, Hall DE

Keynote: NVIDIA Case Study: The Many Facets of Building and Delivering AI in the Cloud-Native Ecosystem

Chris Lamb, NVIDIA

Wed, 11:15 a.m. - 11:50 a.m. MT

Salt Palace Level 1, Grand Ballroom H

All-Your-GPUs-Are-Belong-to-Us: An Inside Look at NVIDIA's Self-Healing GeForce NOW Infrastructure

Ryan Hallisey, NVIDIA

Piotr Prokop, NVIDIA

Wed, 2:30 p.m. - 3:05 p.m. MT

Hyatt Regency Level 4, Regency Ballroom B

Kubernetes WG Device Management: Advancing K8s Support for GPUs

Kevin Klues, NVIDIA

John Belamaric, Google

Patrick Ohly, Intel

Wed, 2:30 p.m. - 3:05 p.m. MT

Salt Palace Level 1, Grand Ballroom A

AIStore as a Fast Tier Storage Solution: Enhancing Petascale Deep Learning Across Cloud Backends

Abhishek Gaikwad, NVIDIA

Aaron Wilson, NVIDIA

Wed, 3:25 p.m. - 4:00 p.m. MT

Salt Palace Level 2, 255 B

A Tale of 2 Drivers: GPU Configuration on the Fly Using DRA

Alay Patel, NVIDIA

Varun Ramachandra Sekar, NVIDIA

Wed, 4:30 p.m. - 6:00 p.m. MT

Salt Palace Level 1, Grand Ballroom G

Tutorial: Get the Most Out of Your GPUs on Kubernetes With the GPU Operator

Christopher Desiniotis, NVIDIA

Tariq Ibrahim, NVIDIA

Eduardo Arango Gutierrez, NVIDIA

Amanda Moran, NVIDIA

David Porter, Google

Thur, 11:00 a.m. - 11:35 a.m. MT

Salt Palace Level 1, 151 G

From Silicon to Service: Ensuring Confidentiality in Serverless GPU Cloud Functions

Zvonko Kaiser, NVIDIA

Thur, 2:30 p.m. - 3:05 p.m. MT

Salt Palace Level 2, 255 E

Unlocking the Potential of Large Models in Production

Adam Tetelman, NVIDIA

Yuan Tang, Red Hat

Thur, 4:30 p.m. - 5:05 p.m. MT

Salt Palace Level 2, 255 E

Which GPU-Sharing Strategy Is Right for You? A Comprehensive Benchmark Study Using DRA

Kevin Klues, NVIDIA

Yuan Chen, NVIDIA

Thur, 5:25 p.m. - 6:00 p.m. MT

Salt Palace Level 1, Grand Ballroom A

Engaging the KServe Community: The Impact of Integrating Solutions With Standardized CNCF Projects

Adam Tetelman, NVIDIA

Tessa Pham, Bloomberg

Andreea Munteanu, Canonical

Johnu George, Nutanix

Taneem Ibrahim, Red Hat

Fri, 11:55 a.m. - 12:30 p.m. MT

Hyatt Regency Level 4, Regency Ballroom B

WG Serving: Accelerating AI/ML Inference Workloads on Kubernetes

Eduardo Arango Gutierrez, NVIDIA

Yuan Tang, Red Hat

Fri, 2:00 p.m. - 2:35 p.m. MT

Salt Palace Level 2, 255 E

From Vectors to Pods: Integrating AI With Cloud Native

Kevin Klues, NVIDIA

Joseph Sandoval, Adobe

Rajas Kakodkar, Broadcom

Ricardo Rocha, CERN

Cathy Zhang, Intel

Fri, 2:55 p.m. - 3:30 p.m. MT

Salt Palace Level 2, 255 E

Enabling Fault Tolerance for GPU Accelerated AI Workloads in ML Platforms

Abhijit Paithankar, NVIDIA

Arpit Singh, NVIDIA

Fri, 2:55 p.m. - 3:30 p.m. MT

Salt Palace Level 1, 155 E

Thousands of Gamers, One Kubernetes Network

Girish Moodalbail, NVIDIA

Surya Seetharaman, Red Hat

Fri, 4:00 p.m. - 4:35 p.m. MT

Salt Palace Level 2, 250 AD

Best Practices for Deploying LLM Inference, RAG, and Fine-Tuning Pipelines on K8s

Meenakshi Kaushik, NVIDIA

Shiva Krishna Merla, NVIDIA

Fri, 4:55 p.m. - 5:30 p.m. MT

Salt Palace Level 2, 250 AD

Best of Both Worlds: Integrating Slurm With Kubernetes in a Kubernetes Native Way

Eduardo Arango Gutierrez, NVIDIA

Angel Beltre, Sandia National Laboratories

Explore NVIDIA Solutions

NVIDIA NIM Agent Blueprints

NVIDIA NIM™ Agent Blueprints are reference workflows for canonical generative AI use cases. Enterprises can build and operationalize custom AI applications, creating data-driven AI flywheels, using NIM Agent Blueprints along with NVIDIA NIM microservices and the NVIDIA NeMo™ framework, all part of the NVIDIA AI Enterprise Platform.

NVIDIA AI Platform

Explore the latest community-built AI models with APIs optimized and accelerated by NVIDIA, then deploy anywhere with NVIDIA NIM inference microservices.
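
Once deployed, a NIM inference microservice is typically reached over an OpenAI-compatible HTTP API. The snippet below is a minimal sketch of querying a locally running instance; the endpoint URL, port, and model name are illustrative placeholders rather than a specific product configuration.

```python
# A minimal sketch of querying a locally deployed NIM microservice over its
# OpenAI-compatible API. The URL, port, and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # assumed local endpoint
    json={
        "model": "meta/llama3-8b-instruct",        # placeholder model name
        "messages": [{"role": "user", "content": "What is KubeCon?"}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```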

NVIDIA NIM Operator

NVIDIA NIM Operator is a Kubernetes operator designed to facilitate the deployment, scaling, monitoring, and management of NVIDIA NIM microservices on Kubernetes clusters. With NIM Operator, you can deploy, auto-scale, and manage the lifecycle of NVIDIA NIM microservices with just a few clicks or commands.
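
As a rough illustration of the operator pattern described above, the sketch below creates a custom resource that an operator such as NIM Operator could reconcile into a running microservice. The API group, version, kind, and spec fields shown are assumptions for illustration only, not the operator's documented schema; consult the NIM Operator documentation for the actual resource definitions.

```python
# A minimal, illustrative sketch of creating a custom resource for a
# NIM-style operator to reconcile. The group, version, kind, and spec
# fields are assumptions, not the operator's documented schema.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

nim_service = {
    "apiVersion": "apps.nvidia.com/v1alpha1",  # assumed API group/version
    "kind": "NIMService",                      # assumed kind
    "metadata": {"name": "llm-nim", "namespace": "nim"},
    "spec": {
        # Assumed fields: container image, replica count, and GPU request.
        "image": {"repository": "nvcr.io/nim/meta/llama3-8b-instruct", "tag": "latest"},
        "replicas": 1,
        "resources": {"limits": {"nvidia.com/gpu": 1}},
    },
}

api.create_namespaced_custom_object(
    group="apps.nvidia.com",   # must match the CRD installed by the operator
    version="v1alpha1",
    namespace="nim",
    plural="nimservices",
    body=nim_service,
)
```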

NVIDIA Cloud-Native Technologies

From the data center and cloud to the desktop and edge, NVIDIA Cloud-Native Technologies provide the ability to run deep learning, machine learning, and other GPU-accelerated workloads managed by Kubernetes on systems with NVIDIA GPUs. They also allow the seamless deployment and development of containerized software on enterprise cloud-native management frameworks.
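
For example, once GPUs are exposed to Kubernetes (typically via the NVIDIA device plugin, which the GPU Operator can deploy for you), a workload requests them through the standard nvidia.com/gpu extended resource. The following is a minimal sketch using the Kubernetes Python client; the container image and object names are placeholders.

```python
# A minimal sketch of requesting one GPU for a pod through the standard
# nvidia.com/gpu extended resource. Image and names are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cuda-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04",  # placeholder image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedules the pod onto a GPU node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```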

Programs and Technical Training

NVIDIA Program for Startups

NVIDIA Inception provides thousands of members worldwide with access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.

NVIDIA Training and Certification

Develop the skills you and your team need to do your life’s work in AI, accelerated computing, data science, graphics & simulation, and more. Validate your skills with technical certification from NVIDIA.

Access Free Tools, Training, and an Expert Community

Join our free NVIDIA Developer Program to access training, resources, and tools that can accelerate your work and advance your skills. Get a free credit for one of our self-paced courses when you join.

Like No Place You’ve Ever Worked

Working at NVIDIA, you’ll solve some of the world’s hardest problems and discover never-before-seen ways to improve the quality of life for people everywhere. From healthcare to robots, self-driving cars to blockbuster movies, you’ll experience it all. Plus, there’s a growing list of new opportunities every single day. Explore all of our open roles, including internships and new college graduate positions.

Learn more about our current job openings, as well as university jobs.

Register now to join NVIDIA at KubeCon.