Deploy an End-to-End AI Workload Using a Helm Chart
Dive into the classification of images with a multi-node training workflow.
How to deploy a training and inference workload on Red Hat OpenShift using a Helm chart
How to pre-process images with TensorFlow and use transfer learning to fine-tune an image classification model
How to deploy the model for production inference using Triton Inference Server
Each Lab Comes With World-Class Service and Support
Take curated labs that walk you through the entire process, from infrastructure optimization to application deployment.
Test and prototype on ready-to-use infrastructure made available to you.
Learn from documented, self-paced experiences and access assistance from NVIDIA experts when you need it.
After using NVIDIA LaunchPad, you’ll make more confident design and purchase decisions to accelerate your journey.
This Lab Is a Collaboration Between NVIDIA and Red Hat