NVIDIA Deep Learning Institute

Training You to Solve the World’s Most Challenging Problems

The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science. Developers, data scientists, researchers, and students can get practical experience powered by GPUs in the cloud. And IT professionals can access courses on designing and managing infrastructure to support AI, data science, and HPC workloads across their organizations. Get started with DLI through self-paced, online training for individuals, instructor-led workshops for teams, and downloadable course materials for university educators.  Earn an NVIDIA DLI certificate to demonstrate your subject matter competency and support your career growth.

  • Online Courses

  • Instructor-Led Workshops

  • University Training

For self-learners and small teams, we recommend self-paced, online training through DLI and online courses through our partners. With DLI, you’ll have access to a fully configured, GPU-accelerated server in the cloud, gain practical skills for your work, and have the opportunity to earn a certificate of subject matter competency.

Online Training with DLI

Certificate Available

Deep Learning Courses

DEEP LEARNING FUNDAMENTALS

  • Fundamentals of Deep Learning for Computer Vision 

    Explore the fundamentals of deep learning by training neural networks and using results to improve performance and capabilities.

    Prerequisites: Familiarity with basic programming fundamentals such as functions and variables

    Technologies: Caffe, DIGITS

    Duration: 8 hours

    Price: $90 (excludes tax, if applicable)

  • Getting Started with AI on Jetson Nano

    Explore how to build a deep learning classification project with computer vision models using an NVIDIA® Jetson Nano Developer Kit.

    Prerequisites: Familiarity with Python (helpful, not required)

    Technologies: PyTorch, Jetson Nano

    Duration: 8 hours

    Price: Free

  • Optimization and Deployment of TensorFlow Models with TensorRT

    Learn how to optimize TensorFlow models to generate fast inference engines in the deployment stage.

    Prerequisites: Experience with TensorFlow and Python

    Technologies: TensorFlow, Python, NVIDIA TensorRT (TF-TRT)

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)
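
    As a rough illustration of what this course covers, the following sketch converts a TensorFlow SavedModel with TF-TRT. It assumes a TensorFlow 2.x build with TensorRT support; the model directories are placeholders.

        # Hedged sketch: convert a SavedModel into a TensorRT-optimized SavedModel.
        from tensorflow.python.compiler.tensorrt import trt_convert as trt

        params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
            precision_mode=trt.TrtPrecisionMode.FP16)       # reduced precision for faster inference
        converter = trt.TrtGraphConverterV2(
            input_saved_model_dir="my_saved_model",         # placeholder input model
            conversion_params=params)
        converter.convert()                                 # replace supported subgraphs with TensorRT ops
        converter.save("my_saved_model_trt")                # load later with tf.saved_model.load()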

  • Deep Learning at Scale with Horovod

    Learn how to scale deep learning training to multiple GPUs with Horovod, the open-source distributed training framework originally built by Uber and hosted by the LF AI Foundation.

    Prerequisites: Competency in Python and experience training deep learning models in Python

    Technologies: Horovod, TensorFlow, Keras, Python

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)
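
    To give a feel for the Horovod workflow taught here, below is a minimal, hedged Keras sketch; the model and data are placeholders, and it would be launched with something like horovodrun -np 4 python train.py.

        import horovod.tensorflow.keras as hvd
        import tensorflow as tf

        hvd.init()                                          # one process per GPU
        gpus = tf.config.list_physical_devices('GPU')
        if gpus:                                            # pin each process to its own GPU
            tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

        model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation='softmax')])

        # Scale the learning rate by the number of workers and wrap the optimizer
        opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
        model.compile(loss='sparse_categorical_crossentropy', optimizer=opt)

        callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]   # sync initial weights
        # model.fit(train_data, callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)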

  • Getting Started with Image Segmentation

    Learn how to categorize segments of an image.

    Prerequisites: Basic experience training neural networks

    Technologies: TensorFlow

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)
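
    The core idea—predicting a class for every pixel—can be sketched in a few lines of Keras. This toy network and its 128×128 input size are illustrative assumptions, not the course's model.

        import tensorflow as tf
        from tensorflow.keras import layers

        NUM_CLASSES = 4                                     # placeholder number of segment classes

        inputs = layers.Input(shape=(128, 128, 3))
        x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
        x = layers.MaxPooling2D()(x)                        # downsample 128 -> 64
        x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
        x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)  # 64 -> 128
        outputs = layers.Conv2D(NUM_CLASSES, 1, activation='softmax')(x)  # one class score per pixel

        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
        # Labels are integer masks of shape (batch, 128, 128); train with model.fit(images, masks).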

  • Modeling Time Series Data with Recurrent Neural Networks in Keras

    Explore how to classify and forecast time-series data, such as modeling a patient's health over time, using recurrent neural networks (RNNs).

    Prerequisites: Basic experience with deep learning

    Technologies: Keras

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)
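
    As a rough sketch of the approach, a Keras LSTM can classify fixed-length records of per-time-step measurements; the shapes and binary outcome below are illustrative assumptions.

        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Input(shape=(100, 8)),              # 100 time steps, 8 measurements per step
            layers.Masking(mask_value=0.0),            # ignore zero-padded steps in shorter records
            layers.LSTM(64),                           # summarize the whole sequence into one vector
            layers.Dense(1, activation='sigmoid'),     # e.g. probability of an adverse outcome
        ])
        model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
        # model.fit(x_train, y_train, validation_split=0.2, epochs=10)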

DEEP LEARNING FOR HEALTHCARE

  • Medical Image Classification Using the MedNIST Dataset

    Explore an introduction to deep learning for radiology and medical imaging by applying CNNs to classify images in a medical imaging dataset.

    Prerequisites: Basic experience with Python

    Technologies: PyTorch, Python

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)
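
    A minimal PyTorch sketch of this kind of classifier is below; the 64×64 grayscale input, six classes, and omitted data loading are assumptions, not the course's exact setup.

        import torch
        import torch.nn as nn

        class SmallCNN(nn.Module):
            def __init__(self, num_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                )
                self.classifier = nn.Linear(32 * 16 * 16, num_classes)

            def forward(self, x):
                return self.classifier(torch.flatten(self.features(x), 1))

        device = 'cuda' if torch.cuda.is_available() else 'cpu'
        model = SmallCNN().to(device)
        loss_fn = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        # for images, labels in train_loader:          # a standard DataLoader over the image dataset
        #     optimizer.zero_grad()
        #     loss = loss_fn(model(images.to(device)), labels.to(device))
        #     loss.backward()
        #     optimizer.step()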

  • Image Classification with TensorFlow: Radiomics—1p19q Chromosome Status Classification

    Learn how to apply deep learning techniques to detect the 1p19q co-deletion biomarker from MRI imaging.

    Prerequisites: Basic experience with CNNs and Python

    Technologies: TensorFlow, CNNs, Python

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

  • Coarse-to-Fine Contextual Memory for Medical Imaging

    Learn how to use Coarse-to-Fine Context Memory (CFCM) to improve traditional architectures for medical image segmentation and classification tasks.

    Prerequisites: Experience with CNNs and long short-term memory (LSTM) networks

    Technologies: TensorFlow, CNNs, CFCM

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

  • Data Augmentation and Segmentation with Generative Networks for Medical Imaging

    Learn how to use generative adversarial networks (GANs) for medical imaging by applying them to the creation and segmentation of brain MRIs.

    Prerequisites: Experience with CNNs

    Technologies: TensorFlow, GANs, CNNs

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

DEEP LEARNING FOR INTELLIGENT VIDEO ANALYTICS

  • AI Workflows for Intelligent Video Analytics with DeepStream

    Learn how to build hardware-accelerated applications for intelligent video analytics (IVA) with DeepStream and deploy them at scale to transform video streams into insights.

    Prerequisites: Experience with C++ and GStreamer

    Technologies: DeepStream 3, C++, GStreamer

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

  • Getting Started with DeepStream for Video Analytics on Jetson Nano

    Learn how to build DeepStream applications to annotate video streams using object detection and classification networks.

    Prerequisites: Basic familiarity with C

    Technologies: DeepStream, TensorRT, Jetson Nano

    Duration: 8 hours; Self-paced

    Price: Free

Accelerated Computing Courses

  • Fundamentals of Accelerated Computing with CUDA C/C++ 

    Learn how to accelerate and optimize existing C/C++ CPU-only applications to leverage the power of GPUs using the most essential CUDA techniques and the Nsight Systems profiler.

    Prerequisites: Basic C/C++ competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations.

    Technologies: C/C++, CUDA

    Duration: 8 hours

    Price: $90 (excludes tax, if applicable)

  • Fundamentals of Accelerated Computing with CUDA Python

    Explore how to use Numba—the just-in-time, type-specializing Python function compiler—to create and launch CUDA kernels to accelerate Python programs on GPUs.

    Prerequisites: Basic Python competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations. NumPy competency including the use of ndarrays and ufuncs.

    Technologies: CUDA, Python, Numba, NumPy

    Duration: 8 hours

    Price: $90 (excludes tax, if applicable)
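
    A minimal example of the Numba pattern this course teaches—writing and launching a custom CUDA kernel from Python—looks roughly like this:

        import numpy as np
        from numba import cuda

        @cuda.jit
        def vector_add(x, y, out):
            i = cuda.grid(1)                           # absolute thread index
            if i < x.size:                             # guard against out-of-range threads
                out[i] = x[i] + y[i]

        n = 1_000_000
        x = np.arange(n, dtype=np.float32)
        y = 2 * x
        out = np.zeros_like(x)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        vector_add[blocks, threads_per_block](x, y, out)   # Numba copies the arrays to and from the GPU
        assert np.allclose(out, 3 * x)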

  • Scaling Workloads Across Multiple GPUs with CUDA C++ (New!)

    Learn how to build robust and efficient CUDA C++ applications that can leverage all available GPUs on a single node.

    Prerequisites: Competency writing applications in CUDA C/C++.

    Technologies: C, C++

    Duration: 4 hours

    Languages: English

    Price: $30 (excludes tax, if applicable)

  • Accelerating CUDA C++ Applications with Concurrent Streams (New!)

    Learn how to improve performance for your CUDA C/C++ applications by overlapping memory transfers to and from the GPU with computations on the GPU.

    Prerequisites: Competency writing applications in CUDA C/C++.

    Technologies: C, C++

    Duration: 4 hours

    Languages: English

    Price: $30 (excludes tax, if applicable)
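
    The course works in CUDA C/C++, but the copy/compute-overlap idea can be sketched in Python with Numba: split the work into chunks and issue each chunk's transfers and kernel on its own stream. The kernel and sizes below are placeholders.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def scale(x, out):
            i = cuda.grid(1)
            if i < x.size:
                out[i] = 2.0 * x[i]

        n, n_chunks = 1 << 22, 4
        chunk = n // n_chunks
        x = cuda.pinned_array(n, dtype=np.float32)      # pinned host memory enables async copies
        out = cuda.pinned_array(n, dtype=np.float32)
        x[:] = np.arange(n, dtype=np.float32)

        streams = [cuda.stream() for _ in range(n_chunks)]
        for c, s in enumerate(streams):
            lo, hi = c * chunk, (c + 1) * chunk
            d_x = cuda.to_device(x[lo:hi], stream=s)             # async host-to-device copy
            d_out = cuda.device_array(chunk, dtype=np.float32, stream=s)
            scale[(chunk + 255) // 256, 256, s](d_x, d_out)      # kernel on the same stream
            d_out.copy_to_host(out[lo:hi], stream=s)             # async device-to-host copy
        cuda.synchronize()
        assert np.allclose(out, 2.0 * x)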

  • Fundamentals of Accelerated Computing with OpenACC

    Explore how to build and optimize accelerated heterogeneous applications on multiple GPU clusters using OpenACC, a high-level, directive-based programming model for GPUs.

    Prerequisites: Basic experience with C/C++

    Technologies: OpenACC, C/C++

    Duration: 8 hours

    Languages: English

    Price: $90 (excludes tax, if applicable)

  • High-Performance Computing with Containers

    Learn how to reduce complexity and improve portability and efficiency of your code by using a containerized environment for high-performance computing (HPC) application development.

    Prerequisites: Proficiency programming in C/C++ and professional experience working on HPC applications

    Technologies: Docker, Singularity, HPCCM, C/C++

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

  • OpenACC – 2X in 4 Steps

    Learn how to accelerate C/C++ or Fortran applications using OpenACC to harness the power of GPUs.

    Prerequisites: Basic experience with C/C++

    Technologies: C/C++, OpenACC

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

Accelerated Data Science Courses

  • Fundamentals of Accelerated Data Science with RAPIDS

    Learn how to perform multiple analysis tasks on large datasets using RAPIDS, a collection of data science libraries that allows end-to-end GPU acceleration for data science workflows.

    Prerequisites: Experience with Python, including pandas and NumPy

    Technologies: RAPIDS, NumPy, XGBoost, DBSCAN, K-Means, SSSP, Python

    Duration: 6 hours

    Price: $90 (excludes tax, if applicable)
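
    In rough terms, the RAPIDS workflow taught here keeps data on the GPU from ingest through modeling. A hedged sketch, with a placeholder CSV and column names:

        import cudf
        from cuml.cluster import KMeans

        df = cudf.read_csv("points.csv")               # loads directly into GPU memory
        features = df[["x", "y"]].astype("float32")

        kmeans = KMeans(n_clusters=5)
        df["cluster"] = kmeans.fit_predict(features)   # GPU-accelerated clustering

        print(df.groupby("cluster").size())            # pandas-like API, executed on the GPU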

  • Accelerating Data Science Workflows with RAPIDS

    Learn to build a GPU-accelerated, end-to-end data science workflow using RAPIDS open-source libraries for massive performance gains.

    Prerequisites: Advanced competency in Pandas, NumPy, and scikit-learn

    Technologies: RAPIDS, cuDF, cuML, XGBoost

    Duration: 2 hours

    Price: $30 (excludes tax, if applicable)

Courses for IT

  • Introduction to AI in the Data Center

    Explore an introduction to AI, GPU computing, NVIDIA AI software architecture, and how to implement and scale AI workloads in the data center. You'll understand how AI is transforming society and how to deploy GPU computing to the data center to facilitate this transformation.

    Prerequisites: Basic knowledge of enterprise networking, storage, and data center operations

    Technologies: Artificial intelligence, machine learning, deep learning, GPU hardware and software

    Duration: 4 hours

    Price: $30 (excludes tax, if applicable)

Online Training with Partners

DLI collaborates with leading educational organizations to expand the reach of deep learning training to developers worldwide.

For teams interested in training, we recommend full-day workshops led by DLI-certified instructors. You can request onsite or remote delivery of a full-day workshop for your team. With DLI, you’ll have access to a fully configured, GPU-accelerated server in the cloud, gain practical skills for your work, and have the opportunity to earn a certificate of subject matter competency.


Certificate Available

Deep Learning Workshops

DEEP LEARNING FUNDAMENTALS

  • Fundamentals of Deep Learning (New!)

    Businesses worldwide are using artificial intelligence (AI) to solve their greatest challenges. Healthcare professionals use AI to enable more accurate, faster diagnoses in patients. Retail businesses use it to offer personalized customer shopping experiences. Automakers use AI to make personal vehicles, shared mobility, and delivery services safer and more efficient. Deep learning is a powerful approach to implementing AI that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, and language translation. Using deep learning, computers are now able to learn and recognize patterns from data that are considered too complex or subtle for expert-written software.

    In this workshop, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You will train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running today.

    By participating in this workshop, you will:

    • Practice the fundamental techniques and tools required to train a deep learning model
    • Gain experience with common deep learning data types and model architectures
    • Enhance datasets through data augmentation to improve model accuracy
    • Leverage transfer learning between models to achieve efficient results with less data and computation
    • Build confidence to take on your own project with a modern, deep learning framework

    Prerequisites: Understanding of fundamental programming concepts in Python, such as functions, loops, dictionaries, and arrays.

    Tools, libraries, and frameworks: TensorFlow, Keras, pandas, NumPy
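
    The transfer-learning objective above can be sketched in Keras: reuse a pretrained ImageNet backbone and train only a new classification head. The backbone choice and the three-class head are illustrative assumptions, not the workshop's exact exercise.

        import tensorflow as tf
        from tensorflow.keras import layers

        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=(224, 224, 3))
        base.trainable = False                         # freeze the pretrained features

        model = tf.keras.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(3, activation="softmax"),     # new head for a hypothetical 3-class task
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Train the head on a small dataset, then optionally unfreeze the base and fine-tune:
        # model.fit(train_ds, validation_data=val_ds, epochs=5)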

  • Building Intelligent Recommender Systems (New!)

    Deep learning-based recommender systems are the secret ingredient behind personalized online experiences and powerful decision support tools in retail, entertainment, healthcare, finance, and other industries. 

    Recommender systems work by understanding the preferences, previous decisions, and other characteristics of many people. For example, recommenders can help a streaming media service understand the types of movies an individual enjoys, which movies they’ve actually watched, and the languages they understand. Training a neural network to generalize this mountain of data and quickly provide specific recommendations for similar individuals or situations requires massive amounts of computation, which can be accelerated dramatically by GPUs. Organizations seeking to provide more delightful user experiences, deeper engagement with their customers, and better informed decisions can realize tremendous value by applying properly designed and trained recommender systems.

    This workshop covers the fundamental tools and techniques for building highly effective recommender systems, as well as how to deploy GPU-accelerated solutions for real-time recommendations. 

    By participating in this workshop, you’ll learn how to:

    • Build a content-based recommender system using the open-source cuDF library and Apache Arrow
    • Construct a collaborative filtering recommender system using alternating least squares (ALS) and CuPy
    • Design a wide and deep neural network using TensorFlow 2 to create a hybrid recommender system
    • Optimize performance for both training and inference using large, sparse datasets
    • Deploy a recommender model as a high-performance web service

    Prerequisites:

    • Intermediate knowledge of Python, including understanding of list comprehension.
    • Data science experience using Python.
    • Familiarity with NumPy and matrix mathematics.

    Tools, libraries, and frameworks: cuDF, CuPy, TensorFlow 2, and NVIDIA Triton™ Inference Server
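
    To illustrate the alternating least squares (ALS) step on the GPU, here is a toy CuPy sketch over a small dense ratings matrix. Real recommenders use sparse data and mask unobserved entries; the sizes and regularization below are placeholders.

        import cupy as cp

        n_users, n_items, k, lam = 1000, 500, 16, 0.1
        R = cp.random.rand(n_users, n_items, dtype=cp.float32)   # placeholder ratings matrix

        U = cp.random.rand(n_users, k, dtype=cp.float32)          # user factors
        V = cp.random.rand(n_items, k, dtype=cp.float32)          # item factors
        I = lam * cp.eye(k, dtype=cp.float32)

        for _ in range(10):
            # Fix item factors, solve the regularized least-squares problem for user factors
            U = R @ V @ cp.linalg.inv(V.T @ V + I)
            # Fix user factors, solve for item factors
            V = R.T @ U @ cp.linalg.inv(U.T @ U + I)

        scores = U @ V.T                                           # rank items for each user by this matrix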

  • Building Transformer-Based Natural Language Processing Applications (New!)

    Applications for Natural Language Processing (NLP) have exploded in the past decade. With the proliferation of AI assistants and organizations infusing their businesses with more interactive human/machine experiences, understanding how NLP techniques can be used to manipulate, analyze, and generate text-based data is essential. Modern techniques can capture the nuance, context, and sophistication of language, just as humans do. And when designed correctly, developers can use these techniques to build powerful NLP applications that provide natural and seamless human-computer interactions in chatbots, AI voice agents, and more.

    Deep learning models have gained widespread popularity for NLP because of their ability to accurately generalize over a range of contexts and languages. Transformer-based models, such as Bidirectional Encoder Representations from Transformers (BERT), have revolutionized progress in NLP by offering accuracy comparable to human baselines on benchmarks like SQuAD for question answering, entity recognition, intent recognition, sentiment analysis, and more. NVIDIA provides software and hardware that help you quickly build state-of-the-art NLP models. You can speed up training by up to 4.5X with mixed precision and easily scale performance to multiple GPUs across multiple server nodes without compromising accuracy.

    In this workshop, you’ll learn how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents. You will also learn how to leverage Transformer-based models for named-entity recognition (NER) tasks and how to analyze various model features, constraints, and characteristics to determine which model is best suited for a particular use case based on metrics, domain specificity, and available resources.

    By participating in this workshop, you’ll be able to:

    • Understand how text embeddings have rapidly evolved in NLP tasks such as Word2Vec, recurrent neural network (RNN)-based embeddings, and Transformers
    • See how Transformer architecture features, especially self-attention, are used to create language models without RNNs
    • Use self-supervision to improve the Transformer architecture in BERT, Megatron, and other variants for superior NLP results
    • Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering
    • Manage inference challenges and deploy refined models for live applications

    Prerequisites:

    • Experience with Python coding and use of library functions and parameters
    • Fundamental understanding of a deep learning framework such as TensorFlow, PyTorch, or Keras.
    • Basic understanding of neural networks.

    Tools, libraries, and frameworks: PyTorch, pandas, NVIDIA NeMo™, NVIDIA Triton™ Inference Server
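
    As a quick illustration of what pretrained Transformer models can do out of the box (shown here with the Hugging Face Transformers library for brevity, not the NeMo/Triton stack used in the workshop):

        from transformers import pipeline

        ner = pipeline("ner", aggregation_strategy="simple")       # downloads a default pretrained NER model
        print(ner("NVIDIA was founded by Jensen Huang in Santa Clara."))
        # -> typically ORG "NVIDIA", PER "Jensen Huang", LOC "Santa Clara"

        classifier = pipeline("sentiment-analysis")                # sequence classification head
        print(classifier("The new drivers made training dramatically faster."))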

  • Fundamentals of Deep Learning for Multi-GPUs 

    Modern deep learning challenges leverage increasingly larger datasets and more complex models. As a result, significant computational power is required to train models effectively and efficiently.

    In this course, you'll learn how to scale deep learning training to multiple GPUs. Training on multiple GPUs can significantly shorten the time required to train on large amounts of data, making it feasible to solve complex problems with deep learning. You'll learn:

    • Approaches to multi-GPU training
    • Algorithmic and engineering challenges to large-scale training
    • Key techniques used to overcome the challenges mentioned above

    Upon completion, you'll be able to effectively parallelize training of deep neural networks using Horovod.

    Prerequisites: Competency in the Python programming language and experience training deep learning models in Python

    Technologies: Python, TensorFlow

DEEP LEARNING BY INDUSTRY

  • Deep Learning for Autonomous Vehicles—Perception

    Learn how to design, train, and deploy deep neural networks for autonomous vehicles using the NVIDIA DRIVE development platform.

    You'll learn how to:

    • Work with CUDA® code, memory management, and GPU acceleration on the NVIDIA DRIVE AGX System
    • Train a semantic segmentation neural network
    • Optimize, validate, and deploy a trained neural network using NVIDIA® TensorRT

    Upon completion, you'll be able to create and optimize perception components for autonomous vehicles using NVIDIA DRIVE.

    Prerequisites: Experience with CNNs and C++

    Technologies: TensorFlow, TensorRT, Python, CUDA C++, DIGITS

  • Deep Learning for Robotics

    AI is revolutionizing the acceleration and development of robotics across a broad range of industries. Explore how to create robotics solutions on a Jetson for embedded applications.

    You’ll learn how to:

    • Apply computer vision models to perform detection
    • Prune and optimize the model for embedded application
    • Train a robot to actuate the correct output based on the visual input

    Upon completion, you’ll know how to deploy high-performance deep learning applications for robotics.

    Prerequisites: Basic familiarity with deep neural networks, basic coding experience in Python or similar language

  • Applications of AI for Anomaly Detection

    The amount of information moving through our world’s telecommunications infrastructure makes it one of the most complex and dynamic systems that humanity has ever built. In this workshop, you’ll implement multiple AI-based solutions to solve an important telecommunications problem: identifying network intrusions.

    In this workshop, you’ll:

    • Implement three different anomaly detection techniques: accelerated XGBoost, deep learning-based autoencoders, and generative adversarial networks (GANs)
    • Build and compare supervised learning with unsupervised learning-based solutions
    • Discuss other use cases within your industry that could benefit from modern computing approaches

    Upon completion, you'll be able to detect anomalies within large datasets using supervised and unsupervised machine learning. 

    Prerequisites: Experience with CNNs and Python

    Technologies: RAPIDS, Keras, GANs, XGBoost
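
    One of the three techniques above, the deep learning-based autoencoder, can be sketched in Keras: train it to reconstruct normal traffic only, then flag records with unusually high reconstruction error. The feature count and threshold choice are assumptions.

        import numpy as np
        import tensorflow as tf
        from tensorflow.keras import layers

        n_features = 40                                 # placeholder number of traffic features
        autoencoder = tf.keras.Sequential([
            layers.Input(shape=(n_features,)),
            layers.Dense(16, activation="relu"),        # compress
            layers.Dense(8, activation="relu"),         # bottleneck
            layers.Dense(16, activation="relu"),
            layers.Dense(n_features, activation=None),  # reconstruct the input
        ])
        autoencoder.compile(optimizer="adam", loss="mse")
        # autoencoder.fit(x_normal, x_normal, epochs=20, batch_size=256)

        def anomaly_scores(x):
            recon = autoencoder.predict(x)
            return np.mean(np.square(x - recon), axis=1)    # per-record reconstruction error
        # Records scoring above, say, the 99th percentile of normal data are flagged as intrusions.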

  • Applications of AI for Predictive Maintenance

    Learn how to identify anomalies and failures in time-series data, estimate the remaining useful life of the corresponding parts, and use this information to map anomalies to failure conditions. 

    You’ll learn how to:

    • Leverage predictive maintenance to manage failures and avoid costly unplanned downtimes 
    • Identify key challenges around identifying anomalies that can lead to costly breakdowns
    • Use time-series data to predict outcomes using machine learning classification models with XGBoost
    • Apply predictive maintenance procedures by using a long short-term memory (LSTM)-based model to predict device failure 
    • Experiment with autoencoders to detect anomalies by using the time-series sequences from the previous steps

    Upon completion, you’ll understand how to use AI to predict the condition of equipment and estimate when maintenance should be performed.

    Prerequisites: Experience with Python and deep neural networks

    Technologies: TensorFlow, Keras
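
    The XGBoost classification step above can be sketched as follows, with synthetic placeholder data standing in for per-window sensor features and failure labels:

        import numpy as np
        import xgboost as xgb
        from sklearn.model_selection import train_test_split

        # Placeholder data: one row per (device, time window) of aggregate sensor features;
        # the label is 1 if the device failed within the prediction horizon, else 0.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 20)).astype(np.float32)
        y = (X[:, 0] + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        model = xgb.XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
        model.fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))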

  • Deep Learning for Industrial Inspection

    Explore how to build a deep learning model to automate the verification of capacitors on NVIDIA printed circuit boards (PCBs) using a real production dataset. This can lower verification costs and increase production throughput across a variety of manufacturing use cases. You'll learn how to:

    • Extract meaningful insights from the provided dataset using Pandas DataFrame and NumPy library
    • Apply transfer-learning to a deep learning classification model known as InceptionV3
    • Fine-tune the deep learning model and set up evaluation metrics
    • Optimize the trained InceptionV3 model on V100 GPU using TensorRT 5
    • Experiment with fast FP16 half-precision inference using the V100’s Tensor Cores

    Upon completion, you'll be able to design, train, test, and deploy building blocks of a hardware-accelerated industrial inspection pipeline.

    Prerequisites: Experience with Python and convolutional neural networks (CNNs)

    Technologies: TensorFlow, NVIDIA TensorRT, Keras

  • Deep Learning for Intelligent Video Analytics

    With the increase in traffic cameras, growing prospect of autonomous vehicles, and promising outlook of smart cities, there's a rise in demand for faster and more efficient object detection and tracking models. This involves identification, tracking, segmentation and prediction of different types of objects within video frames.

    In this workshop, you’ll learn how to:

    • Efficiently process and prepare video feeds using hardware accelerated decoding methods
    • Train and evaluate deep learning models and leverage "transfer learning" techniques to elevate the efficiency and accuracy of these models and mitigate data sparsity issues
    • Explore the strategies and trade-offs involved in developing high-quality neural network models to track moving objects in large-scale video datasets
    • Optimize and deploy video analytics inference engines using the DeepStream SDK

    Upon completion, you'll be able to design, train, test and deploy building blocks of a hardware-accelerated traffic management system based on parking lot camera feeds.

    Prerequisites: Experience with deep networks (specifically variations of CNNs), intermediate-level experience with C++ and Python

    Technologies: Deep learning, intelligent video analytics (IVA), DeepStream 3.0, TensorFlow, FMV, OpenCV, accelerated video decoding/encoding, object detection and tracking, anomaly detection, deployment, optimization, data preparation

  • Deep Learning for Healthcare Image Analysis

    This workshop explores how to apply convolutional neural networks (CNNs) to MRI scans to perform a variety of medical tasks and calculations. You’ll learn how to:

    • Perform image segmentation on MRI images to determine the location of the left ventricle
    • Calculate ejection fractions by measuring differences between diastole and systole using CNNs applied to MRI scans to detect heart disease
    • Apply CNNs to MRI scans of low-grade gliomas (LGGs) to determine 1p/19q chromosome co-deletion status

    Upon completion, you’ll be able to apply CNNs to MRI scans to conduct a variety of medical tasks.

    Prerequisites: Basic familiarity with deep neural networks; basic coding experience in Python or a similar language

    Technologies: R, MXNet, TensorFlow, Caffe, DIGITS

Accelerated Computing Workshops

  • Fundamentals of Accelerated Computing with CUDA C/C++ 

    The CUDA computing platform enables the acceleration of CPU-only applications to run on the world’s fastest massively parallel GPUs. Experience C/C++ application acceleration by:

    • Accelerating CPU-only applications to run their latent parallelism on GPUs
    • Utilizing essential CUDA memory management techniques to optimize accelerated applications
    • Exposing accelerated application potential for concurrency and exploiting it with CUDA streams
    • Leveraging Nsight Systems to guide and check your work

    Upon completion, you’ll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA techniques and Nsight Systems. You’ll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.

    Prerequisites: Basic C/C++ competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations.

    Technologies: C/C++, CUDA

  • Fundamentals of Accelerated Computing with CUDA Python

    This workshop explores how to use Numba—the just-in-time, type-specializing Python function compiler—to accelerate Python programs to run on massively parallel NVIDIA GPUs. You’ll learn how to:

    • Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)
    • Use Numba to create and launch custom CUDA kernels
    • Apply key GPU memory management techniques

    Upon completion, you’ll be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.

    Prerequisites: Basic Python competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations. NumPy competency including the use of ndarrays and ufuncs.

    Technologies: CUDA, Python, Numba, NumPy

  • Accelerating CUDA C++ Applications with Multiple GPUs (New!)

    This workshop covers how to write CUDA C++ applications that efficiently and correctly utilize all available GPUs in a single node, dramatically improving the performance of your applications, and making the most cost-effective use of systems with multiple GPUs.

    By participating in this workshop, you’ll learn how to:

    • Use concurrent CUDA Streams to overlap memory transfers with GPU computation.
    • Utilize all available GPUs on a single node to scale workloads across them.
    • Combine the use of copy/compute overlap with multiple GPUs.
    • Rely on the NVIDIA® Nsight™ Systems Visual Profiler timeline to observe improvement opportunities and the impact of the techniques covered in the workshop.

    Prerequisites:

    • Professional experience programming CUDA C/C++ applications, including the use of the nvcc compiler, kernel launches, grid-stride loops, host-to-device and device-to-host memory transfers, and CUDA error handling.
    • Familiarity with the Linux command line
    • Experience using Makefiles to compile C/C++ code

    Technologies: CUDA C++, nvcc, Nsight Systems

Accelerated Data Science Workshops

  • Fundamentals of Accelerated Data Science with RAPIDS

    RAPIDS is a collection of data science libraries that allows end-to-end GPU acceleration for data science workflows. In this training, you'll:

    • Use cuDF and Dask to ingest and manipulate massive datasets directly on the GPU
    • Apply a wide variety of GPU-accelerated machine learning algorithms, including XGBoost, cuGRAPH, and cuML, to perform data analysis at massive scale
    • Perform multiple analysis tasks on massive datasets in an effort to stave off a simulated epidemic outbreak affecting the UK

    Upon completion, you'll be able to load, manipulate, and analyze data orders of magnitude faster than before, enabling more iteration cycles and drastically improving productivity.

    Prerequisites: Experience with Python, ideally including pandas and NumPy

    Technologies: RAPIDS, NumPy, XGBoost, DBSCAN, K-Means, SSSP, Python

Networking Workshops

ENTERPRISE SOLUTION

If you’re interested in more comprehensive enterprise training, the DLI Enterprise Solution offers a package of training and lectures to meet your organization’s unique needs. From hands-on online and onsite training to executive briefings and enterprise-level reporting, DLI can help your company transform into an AI organization. Contact us to learn more.

PUBLIC WORKSHOPS

If you would like to receive updates on upcoming DLI public workshops, sign up to receive communications.

NVIDIA DLI offers downloadable course materials for university educators and free self-paced, online training to students through the DLI Teaching Kits. Educators can also get certified to deliver DLI workshops on campus through the University Ambassador Program.

Teaching Kits

DLI Teaching Kits are available to qualified university educators interested in course solutions across deep learning, accelerated computing, and robotics. Educators can integrate lecture materials, hands-on courses, GPU cloud resources, and more into their curriculum.

 

Enhancing Curricula with NVIDIA Teaching Kits

University Ambassador Program

The DLI University Ambassador Program certifies qualified educators to deliver hands-on DLI workshops to university faculty, students, and researchers at no cost. Educators are encouraged to download the DLI Teaching Kits to be qualified for participation in the Ambassador Program.

 

Furthering the Frontiers of Education

DLI has certified University Ambassadors at hundreds of universities, including:

Arizona State University
Columbia
The Hong Kong University of Science and Technology
Massachusetts Institute of Technology
NUS - National University of Singapore
University of Oxford

Partners

DLI works with industry partners to build DLI content and deliver DLI instructor-led workshops around the world. Here are some of our leading partners.

CONTACT NVIDIA

GET YOUR QUESTIONS ANSWERED.

TRAINING

Inquire about NVIDIA Deep Learning Institute services.

SALES

Connect with an NVIDIA Sales Representative and get purchase info.

DEVELOPER

For technical questions, please use the NVIDIA Developer Forums.