Robot Learning

Train robot policies in simulation.

Workloads

Robotics
Simulation / Modeling / Design

Industries

Manufacturing
Smart Cities/Spaces
Healthcare and Life Sciences
Retail/Consumer Packaged Goods

Business Goal

Innovation
Return on Investment

Products

NVIDIA Isaac Lab
NVIDIA OSMO
NVIDIA Isaac GR00T
NVIDIA Jetson Thor

Build Generalist Robot Policies

Preprogrammed offline robots are designed to execute predefined tasks and a fixed set of instructions within a predetermined environment. This means they’re likely to struggle when encountering an unexpected change in their surroundings.

AI-driven, generalized robots can overcome the limitations of preprogrammed robot behaviors. To achieve this, simulation-based robot learning is necessary to enable these robots to perceive, plan, and act autonomously under dynamic conditions. 

Robot learning lets these robots gain and refine new capabilities by learning robot policies that improve their performance across a variety of scenarios. These policies are learned sets of behaviors, including navigation, dexterous manipulation, locomotion, and many others, that define how a robot should make decisions in various situations.
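
As a rough illustration, a policy can be thought of as a neural network that maps an observation to an action. The sketch below (in PyTorch, with made-up observation and action dimensions) shows that mapping for a single control step; it is an assumption-laden example, not code from any NVIDIA library.

```python
import torch
import torch.nn as nn

class MLPPolicy(nn.Module):
    """A learned policy: maps an observation vector to an action vector."""
    def __init__(self, obs_dim: int = 48, act_dim: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim),  # e.g., joint position targets
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# One control step: observe, decide, act.
policy = MLPPolicy()
obs = torch.randn(1, 48)     # placeholder observation (proprioception, commands, ...)
action = policy(obs)         # 12 actuator targets for, say, a quadruped
```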

Benefits of Simulation-Based Robot Learning

Flexibility and Scalability

Iterate on, refine, and deploy robot policies for real-world scenarios using a variety of data sources, combining data captured on your real robot with synthetic data generated in simulation, for any robot embodiment, such as autonomous mobile robots (AMRs), robotic arms, and humanoid robots. The simulation-based approach also lets you train hundreds or thousands of robot instances in parallel.
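
The sketch below illustrates the parallel-training idea only: a hypothetical batched environment keeps every robot instance in one tensor, so thousands of instances advance in a single step call. Real GPU-parallel simulators work differently under the hood; the class and dynamics here are placeholders.

```python
import torch

NUM_ENVS = 4096  # thousands of robot instances simulated in parallel

class BatchedToyEnv:
    """Hypothetical batched environment: all instances stored as one tensor."""
    def __init__(self, num_envs: int, obs_dim: int = 48, device: str = "cpu"):
        self.state = torch.zeros(num_envs, obs_dim, device=device)

    def step(self, actions: torch.Tensor):
        # One tensor op advances every instance at once (stand-in for real physics).
        self.state = 0.99 * self.state + 0.01 * torch.tanh(actions).repeat(1, 4)
        rewards = -self.state.abs().mean(dim=1)
        return self.state, rewards

env = BatchedToyEnv(NUM_ENVS)
actions = torch.randn(NUM_ENVS, 12)
obs, rewards = env.step(actions)   # all 4,096 instances advance in one call
```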

Accelerated Skill Development

Train robots in simulated environments to adapt to new task variations without the need for reprogramming physical robot hardware. 

Physically Accurate Environments

Easily model physical factors such as object interactions (rigid or deformable), friction, and more to significantly reduce the sim-to-real gap.

Safe Proving Environment

Safely test potentially hazardous scenarios without risking human safety or damaging equipment.

Reduced Costs

Avoid the burden of real-world data collection and labeling by generating large amounts of synthetic data, validating trained robot policies in simulation, and deploying them on robots faster.

Robot Learning Algorithms

Robot learning algorithms, such as imitation learning and reinforcement learning, help robots generalize learned skills and improve their performance in changing or novel environments. Common learning techniques include:

  • Reinforcement learning: A trial-and-error approach in which the robot receives a reward or a penalty based on the actions it takes (see the sketch after this list). 
  • Imitation learning: The robot can learn from human demonstrations of tasks. 
  • Supervised learning: The robot can be trained using labeled data to learn specific tasks.
  • Diffusion policy: The robot uses generative models to create and optimize robot actions for desired outcomes.
  • Self-supervised learning: When there are limited labeled datasets, robots can generate their own training labels from unlabeled data to extract meaningful information.
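
As a minimal, self-contained illustration of the trial-and-error idea in the first bullet, the sketch below runs tabular Q-learning on a toy corridor task. The task, rewards, and hyperparameters are invented for the example and are not tied to any NVIDIA library.

```python
import numpy as np

# Toy task: a 5-cell corridor; reaching the rightmost cell pays reward +1.
N_STATES, ACTIONS = 5, (-1, +1)      # actions: move left or move right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Trial-and-error: mostly exploit the current estimate, sometimes explore.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward signal from the environment
        # Update the value estimate toward reward + discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: move right (action index 1) in every cell
```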

Robots Learn and Adapt

General-purpose robots need to adapt to and interact with novel environments, and therefore rely on simulation-based robot learning tools and scalable workflows.

A typical end-to-end robot workflow involves data processing, AI model training, parallel processing with NVIDIA GPUs, and deploying on a real robot.

To bridge data gaps, draw on a diverse set of high-quality data sources by combining internet-scale data, synthetic data, and live robot data.
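
One simple way to combine scarce real-robot data with plentiful synthetic data during training is weighted sampling. The sketch below uses standard PyTorch utilities; the datasets are random placeholders and the roughly 50/50 mixing ratio is just an assumption for illustration.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, WeightedRandomSampler, DataLoader

# Placeholder datasets standing in for real-robot logs and simulator output.
real_data = TensorDataset(torch.randn(1_000, 48), torch.randn(1_000, 12))
synthetic_data = TensorDataset(torch.randn(50_000, 48), torch.randn(50_000, 12))
combined = ConcatDataset([real_data, synthetic_data])

# Upweight the scarce real samples so each batch mixes both sources.
weights = torch.cat([
    torch.full((len(real_data),), 1.0 / len(real_data)),
    torch.full((len(synthetic_data),), 1.0 / len(synthetic_data)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=256, sampler=sampler)

obs, actions = next(iter(loader))   # roughly half real, half synthetic samples
```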

Robots need to be trained and deployed for task-defined scenarios and require accurate virtual representations of real-world conditions. NVIDIA Isaac™ Lab is an open-source, modular framework that helps train robot policies using reinforcement learning and imitation learning techniques.

Isaac Lab is built on Isaac Sim™, a reference application built on NVIDIA Omniverse™ that enables developers to design, simulate, test, and train AI-driven robots in physically accurate environments. Isaac Lab ships with 16+ robot simulation models and 25+ environments, and offers a range of sensor models, including RGB cameras, contact sensors, tactile sensors, height scanners, and raycast sensors.

Isaac Lab can be used with the NVIDIA Isaac Sim or MuJoCo simulation platforms for rapid prototyping and deployment of robot policies.
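
For a sense of what such a training loop looks like, the sketch below runs a basic REINFORCE update against Gymnasium's CartPole task. It illustrates the collect-rollouts-then-update structure only; Isaac Lab's own tasks, wrappers, and integrated RL libraries differ, so treat this as a generic example rather than Isaac Lab's API.

```python
import gymnasium as gym
import torch
import torch.nn as nn

# CartPole stands in for a robot-learning task with a reset/step interface.
env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optim = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(300):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        # Sample an action from the current policy and step the simulation.
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # REINFORCE update: raise the probability of actions that led to high returns.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optim.zero_grad()
    loss.backward()
    optim.step()
```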

NVIDIA OSMO is a cloud-native platform that orchestrates multi-container workflows across diverse compute environments for tasks like synthetic data generation, model training, robot learning, and software/hardware-in-the-loop testing.

The trained robot policies and AI models are ready to be deployed on NVIDIA Jetson™ on-robot computers, enabling effective transfer from the virtual world to the real robot.
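
As one common hand-off step (an assumption here, not a required workflow), a trained PyTorch policy can be exported to ONNX so an on-robot runtime such as TensorRT on Jetson can load it. The policy architecture, file name, and input shape below are illustrative.

```python
import torch
import torch.nn as nn

# Placeholder for a trained policy network (architecture is illustrative).
policy = nn.Sequential(nn.Linear(48, 256), nn.ELU(), nn.Linear(256, 12))
policy.eval()

# Export with a fixed observation shape so an on-robot runtime can load it.
example_obs = torch.randn(1, 48)
torch.onnx.export(
    policy,
    example_obs,
    "policy.onnx",
    input_names=["observation"],
    output_names=["action"],
)
```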

NVIDIA Isaac GR00T for Humanoid Robot Developers

NVIDIA Isaac GR00T is a platform of general-purpose robot foundation models and data pipelines that accelerates humanoid robot development.

If you’re a humanoid robot company or building software for humanoid robots, the NVIDIA Humanoid Robot Developer Program gives you access to advanced tools and technologies, including Isaac Sim, Isaac Lab, OSMO, and more.

Get Started

Build adaptable robots with robust, perception-enabled, simulation-trained policies using NVIDIA Isaac Lab, an open-source modular framework for robot learning.

Resources

Synthetic Data

Close the sim-to-real gap by creating physically accurate virtual scenes and objects to train AI models while saving on training time and costs. 

Reinforcement Learning

Apply reinforcement learning (RL) techniques to any type of robot embodiment and build robot policies.

Simulation

Isaac Sim is a robot simulation framework built on top of NVIDIA Omniverse that provides high-fidelity photorealistic simulations to train humanoid robots.

Humanoid Robots

Accelerate humanoid robot development using NVIDIA tools, libraries, and three computers—NVIDIA DGX™ for AI training, OVX™ for simulation, and Jetson Thor for deploying multimodal AI on humanoid robots.
