NVIDIA Omniverse Audio2Face

Instantly create expressive facial animation from just an audio source using generative AI.

Audio-to-Animation Made Easy With Generative AI

NVIDIA Omniverse Audio2Face beta is a foundation application for animating a 3D character's facial features to match any voice-over track, whether for a game, film, real-time digital assistant, or just for fun. You can use the Universal Scene Description (OpenUSD)-based app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out; it's up to you.

How It Works

Audio2Face is preloaded with "Digital Mark," a 3D character model that can be animated with your audio track, so getting started is simple: just select your audio and upload it. The audio input is fed into a pretrained deep neural network, and the output drives the 3D vertices of your character mesh to create the facial animation in real time. You can also adjust various post-processing parameters to fine-tune your character's performance. The results you see on this page are mostly raw Audio2Face output, with little to no post-processing applied.
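The data flow described above can be sketched in a few lines: a window of audio samples goes into a trained network, and the network's output is interpreted as per-vertex offsets applied to a neutral face mesh. Everything below is illustrative only; the stand-in "network" is a random linear map, not the proprietary Audio2Face model, and the sample rate, window size, and tiny mesh are assumptions for the sketch.

```python
import numpy as np

# Conceptual sketch only: the real Audio2Face network is proprietary.
# A random linear map stands in for the trained model.

SAMPLE_RATE = 16_000   # audio samples per second (assumed)
WINDOW = 0.5           # seconds of audio context per animation frame (assumed)
N_VERTICES = 4         # tiny stand-in for a real face mesh

rng = np.random.default_rng(0)
weights = rng.normal(size=(N_VERTICES * 3, int(SAMPLE_RATE * WINDOW)))

def animate_frame(audio_window: np.ndarray, neutral: np.ndarray) -> np.ndarray:
    """Map one window of audio to per-vertex XYZ offsets on the neutral mesh."""
    offsets = (weights @ audio_window).reshape(N_VERTICES, 3)
    return neutral + 0.001 * offsets  # small displacements around neutral

neutral_mesh = np.zeros((N_VERTICES, 3))
audio = rng.normal(size=int(SAMPLE_RATE * WINDOW))  # fake half-second clip
deformed = animate_frame(audio, neutral_mesh)
print(deformed.shape)  # (4, 3): one XYZ position per vertex
```

In the real application the network is trained on captured facial performance data; this sketch only shows the shape of the data flow from audio window to deformed mesh.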

Omniverse Audio2Face App
 
Audio2Face generates facial animation through audio input.

Audio Input

Use a Recording, or Animate Live

Simply record a voice track, feed it into the app, and watch your 3D face come alive. You can even generate facial animation live using a microphone.

Audio2Face is designed to process any language with ease, and we're continually adding support for more languages.

Character Transfer

Face-Swap in an Instant

Audio2Face lets you retarget to any 3D human or human-esque face, whether realistic or stylized. This makes swapping characters on the fly—whether human or animal—take just a few clicks.

 
Animate any character face with Audio2Face
 
Use multiple instances to generate facial animation for more than one character

Scale Output

Express Yourself—or Everyone at Once

It’s easy to run multiple instances of Audio2Face with as many characters in a scene as you like, all animated from the same audio track or from different ones. Breathe life and sound into dialogue between a duo, a sing-off between a trio, an in-sync quartet, and beyond. Plus, you can dial the level of facial expression on each face up or down and batch-output multiple animation files from multiple audio sources.

Emotion Control

Bring the Drama

Audio2Face lets you choose and animate your character’s emotions in the wink of an eye. The AI network automatically manipulates the face, eyes, mouth, tongue, and head motion to match your selected emotional range and customized level of intensity, or it can infer emotion directly from the audio clip.

 
Add emotions to your animation.

Data Conversion

Connect and Convert

The latest update to Omniverse Audio2Face enables blendshape conversion and blendweight export. The app also supports blendshape export and import for Blender and Epic Games’ Unreal Engine, generating motion for characters through their respective Omniverse Connectors.
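Blendshape-based export reduces each animation frame to a set of named weights that downstream tools like Blender or Unreal Engine can evaluate. As a rough illustration, the standard blendshape formula, final = neutral + sum_i w_i * (shape_i - neutral), can be evaluated as below; the toy mesh, shape name, and weight dictionary are hypothetical, not the actual Audio2Face export format.

```python
import numpy as np

# Standard blendshape evaluation on a 2-vertex toy mesh.
# Names and sizes are illustrative, not the Audio2Face export format.

neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0]])       # neutral (rest) pose
jaw_open = np.array([[0.0, -0.2, 0.0],
                     [1.0, -0.1, 0.0]])     # fully-open target shape
deltas = {"jawOpen": jaw_open - neutral}    # per-shape offsets from neutral

def evaluate(weights: dict) -> np.ndarray:
    """Blend the neutral mesh by per-frame blendshape weights."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    return mesh

# One exported frame: jaw half open.
frame = evaluate({"jawOpen": 0.5})
print(frame[0])  # halfway toward the jaw-open pose: [ 0.  -0.1  0. ]
```

A full export would carry one such weight dictionary per frame, which is far more compact than per-vertex animation and is why blendweight export matters for engine interchange.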

Announcing NVIDIA Omniverse Avatar Cloud Engine Early Access

Join the program and experience how Omniverse Avatar Cloud Engine (ACE) eases avatar development, delivering all the AI building blocks necessary to create, customize and deploy interactive avatars.

See Audio2Face in Action

Creatures and Aliens

Drive facial animation of fantastical creatures and aliens. Here we have Digital Mark driving the performance of the Alien.

Misty the Animated Chatbot

Presented at GTC Spring 2020, Misty is an interactive weather bot driven by Audio2Face at runtime. We demonstrated retargeting from a realistic human mesh to a stylized character mesh for use as an interactive service agent.

Omniverse Machinima

Unveiled during the GeForce 30 Series launch, Audio2Face is featured in the Omniverse Machinima demo. Facial animation is notoriously complex and cost-prohibitive; Audio2Face automates detailed facial animation, democratizing the 3D content creation process.

NVIDIA Omniverse is helping me achieve more natural results for my digital humans and speeding up my workflow so that I can spend more time on the creative process.

— Anderson Rohr, 3D Artist

The Conference for the Era of AI and the Metaverse

Developer Conference March 20-23 | Keynote March 21

Don't miss these three upcoming Omniverse Creators sessions at GTC.

3D Art Goes Multiplayer - Behind the Scenes of Adobe Substance's "End of Summer" Project with Omniverse

Join the Adobe Substance 3D team as they break down the process of creating “End of Summer,” an inspiring 3D project completed using NVIDIA Omniverse, Unreal Engine 5, and the Substance 3D Collection in an end-to-end Universal Scene Description (USD) pipeline.

3D by AI: How Generative AI will Make Building Virtual Worlds Easier

As the next generation of artist tools and apprentices, AI will enable us to build 3D virtual worlds bigger, faster, and easier than ever before. Join this session to see NVIDIA’s latest work in generative AI models for creating 3D content and scenes, and see how these tools and research can help 3D artists in their workflows.

Custom World Building with AI Avatars: The Little Martians Sci-Fi Project

We'll explore the role of physical spaces and objects in metaverse creation. Using 3D scanning technologies, one can easily create an editable digital twin of any space, person, or object. With NVIDIA’s Omniverse tools, especially Audio2Face, it’s possible to animate custom avatars with their own voices and intelligence.

System Requirements
| Element | Minimum Specifications | Recommended |
| --- | --- | --- |
| OS Supported | Windows 10 (Version 1903 and above) | Windows 10 (Version 1903 and above) |
| CPU | Intel Core i5 10th Series or AMD Ryzen 5 5th Series | Intel Core i7 13th Series or AMD Ryzen 7 7th Series |
| RAM | 16 GB | 32 GB |
| Storage | 250 GB SSD | 500 GB SSD |
| GPU | Any RTX GPU with 8 GB | GeForce RTX 4070 Ti, NVIDIA RTX A4500 or higher |
| Min. Video Driver Version | See the latest drivers here | See the latest drivers here |

Dive into Step-by-Step Tutorials

Discover More Omniverse Apps

Explore the entire Omniverse ecosystem of connections and third-party tools, or try out the other foundation applications below.

Code

An integrated development environment for developers and power users to easily build Omniverse extensions, apps, or microservices.

Machinima

Remix, recreate, and redefine animated video game storytelling with an AI-powered toolkit for creators.

USD Composer

Accelerate advanced world-building with Pixar USD and interactively assemble, simulate, and render scenes in real time.

USD Presenter

Collaboratively review design projects with this powerful, physically accurate, and photorealistic visualization tool.

Get Live Help

Connect with Omniverse experts live to get your questions answered. 

Explore Resources

Learn at your own pace with free getting started material.

Download Omniverse Audio2Face

Find the right license to fit your 3D workflows and start exploring Omniverse right away.

Frequently Asked Questions

  • What is an Omniverse foundation application?

    Omniverse foundation applications are best-practice example implementations and configurations of Omniverse extensions. They are provided as generic templates that developers and customers can customize, extend, and personalize to fit their workflows.

    Foundation applications can be used out of the box, but to get the most value from the Omniverse platform, customization and extension are highly encouraged. Every developer, customer, and user will have their own interpretation of Omniverse foundation applications.

    Omniverse foundation applications can be explored here.

  • How do I install Omniverse Audio2Face?

    To install Omniverse Audio2Face, follow the steps below:

    • Download NVIDIA Omniverse and run the installer
    • Once installed, open the Omniverse Launcher
    • Head to the Omniverse Exchange and find Omniverse Audio2Face in the Apps section
    • Click Install, then launch the app

Become Part of Our Community

Access Tutorials

Take advantage of hundreds of free tutorials, sessions, or our beginner’s training to get started with USD.

Become an Omnivore

Join our community! Attend our weekly live streams on Twitch and connect with us on Discord and our forums.

Get Technical Support

Having trouble? Post your questions in the forums for quick guidance from Omniverse experts, or refer to the platform documentation.

Showcase Your Work

Created an Omniverse masterpiece? Submit it to the Omniverse Gallery, where you can get inspired and inspire others.

The Design and Simulation Conference for the Era of AI and the Metaverse

Connect your creative worlds to a universe of possibility with NVIDIA Omniverse.

  • An Artist's Omniverse: How to Build Large-Scale, Photoreal Virtual Worlds

    • Gabriele Leone, Senior Art Director, NVIDIA

Hear from NVIDIA's expert environment artists and see how 30 artists built an iconic multi-world demo in three months. Dive into a workflow featuring Adobe Substance 3D Painter, Photoshop, Autodesk 3ds Max, Maya, Blender, Modo, Maxon ZBrush, SideFX Houdini, and NVIDIA Omniverse Create, and see how the artists pulled off delivery of a massive scene that showcases the latest in NVIDIA RTX, AI, and physics technologies.

    View Details >

  • Next Evolution of Universal Scene Description (USD) for Building Virtual Worlds

    • Aaron Luk, Senior Engineering Manager, Omniverse, NVIDIA

Universal Scene Description is more than just a file format. This open, powerful, easily extensible world composition framework has APIs for creating, editing, querying, rendering, simulating, and collaborating within virtual worlds. NVIDIA continues to invest in helping evolve USD for workflows beyond Media & Entertainment—to enable the industrial metaverse and the next wave of AIs. Join this session to see why we are "all in" on USD, learn about our USD development roadmap, and hear about our recent projects and initiatives at NVIDIA and with our ecosystem of partners.

    View Details >

  • Foundations of the Metaverse: The HTML for 3D Virtual Worlds

    • Michael Kass, Senior Distinguished Engineer, NVIDIA
    • Rev Lebaredian, VP Simulation Technology and Omniverse Engineering, NVIDIA
    • Guido Quaroni, Senior Director of Engineering of 3D & Immersive, Adobe
    • Steve May, Vice President, CTO, Pixar
    • Mason Sheffield, Director of Creative Technology, Lowe’s Innovation Labs, Lowe's
    • Natalya Tatarchuk, Distinguished Technical Fellow and Chief Architect, Professional Artistry & Graphics Innovation, Unity
    • Matt Sivertson, Vice President and Chief Architect, Media & Entertainment, Autodesk
    • Mattias Wikenmalm, Senior Expert, Volvo Cars

    Join this session to hear from a panel of distinguished technical leaders as they talk about Universal Scene Description (USD) as a standard for the 3D evolution of the internet—the metaverse. These luminaries will discuss why they are investing in or adopting USD, and what technological advancements need to come next to see its true potential unlocked.

    View Details >

  • How to Build Simulation-Ready USD 3D Assets

    • Renato Gasoto, Robotics & AI Engineer, NVIDIA 
    • Beau Perschall, Director, Omniverse Sim Data Ops, NVIDIA

The next wave of industries and AI requires us to build physically accurate virtual worlds indistinguishable from reality. Creating virtual worlds is hard, and today's existing universe of 3D assets is inadequate, capturing only the visual representation of an object. Whether building digital twins or virtual worlds for training and testing autonomous vehicles or robots, 3D assets need many more technical properties, which calls for novel processes, techniques, and tools. NVIDIA is introducing a new class of 3D assets called "SimReady" assets—the building blocks of virtual worlds. SimReady assets are more than just 3D objects: they encompass accurate physical properties, behavior, and connected data streams built on Universal Scene Description (USD). We'll show you how to get started with SimReady USD assets, and present the tools and techniques required to develop and test them.

    View Details >

  • How Spatial Computing is Going Hyperscale

    • Omer Shapira, Senior Engineer, Omniverse, NVIDIA

    Recent advances in compute pipelines have enabled leaps in body-centered technology such as fully ray-traced virtual reality (VR). Simultaneously, network bottlenecks have decreased to the point that streaming pixels directly from datacenters to HMDs is a reality. Join this talk to explore the potential of body-centered computing at data center-scale—and what applications, experiences, and new science it enables.

    View Details >


Connect With Us

Stay up-to-date on the latest NVIDIA Omniverse news.