Built on NVIDIA AI, graphics, and simulation technologies, NVIDIA ACE encompasses technology for every part of the digital human: speech, translation, vision, and intelligence; realistic animation and behavior; and lifelike appearance.
NVIDIA® Riva
Technologies that enable digital humans to understand human language, translate responses into up to 32 languages, and reply with natural-sounding speech.
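As a rough illustration of where Riva fits in this pipeline, the sketch below transcribes a user's recorded question with the nvidia-riva-client Python package. The server address, audio format, and file name are assumptions, and the exact client classes and fields may differ between Riva releases.

```python
# Minimal sketch: offline speech recognition against a running Riva server.
# Assumes a Riva server at localhost:50051 and a 16 kHz mono PCM WAV file.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")        # connection details are assumed
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,                           # must match the recording
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

with open("user_question.wav", "rb") as f:             # hypothetical input file
    audio_bytes = f.read()

response = asr.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)           # best transcription hypothesis
```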
NVIDIA Nemotron
A family of large language models (LLMs) and small language models (SLMs) that give digital humans their intelligence, enabling contextually aware responses for humanlike conversations.
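As one hedged example of how a Nemotron model could supply that intelligence, the sketch below sends a customer question to an OpenAI-compatible chat endpoint. The base URL, model name, and API key shown are assumptions; the same models can also be served locally depending on deployment.

```python
# Minimal sketch: asking a Nemotron instruct model for a contextually aware reply
# through an OpenAI-compatible API. Endpoint, model name, and key are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",    # assumed hosted endpoint
    api_key="nvapi-...",                                # placeholder credential
)

completion = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",     # example Nemotron model id
    messages=[
        {"role": "system",
         "content": "You are a store concierge avatar. Answer briefly and politely."},
        {"role": "user", "content": "Where do I pick up an online order?"},
    ],
    temperature=0.5,
    max_tokens=150,
)

print(completion.choices[0].message.content)            # text handed off to speech synthesis
```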
NVIDIA Audio2Face™
Technologies that provide digital humans with dynamic facial animations and accurate lip sync. With just an audio input, a 2D or 3D avatar animates realistically.
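The interface below is a purely hypothetical sketch of that idea, not the actual Audio2Face API: audio is consumed in chunks, and each animation frame comes back as a set of facial blendshape weights that drive the avatar's lips and expressions.

```python
# Hypothetical sketch of audio-driven facial animation: it mirrors the concept
# (audio in, per-frame facial weights out), not the real Audio2Face interface.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class AnimationFrame:
    timestamp_ms: int                 # position within the source audio
    blendshape_weights: List[float]   # e.g. 52 ARKit-style facial coefficients

def animate_from_audio(
    pcm_chunks: Iterable[bytes],
    infer_frame: Callable[[bytes], List[float]],   # stand-in for the model call
    frame_interval_ms: int = 33,                   # ~30 fps animation
) -> List[AnimationFrame]:
    """Map chunks of 16-bit PCM audio to facial animation frames."""
    frames: List[AnimationFrame] = []
    t = 0
    for chunk in pcm_chunks:
        weights = infer_frame(chunk)               # model maps audio -> face pose
        frames.append(AnimationFrame(timestamp_ms=t, blendshape_weights=weights))
        t += frame_interval_ms
    return frames
```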
NVIDIA RTX™
A collection of rendering technologies that enable real-time path-traced subsurface scattering to simulate how light penetrates the skin and hair, giving digital humans a more realistic appearance.
Developers can use NVIDIA digital human technologies to build their own solutions from the ground up, or they can start from NVIDIA's suite of domain-specific AI workflows: next-generation interactive avatars for customer service, humanoid robots for virtual factories, digital experiences in virtual presence applications, and AI non-player characters in games.

Generative AI models can be both compute and memory intensive, and running AI and graphics on the same local system requires a powerful GPU with dedicated AI hardware. ACE therefore lets models run in the cloud or on the PC, depending on local GPU capabilities, so the user gets the best possible experience.
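As a hedged illustration of that cloud-versus-PC decision, the sketch below checks how much memory the local GPU has (via the pynvml bindings) and falls back to a cloud endpoint when it is insufficient. The memory threshold and endpoint URL are assumptions for illustration only.

```python
# Illustrative sketch of the cloud/PC placement decision described above:
# run a model locally only when the GPU has enough memory, otherwise fall
# back to a cloud endpoint. Threshold and endpoint are assumed values.
import pynvml

CLOUD_ENDPOINT = "https://example-ace-backend/v1"    # placeholder URL
LOCAL_VRAM_REQUIRED_GB = 12                          # assumed model footprint

def choose_inference_target() -> str:
    try:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
        pynvml.nvmlShutdown()
        if total_gb >= LOCAL_VRAM_REQUIRED_GB:
            return "local"        # enough dedicated GPU memory: run on device
    except pynvml.NVMLError:
        pass                      # no NVIDIA GPU or driver available
    return CLOUD_ENDPOINT         # otherwise stream from the cloud

print(choose_inference_target())
```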