Build a World of Interactive Avatars Based on NVIDIA Omniverse, AIGC, and LLM
, Solution Architect, NVIDIA
, Engineering VP, Mobvoi
With the emergence of large language models (LLMs) and AI-generated content (AIGC) technologies, it has become possible to build more intelligent, immersive, and interactive digital humans. Together with Mobvoi, a company focused on speech AI and generative AI technologies, we'll discuss the challenges of building an interactive digital avatar, including how to replicate a person's voice, expressions, and general behavior, and how to respond to a user's questions or commands.
Join this session to:
• Learn how to drive and render a real-time, realistic digital human via Omniverse Audio2Face.
• See how to leverage speech and conversational AI capabilities to create an interactive, autonomous avatar, using tools such as NVIDIA Riva, Mobvoi's TTS service that can clone a voice within three seconds, and ChatGLM.
• See our explorations of combining AIGC/LLM technologies with digital avatars, such as video/motion generation, command understanding, and code generation.
• Get a behind-the-scenes look at how we combine Audio2Face with Unreal Engine.