Conversational AI is the technology that powers automated messaging and speech-enabled applications, and it is used across diverse industries to improve the customer experience and the efficiency of customer service.
Conversational AI pipelines are complex and expensive to develop from scratch. In this course, you’ll learn how to build conversational AI services using the NVIDIA® Riva framework. With Riva, developers can create customized language-based AI services for intelligent virtual assistants, virtual customer service agents, real-time transcription, multi-user diarization, chatbots, and much more.
In this workshop, you’ll learn how to quickly build and deploy a conversational AI pipeline comprising automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech (TTS). You’ll explore ASR and TTS models and their customization in detail with the NVIDIA NeMo framework and learn how to deploy the models with Riva. Finally, you’ll examine the performance and scaling considerations of deploying Riva services in production with Helm charts and Kubernetes clusters.
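As a preview of the kind of pipeline the workshop covers, the sketch below shows how a deployed Riva ASR service might be queried from Python. It is a minimal sketch, not workshop material: it assumes a Riva server is already running at localhost:50051, that the nvidia-riva-client package is installed, and that a 16 kHz mono recording named audio.wav exists; exact class and field names can vary between Riva releases.

```python
import riva.client

# Connect to a running Riva server (the address is an assumption for this sketch).
auth = riva.client.Auth(uri="localhost:50051")
asr_service = riva.client.ASRService(auth)

# Offline (batch) recognition configuration for a 16 kHz mono English recording.
config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

# Read the raw audio bytes and send them to the ASR service for transcription.
with open("audio.wav", "rb") as f:
    audio_bytes = f.read()

response = asr_service.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)
```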
Learning Objectives
By participating in this workshop, you’ll learn:
- How to customize and deploy ASR and TTS models on Riva.
- How to build and deploy an end-to-end conversational AI pipeline, including ASR, NLP, and TTS models, on Riva (see the sketch after this list).
- How to deploy a production-level conversational AI application with a Helm chart for scaling in Kubernetes clusters.
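To complement the pipeline objective above, here is a minimal sketch of the speech-synthesis side using the Riva Python client. It assumes a running Riva server at localhost:50051 and an English TTS voice already deployed; the voice name below is an assumption and depends on which models your Riva deployment includes.

```python
import wave

import riva.client

# Connect to a running Riva server (the address is an assumption for this sketch).
auth = riva.client.Auth(uri="localhost:50051")
tts_service = riva.client.SpeechSynthesisService(auth)

# Synthesize a reply; the voice name is assumed and must match a voice on your server.
response = tts_service.synthesize(
    "Hello! How can I help you today?",
    voice_name="English-US.Female-1",
    language_code="en-US",
    sample_rate_hz=44100,
)

# The response carries raw 16-bit PCM samples; save them as a mono WAV file.
with wave.open("reply.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(44100)
    out.writeframes(response.audio)
```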