Accelerating Contact Center AI Workflows with NVIDIA AI Enterprise
Transform Call Centers with Scalable, Finely Tuned AI Agent Systems

Contact centers are undergoing a massive transformation as they integrate advanced AI solutions to enhance user experience, reduce costs, and operate at scale. This webinar introduces a robust contact center AI solution built on NVIDIA technologies. The solution combines state-of-the-art speech services, multi-turn RAG architectures, and fine-tuned LLMs to deliver human-like interactions with high throughput and low latency.

The NVIDIA AI Enterprise software platform consists of NVIDIA NIM™ microservices, NVIDIA NeMo, NVIDIA Dynamo, the NVIDIA® TensorRT™ ecosystem, and other tools that simplify building, sharing, and deploying AI applications.

In this webinar, you'll gain practical insights into designing, optimizing, and scaling AI-powered agentic workflows for voice-based inquiries. Discover how the system transitions from traditional IVR to real-time AI conversations, integrates member data for personalization, and uses NVIDIA Riva, NeMo, and NIM for production-grade deployment and continuous learning.

Register for this webinar to explore the benefits of NVIDIA AI for accelerated inference.
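To make the workflow concrete, here is a minimal sketch of a single voice-agent turn (ASR → retrieval → LLM → TTS). All components are illustrative stand-ins written for this page, not NVIDIA APIs: in the solution described above, transcription and synthesis would be NVIDIA Riva services, retrieval would hit an embedding-backed document store, and generation would call a NIM-hosted LLM.

```python
# Hypothetical sketch of one contact center voice-agent turn.
# Every method body is a placeholder standing in for a real service call.
from dataclasses import dataclass, field

@dataclass
class VoiceAgent:
    knowledge: dict                              # stand-in for the RAG document store
    history: list = field(default_factory=list)  # multi-turn conversation context

    def transcribe(self, audio: str) -> str:
        # Placeholder for streaming ASR (e.g., Riva); here "audio" is already text.
        return audio.lower().strip()

    def retrieve(self, query: str) -> list:
        # Keyword lookup standing in for embedding-based retrieval.
        return [doc for key, doc in self.knowledge.items() if key in query]

    def generate(self, query: str, context: list) -> str:
        # Placeholder for an LLM completion grounded in the retrieved context.
        self.history.append(query)
        if context:
            return f"Based on your plan: {context[0]}"
        return "Let me connect you with a specialist."

    def synthesize(self, text: str) -> bytes:
        # Placeholder for TTS; returns fake audio bytes.
        return text.encode("utf-8")

    def handle_turn(self, audio: str) -> bytes:
        query = self.transcribe(audio)
        context = self.retrieve(query)
        reply = self.generate(query, context)
        return self.synthesize(reply)

agent = VoiceAgent(knowledge={"copay": "Your copay for a primary-care visit is $20."})
audio_out = agent.handle_turn("What is my copay?")
```

In a production deployment each of these steps runs as a low-latency streaming service; the webinar covers how those pieces are composed and scaled.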
Time will be available at the end of the webinar for Q&A.

Who Should Attend
AI/ML engineers, contact center technology leads, healthcare IT architects, and developers with experience in conversational AI and Python.

Prerequisites
- Python programming proficiency
- Understanding of how AI agents, LLMs, and speech processing interact in contact center AI (CCAI)
- Familiarity with GPU computing and enterprise AI deployments

Learnings
In this webinar, you'll gain a deep understanding of NVIDIA's AI Enterprise stack and learn:
- Designing AI workflows to handle high-volume contact center calls with real-time ASR and TTS.
- Leveraging multi-turn RAG and embedded policy/member data to power accurate responses.
- Fine-tuning LLMs (e.g., Llama 3 8B) for healthcare-specific tone, empathy, and response fidelity.
- Understanding the technical architecture for integrating AI modules into on-premises Cisco-based systems.
- Using NVIDIA Riva, NeMo Guardrails, and NIM for secure, scalable, low-latency deployment.
- Reviewing real-world metrics on latency, throughput, and accuracy improvements before and after LLM fine-tuning.

(C) NVIDIA Corporation 2025. All rights reserved. No recording of this webinar should be made or reposted without the express written consent of NVIDIA Corporation.