How to Operationalize Large Language Models Safely and Responsibly
, CEO, Fiddler
How do we develop AI models and systems that take fairness, accuracy, explainability, robustness, and privacy into account? How do we operationalize models in production and address their governance, management, and monitoring? Model validation, monitoring, and governance are essential for building trust in and driving adoption of AI systems in high-stakes domains such as financial services, manufacturing, and healthcare. In this talk, we'll present the risks associated with operationalizing large language models (LLMs) and other generative AI-based applications, highlight the challenges various stakeholders face when operationalizing AI/ML models from a human-centric perspective, and emphasize the need to adopt responsible AI practices not only during model validation but also post-deployment as part of model monitoring. We'll conclude with a brief overview of techniques and tools for monitoring deployed ML models, industry case studies, key takeaways, and open challenges.