Deploying Hugging Face Models to Production at Scale with GPUs
, Co-Founder and CTO, Iguazio
Seems like everyone's using Hugging Face to simplify access to advanced models, reuse them, and work collectively as a community. But how do you deploy these models into real business environments, along with the required data and application logic? How do you serve them continuously, efficiently, and at scale? How do you manage their life cycle in production (deploy, monitor, retrain)? And how do you leverage GPUs efficiently for your Hugging Face deep learning models?
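The "model plus application logic" serving pattern the abstract alludes to can be sketched as a small handler that wraps a model with validation, pre-processing, and micro-batching. This is an illustrative sketch only, not Iguazio's implementation: `toy_sentiment_model`, `ServingHandler`, and the request shape are all hypothetical stand-ins (in practice the model would be, e.g., a `transformers.pipeline` instance).

```python
# Minimal serving-handler sketch: the model ships together with the
# application logic (validation, normalization, batching) around it.
# `toy_sentiment_model` is a hypothetical stand-in for a real
# Hugging Face model or pipeline.

def toy_sentiment_model(texts):
    # Stand-in "model": labels by presence of the word "good".
    return [{"label": "POSITIVE" if "good" in t.lower() else "NEGATIVE"}
            for t in texts]

class ServingHandler:
    def __init__(self, model, max_batch=8):
        self.model = model
        self.max_batch = max_batch

    def preprocess(self, request):
        # Application logic: validate and normalize the incoming payload.
        texts = request.get("inputs", [])
        if not isinstance(texts, list):
            texts = [texts]
        return [str(t).strip() for t in texts]

    def predict(self, request):
        texts = self.preprocess(request)
        results = []
        # Micro-batching keeps accelerator utilization high under load.
        for i in range(0, len(texts), self.max_batch):
            results.extend(self.model(texts[i:i + self.max_batch]))
        return {"outputs": results}

handler = ServingHandler(toy_sentiment_model)
print(handler.predict({"inputs": ["This talk was good!", "Meh."]}))
```

The same handler shape maps naturally onto serverless serving frameworks, where `preprocess` and `predict` become the hooks the runtime invokes per request.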
We’ll share MLOps orchestration best practices that enable you to automate the continuous integration and deployment of your Hugging Face models, together with their application logic, into production, and you'll learn how to manage and monitor these application pipelines at scale. We’ll show how to enable GPU sharing to maximize application performance while protecting your investment in AI infrastructure, and share how to make the whole process efficient, effective, and collaborative.
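One concrete mechanism for the GPU sharing mentioned above is time-slicing in Kubernetes. The fragment below is a hedged sketch of the config format used by NVIDIA's Kubernetes device plugin, not Iguazio's specific setup: it advertises each physical GPU as four schedulable replicas, so several inference pods can share one device.

```yaml
# Sketch of an NVIDIA k8s-device-plugin time-slicing config (assumed setup):
# each physical GPU is exposed as 4 nvidia.com/gpu replicas, letting
# multiple model-serving pods share a single device.
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4
```

Time-slicing trades isolation for density; alternatives such as MIG partitioning give harder isolation at the cost of fixed slice sizes.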