Accelerating AI Workflows on AI Data Center Infrastructure (Presented by Run:ai)
Director of Solutions Engineering, Run:ai
Chief Technology Officer, Run:ai
Learn about the essential capabilities required to maximize throughput in advanced AI data centers. We'll consider the vital role of Kubernetes in orchestrating and managing complex AI workloads and emphasize the importance of dynamic scheduling for efficient resource allocation and system optimization. We'll also cover support for popular model-development frameworks and distributed training techniques. Furthermore, we'll delve into the necessity of GPU fractioning for efficiently handling interactive and inference workloads, ensuring the most effective use of data center resources for AI projects. You'll leave with a comprehensive understanding of how to optimize AI workflows on high-performance data center infrastructure.
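To make the idea of Kubernetes-orchestrated, fractionally scheduled workloads concrete, here is a minimal sketch using the official Kubernetes Python client to submit an inference pod to a custom scheduler with a fractional-GPU annotation. The scheduler name ("runai-scheduler"), the "gpu-fraction" annotation key, the pod name, and the container image are illustrative assumptions, not the product's documented API; consult the platform documentation for the exact keys.

```python
# Illustrative sketch: submitting an inference pod that asks a dynamic
# scheduler for a fraction of a GPU. The scheduler name and the
# "gpu-fraction" annotation are assumptions for illustration only.
from kubernetes import client, config


def submit_fractional_gpu_pod(namespace: str = "default") -> None:
    config.load_kube_config()  # use the local kubeconfig

    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(
            name="inference-demo",
            # Hypothetical annotation: request half of a single GPU.
            annotations={"gpu-fraction": "0.5"},
        ),
        spec=client.V1PodSpec(
            # Hand the pod to a dynamic scheduler instead of the default one.
            scheduler_name="runai-scheduler",
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="inference",
                    image="my-registry/inference-server:latest",  # placeholder image
                    command=["python", "-m", "serve"],  # placeholder entrypoint
                    # No whole-GPU limit here; the fractional request is
                    # expressed through the annotation above.
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "2", "memory": "4Gi"},
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_fractional_gpu_pod()
```

Under this setup, a dynamic scheduler can pack several such fractional pods onto one physical GPU, which is what makes interactive and inference workloads efficient to keep resident in the data center.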