NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure—cloud, data center, workstation, and edge.
Generative AI Inference Powered by NVIDIA NIM: Performance and TCO
See how NIM microservices outperform popular alternatives, processing up to 3x more tokens per second on the same NVIDIA-accelerated infrastructure.
Get unlimited prototyping with hosted NIM APIs accelerated by DGX Cloud, or download and self-host NIM microservices for research and development as part of the NVIDIA Developer Program.
NVIDIA NIM combines the ease of use and operational simplicity of managed APIs with the flexibility and security of self-hosting models on your preferred infrastructure. NIM microservices come with everything AI teams need—the latest AI foundation models, optimized inference engines, industry-standard APIs, and runtime dependencies—prepackaged in enterprise-grade software containers ready to deploy and scale anywhere.
Benefits
Enterprise Generative AI That Does More for Less
Easy, enterprise-grade microservices built for high-performance AI—designed to work seamlessly and scale affordably. Experience the fastest time to value for AI agents and other enterprise generative AI applications powered by the latest AI models for reasoning, simulation, speech, and more.
Ease of Use
Accelerate innovation and time to market with prebuilt, optimized microservices for the latest AI models. With standard APIs, models can be deployed in five minutes and easily integrated into applications.
Deploy enterprise-grade microservices that are continuously managed by NVIDIA through rigorous validation processes and dedicated feature branches—all backed by NVIDIA enterprise support, which also offers direct access to NVIDIA AI experts.
Performance and Scale
Improve TCO with low-latency, high-throughput AI inference that scales with the cloud, and achieve the best accuracy with support for fine-tuned models out of the box.
Deploy anywhere with prebuilt, cloud-native microservices ready to run on any NVIDIA-accelerated infrastructure—cloud, data center, and workstation—and scale seamlessly on Kubernetes and cloud service provider environments.
Demo
Build AI Agents With NIM
Learn how to set up two AI agents—one for content generation and another for digital graphic design—and see how easy it is to get up and running with NIM microservices.
Get the latest AI models for reasoning, language, retrieval, speech, vision, and more—ready to deploy in five minutes on any NVIDIA-accelerated infrastructure.
Weave NIM microservices into agentic AI applications with the NVIDIA AgentIQ library, a developer toolkit for building AI agents and integrating them into custom workflows.
NVIDIA NIM provides optimized throughput and latency out of the box to maximize token generation, support concurrent users at peak times, and improve responsiveness. NIM microservices are continuously updated with the latest optimized inference engines, boosting performance on the same infrastructure over time.
Configuration: Llama 3.1 8B Instruct, 1x NVIDIA H100 SXM, FP8 precision, 200 concurrent requests. NIM ON: 1,201 tokens/s throughput, 32 ms inter-token latency. NIM OFF: 613 tokens/s throughput, 37 ms inter-token latency.
Models
Build With the Leading Open Models
Get optimized inference performance for the latest AI models to power multimodal agentic AI with reasoning, language, retrieval, speech, image, and more. NIM comes with accelerated inference engines from NVIDIA and the community, including NVIDIA® TensorRT™, TensorRT-LLM, and more—prebuilt and optimized for low-latency, high-throughput inferencing on NVIDIA-accelerated infrastructure.
Designed to run anywhere, NIM inference microservices expose industry-standard APIs for easy integration with enterprise systems and applications, and scale seamlessly on Kubernetes to deliver high-throughput, low-latency inference at cloud scale.
Deploy NIM
Deploy NIM for your model with a single command. You can also easily run NIM with fine-tuned models.
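As a minimal sketch of that single-command deployment, the snippet below launches a NIM container from Python via subprocess. The image name (a hypothetical Llama 3.1 8B Instruct microservice), the port mapping, and the NGC_API_KEY environment variable are assumptions—substitute the values for your model and environment.

```python
import os
import subprocess

# Hypothetical NIM image; replace with the image for your model.
IMAGE = "nvcr.io/nim/meta/llama-3.1-8b-instruct:latest"

# Run the NIM container on all available GPUs and expose its HTTP API
# on host port 8000 (assumed default).
subprocess.run(
    [
        "docker", "run", "--rm", "--gpus", "all",
        "-e", f"NGC_API_KEY={os.environ['NGC_API_KEY']}",  # NGC credentials (assumed variable name)
        "-p", "8000:8000",
        IMAGE,
    ],
    check=True,
)
```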
Run Inference
Get NIM up and running with the optimal runtime engine based on your NVIDIA-accelerated infrastructure.
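As an example of what inference against a local NIM microservice can look like, here is a hedged sketch that posts a chat request to the OpenAI-compatible endpoint assumed to be listening on localhost:8000 (the port mapped in the deployment sketch above); the model identifier is also an assumption.

```python
import requests

# Assumes a NIM microservice is serving an OpenAI-compatible API on
# localhost:8000; adjust the host, port, and model name as needed.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama-3.1-8b-instruct",  # assumed model identifier
        "messages": [{"role": "user", "content": "Write a haiku about GPUs."}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```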
Build
Integrate self-hosted NIM endpoints with just a few lines of code.
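One way those few lines of code can look, assuming a self-hosted NIM serving an OpenAI-compatible API on localhost:8000 (the base URL and model name below are assumptions):

```python
from openai import OpenAI

# Point the standard OpenAI client at a self-hosted NIM endpoint.
# A locally hosted NIM typically does not validate the API key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what NVIDIA NIM provides."}],
)
print(completion.choices[0].message.content)
```

Because the exposed API follows the industry-standard OpenAI format, any client that already speaks that API can be pointed at a NIM endpoint this way.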
Talk to an NVIDIA AI specialist about moving generative AI pilots to production with the security, API stability, and support that comes with NVIDIA AI Enterprise.
Explore your generative AI use cases.
Discuss your technical requirements.
Align NVIDIA AI solutions to your goals and requirements.
Review the process of creating an AI-enabled NVIDIA Omniverse™ Kit-based application. You’ll learn how to use Omniverse extensions, NIM microservices, and Python code to add an extension capable of generating backgrounds from text input.
Get unlimited access to NIM API endpoints for prototyping, accelerated by DGX Cloud. When ready for production, download and self-host NIM on your preferred infrastructure—workstation, data center, edge, or cloud—or access NIM endpoints hosted by NVIDIA partners.
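As a sketch of that prototyping-to-production path, the same client code can target NVIDIA-hosted NIM API endpoints during prototyping and a self-hosted deployment later by changing only the base URL and credentials; the hosted base URL, model name, and NVIDIA_API_KEY variable shown here are assumptions.

```python
import os
from openai import OpenAI

# Prototype against NVIDIA-hosted NIM API endpoints (base URL assumed),
# then switch base_url to a self-hosted NIM when moving to production.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed credential variable
)

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Draft a short product description for a GPU."}],
)
print(completion.choices[0].message.content)
```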
Talk to an NVIDIA product specialist about moving from pilot to production with the security, API stability, and support that comes with NVIDIA AI Enterprise.