NVIDIA NIM Microservices

Designed for rapid, reliable deployment of accelerated generative AI inference anywhere.

Overview

What Is NVIDIA NIM?

NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure—cloud, data center, workstation, and edge.

Generative AI Inference Powered by NVIDIA NIM: Performance and TCO

See how NIM microservices outperform popular alternatives, processing up to 3x more tokens per second on the same NVIDIA-accelerated infrastructure.

Free Development Access to NIM

Get unlimited prototyping with hosted APIs for NIM, accelerated by DGX Cloud, or download and self-host NIM microservices for research and development as part of the NVIDIA Developer Program.

Accelerate AI Deployment With NVIDIA NIM

NVIDIA NIM combines the ease of use and operational simplicity of managed APIs with the flexibility and security of self-hosting models on your preferred infrastructure. NIM microservices come with everything AI teams need—the latest AI foundation models, optimized inference engines, industry-standard APIs, and runtime dependencies—prepackaged in enterprise-grade software containers ready to deploy and scale anywhere.

Benefits

Enterprise Generative AI That Does More for Less

Easy, enterprise-grade microservices built for high-performance AI—designed to work seamlessly and scale affordably. Experience the fastest time to value for AI agents and other enterprise generative AI applications powered by the latest AI models for reasoning, simulation, speech, and more. 

Ease of Use

Accelerate innovation and time to market with prebuilt, optimized microservices for the latest AI models. With standard APIs, models can be deployed in five minutes and easily integrated into applications.

Enterprise Grade

Deploy enterprise-grade microservices that are continuously managed by NVIDIA through rigorous validation processes and dedicated feature branches—all backed by NVIDIA enterprise support, which also offers direct access to NVIDIA AI experts.

Performance and Scale

Improve TCO with low-latency, high-throughput AI inference that scales with the cloud, and achieve the best accuracy with support for fine-tuned models out of the box.

Portability

Deploy anywhere with prebuilt, cloud-native microservices ready to run on any NVIDIA-accelerated infrastructure—cloud, data center, and workstation—and scale seamlessly on Kubernetes and cloud service provider environments. 

Demo

Build AI Agents With NIM

Learn how to set up two AI agents—one for content generation and another for digital graphic design—and see how easy it is to get up and running with NIM microservices.


Technology

Building Blocks for Agentic AI

Get the Latest AI Reasoning Models

Get the latest AI models for reasoning, language, retrieval, speech, vision, and more—ready to deploy in five minutes on any NVIDIA-accelerated infrastructure.

Jump-Start Development With NVIDIA Blueprints

Build impactful agentic AI applications with comprehensive reference workflows featuring NVIDIA acceleration libraries, SDKs, and NIM microservices.

Simplify Development With NVIDIA AgentIQ Toolkit

Weave NIM microservices into agentic AI applications with the NVIDIA AgentIQ library, a developer toolkit for building AI agents and integrating them into custom workflows.

Benchmarks

Boost Throughput With NIM

NVIDIA NIM provides optimized throughput and latency out of the box to maximize token generation, support concurrent users at peak times, and improve responsiveness. NIM microservices are continuously updated with the latest optimized inference engines, boosting performance on the same infrastructure over time.

Chart: Relative token throughput on identical hardware. NIM On: 2x; NIM Off: 1x.

Configuration: Llama 3.1 8B Instruct on 1x H100 SXM with 200 concurrent requests, FP8 precision in both cases. NIM On: 1,201 tokens/s throughput, 32 ms inter-token latency (ITL). NIM Off: 613 tokens/s throughput, 37 ms ITL. That works out to roughly 1,201 / 613 ≈ 2x the throughput on the same GPU.
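If you want to sanity-check numbers like these on your own deployment, here's a minimal sketch that measures single-request throughput and inter-token latency over the OpenAI-compatible streaming API. The endpoint URL and model name are placeholders, each streamed chunk is counted as roughly one token, and published figures aggregate many concurrent requests, so treat the output as indicative only:

  import time
  import openai

  # Placeholder endpoint and credentials for a self-hosted NIM microservice.
  client = openai.OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="YOUR_LOCAL_API_KEY")

  start = time.perf_counter()
  arrivals = []  # arrival time of each streamed content chunk
  stream = client.chat.completions.create(
      model="model_name",
      messages=[{"role": "user", "content": "Once upon a time"}],
      max_tokens=256,
      stream=True,
  )
  for chunk in stream:
      if chunk.choices and chunk.choices[0].delta.content:
          arrivals.append(time.perf_counter())

  if len(arrivals) > 1:
      total = arrivals[-1] - start
      gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
      print(f"~{len(arrivals) / total:.0f} tokens/s over {total:.2f}s")
      print(f"mean inter-token latency: {1000 * sum(gaps) / len(gaps):.1f} ms")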

Models

Build With the Leading Open Models

Get optimized inference performance for the latest AI models to power multimodal agentic AI with reasoning, language, retrieval, speech, image, and more. NIM comes with accelerated inference engines from NVIDIA and the community, including NVIDIA® TensorRT™, TensorRT-LLM, and more—prebuilt and optimized for low-latency, high-throughput inference on NVIDIA-accelerated infrastructure.


Features

The Easy Button for AI Development and Deployment

Designed to run anywhere, NIM inference microservices expose industry-standard APIs for easy integration with enterprise systems and applications and scale seamlessly on Kubernetes to deliver high-throughput, low-latency inference at cloud scale.

Deploy NIM

Deploy NIM for your model with a single command. You can also easily run NIM with fine-tuned models.

Run Inference

Get NIM up and running with the optimal runtime engine based on your NVIDIA-accelerated infrastructure.

Build

Integrate self-hosted NIM endpoints with just a few lines of code.

Deploy:

  docker run nvcr.io/nim/publisher_name/model_name

Run:

  curl -X 'POST' \
    'http://0.0.0.0:8000/v1/completions' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
      "model": "model_name",
      "prompt": "Once upon a time",
      "max_tokens": 64
    }'

Build:

  import openai

  client = openai.OpenAI(
      base_url="YOUR_LOCAL_ENDPOINT_URL",
      api_key="YOUR_LOCAL_API_KEY"
  )

  chat_completion = client.chat.completions.create(
      model="model_name",
      messages=[{"role": "user", "content": "Write me a love song"}],
      temperature=0.7
  )
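Because NIM endpoints expose the OpenAI-compatible API shown above, streaming responses work through the same client. Here's a minimal sketch, assuming a self-hosted microservice on localhost port 8000; the endpoint URL, API key, and model name are placeholders:

  import openai

  # Placeholder endpoint and credentials for a self-hosted NIM microservice.
  client = openai.OpenAI(
      base_url="http://0.0.0.0:8000/v1",
      api_key="YOUR_LOCAL_API_KEY",
  )

  # stream=True yields tokens as they are generated instead of one final response.
  stream = client.chat.completions.create(
      model="model_name",
      messages=[{"role": "user", "content": "Write me a love song"}],
      temperature=0.7,
      stream=True,
  )

  for chunk in stream:
      if chunk.choices and chunk.choices[0].delta.content:
          print(chunk.choices[0].delta.content, end="", flush=True)

Streaming keeps perceived latency low for interactive applications, since the first tokens arrive as soon as generation starts.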

Use Cases

How NIM Is Being Used

See how NVIDIA NIM supports industry use cases, and jump-start your AI development with curated examples.

AI Virtual Assistants

Enhance customer experiences and improve business processes with generative AI.

Starting Options

Ways to Get Started With NVIDIA NIM

Start Prototyping for Free

Get started with easy-to-use API endpoints for NIM, powered by DGX Cloud.

  • Access fully accelerated AI infrastructure.
  • Ensure your data isn't used for model training.
  • Get access for development and testing as part of the NVIDIA Developer Program.

Download and Deploy

Run NVIDIA NIM to scale optimized AI models in the cloud or data center of your choice.

  • Ensure data never leaves your secure enclave.
  • Seamlessly transition from cloud endpoints to self-hosted APIs without code changes (see the sketch after this list).
  • Start with free access for development and testing, and move to an NVIDIA AI Enterprise license for production.
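As an illustration of that transition, the sketch below points the same client code at either the hosted NVIDIA API catalog endpoint (integrate.api.nvidia.com) or a self-hosted NIM container by changing only the base URL. The NIM_BASE_URL and NIM_API_KEY environment variable names and the model name are examples, not fixed conventions:

  import os
  import openai

  # Hosted NVIDIA API catalog endpoint and a typical self-hosted NIM endpoint.
  HOSTED = "https://integrate.api.nvidia.com/v1"
  SELF_HOSTED = "http://0.0.0.0:8000/v1"

  # NIM_BASE_URL and NIM_API_KEY are example variable names: flip the deployment
  # target through the environment, with no changes to the request code below.
  client = openai.OpenAI(
      base_url=os.environ.get("NIM_BASE_URL", HOSTED),
      api_key=os.environ.get("NIM_API_KEY", "YOUR_API_KEY"),
  )

  completion = client.chat.completions.create(
      model="meta/llama-3.1-8b-instruct",  # example model name
      messages=[{"role": "user", "content": "Summarize NVIDIA NIM in one sentence."}],
  )
  print(completion.choices[0].message.content)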

Get in Touch

Talk to an NVIDIA AI specialist about moving generative AI pilots to production with the security, API stability, and support that comes with NVIDIA AI Enterprise.

  • Explore your generative AI use cases.
  • Discuss your technical requirements.
  • Align NVIDIA AI solutions to your goals and requirements.

Resources

The Latest NVIDIA NIM Resources

NVIDIA NIM in the News

Next Steps

Ready to Get Started?

Get unlimited access to NIM API endpoints for prototyping, accelerated by DGX Cloud. When ready for production, download and self-host NIM on your preferred infrastructure (workstation, data center, edge, or cloud), or access NIM endpoints hosted by NVIDIA partners.

Get in Touch

Talk to an NVIDIA product specialist about moving from pilot to production with the security, API stability, and support that comes with NVIDIA AI Enterprise.

Stay Up to Date on NVIDIA NIM News

Get the latest news, technologies, breakthroughs, and more sent straight to your inbox.
