Discover a collection of reference workflows that use vision language models to deliver rich, interactive visual perception capabilities for a wide range of industries.
Workloads: Computer Vision / Video Analytics
Industries: Manufacturing; Smart Cities/Spaces; Retail / Consumer Packaged Goods; Media and Entertainment; Healthcare and Life Sciences
Business Goals: Return on Investment; Innovation
Products: NVIDIA Metropolis; NVIDIA AI Enterprise
Traditional video analytics applications and their development workflows are typically built on fixed-function, limited models designed to detect and identify only a select set of predefined objects. With generative AI and foundation models, you can now build applications with fewer models that offer broad perception and rich contextual understanding. This newer generation of vision language models (VLMs) is giving rise to smart, powerful video analytics AI agents.
A video analytics AI agent combines vision and language modalities to understand natural language prompts and perform visual question-answering, such as answering a broad range of natural language questions against a recorded or live video stream. This deeper understanding of video content enables more accurate and meaningful interpretations, improving both the functionality of video analytics applications and the analysis of real-world scenarios. These agents promise to unlock entirely new insights and possibilities for automation.
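As a concrete illustration of visual question-answering, the sketch below packages a natural language question and one video frame into an OpenAI-style chat payload with an inline image. The endpoint shape, model name, and field layout here are illustrative assumptions, not a documented NIM interface:

```python
# Sketch of a visual question-answering request to a VLM service.
# Assumes an OpenAI-compatible chat endpoint; the model name below
# is a placeholder, not a real model identifier.
import base64
import json

def build_vqa_request(question: str, image_bytes: bytes,
                      model: str = "example/vlm-model") -> dict:
    """Package a natural-language question plus one video frame
    into an OpenAI-style chat payload with an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        "max_tokens": 256,
    }

# e.g. asking a safety question about a single sampled frame:
payload = build_vqa_request("Is anyone not wearing a hard hat?", b"\xff\xd8fake-jpeg")
print(json.dumps(payload)[:80])
```

The same payload could be sent once per sampled frame of a live stream, or against key frames of an archived clip.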
Highly perceptive, accurate, and interactive video analytics AI agents will be deployed throughout factories, warehouses, retail stores, airports, traffic intersections, and more. This will have a tremendous impact on operations teams looking to make better decisions using richer insights generated from natural interactions. Managers and operations teams will communicate with these agents in natural language, all powered by generative AI and VLMs with NVIDIA NIM™ microservices at their core.
Explore the technical implementation.
NVIDIA NIM is a set of inference microservices that includes industry-standard APIs, domain-specific code, optimized inference engines, and an enterprise runtime. It delivers multiple VLMs for building video analytics AI agents that can process live or archived images and videos to extract actionable insights using natural language. We’ve created a reference workflow for a video analytics AI agent that you can try out to accelerate your development process.
The NVIDIA AI Blueprint for video search and summarization (VSS) makes it easy to get started building and customizing video analytics AI agents—all powered by generative AI, vision language models (VLMs), large language models (LLMs), and NVIDIA NIM. The video analytics AI agents are given tasks through natural language and can process vast amounts of video data to provide critical insights that help a range of industries optimize processes, improve safety, and cut costs.
The AI agents built from the blueprint can analyze, interpret, and process video data at scale, producing video summaries up to 200X faster than manual review. The blueprint fast-tracks AI agent development by bringing together various generative AI models and services, offering flexibility through a wide range of NVIDIA and third-party VLMs and LLMs, as well as optimized deployment options from edge to cloud.
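The summarization flow described above can be sketched as a simple pipeline: split a long video into time windows, caption each window with a VLM, then condense the captions with an LLM. The helper names `caption_chunk` and `summarize` below are hypothetical stand-ins for real model calls, not blueprint APIs:

```python
# Hedged sketch of a video-summarization pipeline: chunk the video,
# caption each chunk with a VLM, condense the captions with an LLM.
# The model-call helpers are placeholders, not real services.

def make_chunks(duration_s: float, chunk_s: float = 60.0):
    """Split [0, duration_s) into (start, end) windows of chunk_s seconds."""
    chunks, start = [], 0.0
    while start < duration_s:
        chunks.append((start, min(start + chunk_s, duration_s)))
        start += chunk_s
    return chunks

def caption_chunk(start: float, end: float) -> str:
    # Placeholder for a VLM call over the frames in [start, end).
    return f"[{start:.0f}s-{end:.0f}s] caption"

def summarize(captions):
    # Placeholder for an LLM call that condenses per-chunk captions.
    return f"Summary of {len(captions)} chunks."

# A 2.5-minute clip splits into three windows of at most 60 seconds:
chunks = make_chunks(150.0)
print(summarize([caption_chunk(s, e) for s, e in chunks]))
```

Splitting into fixed windows is what lets the per-chunk captioning run in parallel, which is where the speedup over sequential manual review comes from.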
Apply for early access.
You can build video analytics AI agents powered by the NVIDIA Jetson™ edge AI platform using Jetson Platform Services, a new feature of NVIDIA JetPack™. The generative AI application runs entirely on an NVIDIA Jetson Orin™ device and can detect events, generate alerts, and support interactive Q&A sessions.
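The event-detection-and-alert pattern can be sketched as follows: ask the VLM a yes/no question once per sampled frame and raise one alert per event rather than one per frame. The rising-edge logic below is an illustrative assumption, not the Jetson Platform Services implementation:

```python
# Minimal sketch of alert generation from per-frame VLM answers.
# In a real deployment each answer would come from an on-device VLM
# asked e.g. "Is there a fire?"; here the answers are given directly.

def alerts_from_answers(answers):
    """Collapse per-frame yes/no VLM answers into rising-edge alerts,
    returning the frame indices where an event starts."""
    alerts, active = [], False
    for i, ans in enumerate(answers):
        positive = ans.strip().lower().startswith("yes")
        if positive and not active:
            alerts.append(i)  # event started at frame i
        active = positive
    return alerts

print(alerts_from_answers(["no", "yes", "yes", "no", "yes"]))  # -> [1, 4]
```

Debouncing on the rising edge keeps a persistent event from flooding operators with duplicate alerts.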
Explore frequently asked questions.
What is NVIDIA NIM?
NIM is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across the cloud, data center, and workstations. It supports a wide range of AI models, including open-source community models and NVIDIA AI Foundation models, to ensure seamless, scalable AI inferencing, on-premises or in the cloud, using industry-standard APIs. All NIM microservices and associated preview APIs can be found at build.nvidia.com.
How do I get started?
To get started with NIM microservices and NVIDIA AI Blueprints, visit build.nvidia.com to create an account and start exploring the available NIM microservices. You can check out the available VLM NIM microservices there as well.
Try the NVIDIA AI Blueprint for video search and summarization for free.
Is NIM free to use?
All users can get started for free with the preview APIs on build.nvidia.com. Each new account receives up to 5,000 credits to try out the APIs. To continue development after the credits run out, you can deploy the downloadable NIM microservices locally on your hardware or on a cloud instance. Developers can also access NIM through the NVIDIA Developer Program. See details in this FAQ.
Do downloadable NIM microservices require a license?
Downloadable NIM microservices require an NVIDIA AI Enterprise license. To learn more and try them for free, visit this page.
Where can I ask questions?
The NIM developer forum is the best place to ask questions and engage with our developer community. You can access the forums here.
Explore the reference workflow, powered by multiple vision language models, to easily build your video analytics AI agent.