Solutions: AI Workflows
Track objects and the customer journey across multiple cameras throughout the store.
Retail spaces are gaining valuable insight into the movement of objects and customers by applying computer vision AI to networks of cameras covering multiple physical areas. The multi-camera tracking AI workflow uses the NVIDIA DeepStream SDK, pretrained models, and state-of-the-art microservices to deliver multi-target, multi-camera (MTMC) capabilities, making it easier for developers to create systems that track objects and store associates across cameras throughout a retail store or warehouse.
This MTMC workflow tracks and associates objects across cameras while maintaining a unique ID for each object. The ID is derived from visual appearance embeddings rather than any personal biometric information, so privacy is fully maintained.
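To illustrate how identity can be maintained from appearance alone, the sketch below compares two appearance embedding vectors with cosine similarity. This is a minimal, generic example, not the workflow's actual implementation; the function names and the 0.7 threshold are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two appearance embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Two detections of the same object tend to produce similar embeddings.
# The threshold value here is illustrative, not from the workflow.
MATCH_THRESHOLD = 0.7

def is_same_object(emb_a, emb_b):
    """Decide whether two detections likely show the same object."""
    return cosine_similarity(emb_a, emb_b) >= MATCH_THRESHOLD
```

Because only abstract feature vectors are compared, no biometric identifiers need to be stored or exchanged between services.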
MTMC capabilities help bolster the security of self-checkout and are foundational for fully autonomous stores. The workflow can also be trained to detect anomalous behavior, and it can be deployed and scaled with Kubernetes and managed with Helm.
The multi-camera tracking application can be broken down into three key components: detection and single-camera tracking, multi-camera tracking, and storage and output. Live RTSP streams from cameras pass through the detection and tracking microservices, which generate feature embeddings representing each object's appearance. This metadata is sent downstream over a message broker to other microservices. The multi-camera tracking service picks up the metadata, fuses and synchronizes it, and correlates it across cameras. It identifies objects by combining feature embeddings with spatio-temporal information, comparing when and where objects appear on different cameras to determine whether they are the same object. Finally, it generates a global ID for each uniquely identified object that remains consistent across all cameras.
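The association step described above can be sketched in a few lines: match each incoming detection against known global tracks using an appearance gate (embedding similarity) combined with a spatio-temporal gate (could the object plausibly have traveled that far in the elapsed time?), and mint a new global ID only when no track passes both gates. This is a simplified toy model of the idea, not the workflow's microservice; all class names, thresholds, and the shared floor-coordinate assumption are illustrative.

```python
import itertools
import math
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    timestamp: float   # seconds
    position: tuple    # (x, y) in an assumed shared floor-coordinate frame
    embedding: list    # appearance feature vector

@dataclass
class GlobalTrack:
    global_id: int
    last: Detection    # most recent detection fused into this track

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class MultiCameraTracker:
    """Toy MTMC association fusing appearance and spatio-temporal cues."""

    def __init__(self, sim_thresh=0.7, max_speed=3.0):
        self.sim_thresh = sim_thresh  # appearance gate (illustrative value)
        self.max_speed = max_speed    # m/s, spatio-temporal gate (illustrative)
        self.tracks = []
        self._ids = itertools.count(1)

    def _plausible(self, det, track):
        """Spatio-temporal gate: reachable within elapsed time at max_speed."""
        dt = abs(det.timestamp - track.last.timestamp)
        dist = math.hypot(det.position[0] - track.last.position[0],
                          det.position[1] - track.last.position[1])
        return dist <= self.max_speed * max(dt, 1e-6)

    def assign(self, det):
        """Return the global ID for a detection, creating one if needed."""
        best, best_sim = None, self.sim_thresh
        for track in self.tracks:
            sim = _cosine(det.embedding, track.last.embedding)
            if sim >= best_sim and self._plausible(det, track):
                best, best_sim = track, sim
        if best is None:
            best = GlobalTrack(next(self._ids), det)
            self.tracks.append(best)
        else:
            best.last = det
        return best.global_id
```

In this sketch, a detection on a second camera with a similar embedding and a plausible travel distance keeps the same global ID, while a matching embedding seen implausibly far away in too little time starts a new track.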
Please complete this short application for early access to the multi-camera tracking AI workflow.
Please note that you must be a registered NVIDIA Developer to join the program. Log in using your organizational email address. We cannot accept applications from accounts using Gmail, Yahoo, QQ, or other personal email addresses.
Highly accurate pretrained models are provided to identify objects and create a unique global ID based on appearance embeddings rather than any personal biometric information.
A state-of-the-art microservice that uses objects’ feature embeddings, along with spatio-temporal information, to uniquely identify and associate objects across cameras.
Delivered through cloud-native microservices, this AI workflow allows for jump-starting development and ease of customization to rapidly create solutions requiring cross-camera object tracking.
AI workflows accelerate the path to AI outcomes. The multi-camera tracking AI workflow provides a reference for developers to rapidly get started in creating a flexible and scalable MTMC AI solution.
Best-in-class AI software streamlines development and deployment of AI solutions.
Frameworks and containers are performance-tuned and tested for NVIDIA GPUs.
Access prepackaged, customizable reference applications deployable in the cloud.
Cloud-native NVIDIA Metropolis microservices are designed to deploy at scale with Kubernetes and be managed with Helm.
Read more about how NVIDIA is helping the retail industry tackle its $100 billion annual shrinkage problem.
Learn how the retail AI workflows address highly complex application development challenges and provide the initial ‘building blocks’ necessary to build an effective solution.
Want to get started? Apply for early access and get a jump start on creating a flexible and scalable loss prevention AI solution.