Track objects and the customer journey across multiple cameras throughout the store.
Retail spaces are gaining valuable insight into the movement of objects and customers by applying computer vision AI to many cameras covering multiple physical areas. NVIDIA's customizable multi-target, multi-camera (MTMC) tracking workflow gives you a starting point so you don't have to build from scratch, eliminating months of development time. The workflow also provides a validated path to production for tracking objects across cameras in stores, warehouses, and distribution centers.
This AI workflow uses the NVIDIA DeepStream SDK, pretrained models, and new state-of-the-art microservices to deliver advanced MTMC capabilities, so developers can more easily build systems that track objects across multiple cameras throughout a retail store or warehouse.
This MTMC workflow tracks and associates objects across cameras and maintains a unique ID for each object. The ID is derived from visual appearance embeddings rather than any personal biometric information, so privacy is fully maintained.
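At its core, this amounts to comparing appearance embeddings across cameras. The minimal sketch below is illustrative only, not the workflow's actual implementation: the class name, the similarity threshold, and the moving-average update are assumptions. It shows how a global ID can be assigned by matching each new embedding against previously seen objects with cosine similarity.

```python
import numpy as np

# Illustrative sketch only: the MTMC workflow's real re-identification logic
# lives in its microservices. This shows the core idea of maintaining a global
# ID per object from appearance embeddings, not biometric data.
# The similarity threshold and update rule are assumed, tunable values.

SIMILARITY_THRESHOLD = 0.7


class GlobalIdRegistry:
    """Assigns a global ID to each detection by matching its appearance embedding."""

    def __init__(self):
        self.gallery = {}   # global_id -> running mean embedding (unit norm)
        self.next_id = 0

    def assign(self, embedding: np.ndarray) -> int:
        embedding = embedding / np.linalg.norm(embedding)
        best_id, best_sim = None, -1.0
        for gid, ref in self.gallery.items():
            sim = float(np.dot(embedding, ref))   # cosine similarity (unit-norm vectors)
            if sim > best_sim:
                best_id, best_sim = gid, sim
        if best_id is not None and best_sim >= SIMILARITY_THRESHOLD:
            # Refresh the stored embedding with an exponential moving average.
            updated = 0.9 * self.gallery[best_id] + 0.1 * embedding
            self.gallery[best_id] = updated / np.linalg.norm(updated)
            return best_id
        # No sufficiently similar object seen before: create a new global ID.
        gid = self.next_id
        self.next_id += 1
        self.gallery[gid] = embedding
        return gid


if __name__ == "__main__":
    registry = GlobalIdRegistry()
    rng = np.random.default_rng(0)
    person_a = rng.normal(size=256)
    # The same object seen from two cameras should map to one global ID.
    print(registry.assign(person_a))                                      # 0
    print(registry.assign(person_a + rng.normal(scale=0.05, size=256)))   # 0 (matched)
    print(registry.assign(rng.normal(size=256)))                          # 1 (new object)
```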
MTMC capabilities help bolster the security of self-checkout and are foundational for fully autonomous stores. The workflow can also be trained to detect anomalous behavior, and it can be deployed and scaled with Kubernetes and managed with Helm.
The Multi-Camera Tracking reference application follows an end-to-end pipeline: it uses live camera feeds as input; performs object detection, object tracking, streaming analytics, and multi-target multi-camera tracking; exposes various aggregated analytics functions as API endpoints; and visualizes the results in a browser-based user interface. Live camera feeds are simulated by streaming video files over RTSP. The analytics microservices are connected via a Kafka message broker, and processed results are saved in a database for long-term storage.
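A lightweight analytics client can subscribe to the same broker and aggregate tracking results on its own. The sketch below is an assumption-laden example, not part of the workflow: it presumes a Kafka broker at localhost:9092, a hypothetical topic name "mtmc-tracks", and a hypothetical JSON message schema with camera_id and global_object_id fields. The actual topics and schemas are defined by the workflow's microservices.

```python
import json
from collections import defaultdict

from kafka import KafkaConsumer  # pip install kafka-python

# Sketch of a small analytics consumer. Broker address, topic name, and the
# message schema below are assumptions; adjust them to match your deployment.

consumer = KafkaConsumer(
    "mtmc-tracks",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

# Count distinct global object IDs observed per camera.
unique_objects_per_camera = defaultdict(set)

for message in consumer:
    event = message.value
    camera_id = event["camera_id"]
    global_id = event["global_object_id"]
    unique_objects_per_camera[camera_id].add(global_id)
    print(
        f"camera {camera_id}: "
        f"{len(unique_objects_per_camera[camera_id])} unique objects so far"
    )
```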
Please complete this short application for early access to the multi-camera tracking AI workflow.
Please note that you must be a registered NVIDIA Developer to join the program. Log in using your organizational email address. We cannot accept applications from accounts using Gmail, Yahoo, QQ, or other personal email addresses.
Highly accurate pretrained models are provided to identify objects and create a unique global ID based on appearance embeddings rather than any personal biometric information.
A state-of-the-art microservice that uses objects' feature embeddings, along with spatio-temporal information, to uniquely identify and associate objects across cameras (a simplified association sketch appears below).
Delivered as cloud-native microservices, this AI workflow jump-starts development and is easy to customize, so you can rapidly create solutions that require cross-camera object tracking.
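As a rough illustration of the cross-camera association idea referenced above (not the microservice's actual algorithm), the sketch below combines an appearance distance with simple temporal and spatial gates and solves the assignment with the Hungarian method. The gating thresholds, the track layout, and the shared ground-plane coordinate assumption are all made up for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # pip install scipy

# Illustrative sketch only, not the microservice's actual algorithm.
# Assumptions: each track carries a unit-norm appearance "embedding", a
# "last_seen" timestamp in seconds, and a "position" (np.ndarray, meters) in a
# shared ground-plane coordinate frame. Thresholds are made-up, tunable values.

MAX_TIME_GAP_S = 30.0    # temporal gate
MAX_DISTANCE_M = 15.0    # spatial gate
LARGE_COST = 1e6         # effectively forbids gated-out pairs


def associate(tracks_a, tracks_b):
    """Return (index_in_a, index_in_b) pairs matching tracks across two cameras."""
    cost = np.full((len(tracks_a), len(tracks_b)), LARGE_COST)
    for i, ta in enumerate(tracks_a):
        for j, tb in enumerate(tracks_b):
            time_gap = abs(ta["last_seen"] - tb["last_seen"])
            distance = float(np.linalg.norm(ta["position"] - tb["position"]))
            if time_gap > MAX_TIME_GAP_S or distance > MAX_DISTANCE_M:
                continue  # spatio-temporal gate: pair stays at LARGE_COST
            # Appearance cost: cosine distance between unit-norm embeddings.
            cost[i, j] = 1.0 - float(np.dot(ta["embedding"], tb["embedding"]))
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < LARGE_COST]
```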
AI workflows accelerate the path to AI outcomes. The multi-camera tracking AI workflow gives developers a reference for quickly getting started on a flexible, scalable MTMC solution.