The Edge Meets Massive Compute: Leveraging DOCA and Morpheus for a Complete End-to-end Detection and Mitigation Pipeline
, Big Data Analytics Manager, NVIDIA
, Senior AI Infrastructure Manager, NVIDIA
While moving compute and detection as far to the edge as possible is desirable, it often comes at the cost of compute resources. Conversely, it's expensive to constantly move data back from the edge to a central collection point. At NVIDIA, we're tackling this problem by combining the capabilities of edge-based sensors on the DPU with edge and gateway compute on the GPU. Running natively on the DPU using DOCA, telemetry agents actively collect, filter, and aggregate data that's sent to Morpheus for analysis. Morpheus enables full AI inference across both machine learning models and complex neural networks, taking advantage of massive parallelization on the GPU for maximum throughput. Using the output of that analysis, the DPU can make real-time changes to filters, heuristics, and sensing profiles, adapting instantly to changes in the network and providing a more effective defense against evolving threats. We'll discuss the engineering pipeline that makes this possible, as well as how cybersecurity software vendors and enterprises can implement something similar in their products and data centers.
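The feedback loop described above (filter at the edge, score centrally, push updated filters back to the edge) can be sketched in a few lines. This is a minimal illustration, not actual DOCA or Morpheus code: all function and type names here (`FlowRecord`, `collect_and_filter`, `score_flows`, `update_filters`) are hypothetical stand-ins for the DPU telemetry agent, the GPU inference stage, and the policy update path.

```python
# Hypothetical sketch of the DPU/GPU detect-and-adapt loop.
# None of these names are real DOCA or Morpheus APIs.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src: str
    dst: str
    bytes_sent: int

def collect_and_filter(flows, min_bytes=1024):
    """DPU side: aggregate telemetry and drop low-volume flows
    before shipping data upstream, reducing transfer cost."""
    return [f for f in flows if f.bytes_sent >= min_bytes]

def score_flows(flows):
    """GPU side: stand-in for Morpheus inference, returning an
    anomaly score in [0, 1] per source address."""
    max_bytes = max((f.bytes_sent for f in flows), default=1)
    return {f.src: f.bytes_sent / max_bytes for f in flows}

def update_filters(scores, threshold=0.9):
    """Feedback path: sources scoring above the threshold are
    pushed back to the DPU as a real-time block list."""
    return {src for src, score in scores.items() if score >= threshold}

flows = [
    FlowRecord("10.0.0.1", "10.0.0.9", 500),     # filtered out at the edge
    FlowRecord("10.0.0.2", "10.0.0.9", 20_000),  # anomalously large transfer
    FlowRecord("10.0.0.3", "10.0.0.9", 2_000),
]
telemetry = collect_and_filter(flows)
blocked = update_filters(score_flows(telemetry))
print(blocked)  # → {'10.0.0.2'}
```

In the real pipeline the scoring step runs ML models and neural networks in parallel on the GPU, and the block-list update is one of several adaptations (filters, heuristics, sensing profiles) the DPU can apply.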