Connect with the Experts: Deploying AI Models to the Workstation
Developer Technology Engineer, NVIDIA
Principal Engineer, NVIDIA
Developer Technology Engineer, NVIDIA
Senior Developer Technology Engineer, NVIDIA
Developer Technology Engineer, NVIDIA
Software Engineer, NVIDIA
QA Engineer, NVIDIA
Our experts have extensive experience moving AI inference models from research to production environments and are happy to share their experience, tools, and techniques with you, covering topics such as:
- Moving from research to production
- Minimizing device memory usage
- Performance optimization
- Integration with existing code bases
Technologies include:
- TensorRT
- ONNX and DirectML
- CUDA and cuDNN
- Triton Inference Server
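As a taste of the kind of deployment question these sessions cover, below is a minimal sketch of running an ONNX model on a Windows workstation GPU via ONNX Runtime's DirectML execution provider, with a CPU fallback. The model path `model.onnx` and the zero-filled dummy input are placeholders for illustration, not part of the session materials.

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; prefer the DirectML execution provider on
# Windows, falling back to CPU if it is unavailable.
# "model.onnx" is a placeholder path.
session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Inspect the model's first input and build a matching dummy tensor,
# substituting 1 for any dynamic (non-integer) dimensions.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

# Run inference; passing None requests all model outputs.
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```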
Join us to learn more about the constraints involved in deploying AI inference models on Windows workstations using a local GPU.
*IMPORTANT: Connect with the Experts sessions are interactive sessions that give you a unique opportunity to meet, in either a group or 1:1 setting, with the brilliant minds behind NVIDIA’s products and research to get your questions answered. Space is limited - first come, first served. We request that you limit your 1:1 discussion with our Experts to 5 minutes. You will have the option to ask questions in a group setting as well. We also recommend that you use a headset microphone to ensure our Experts can hear you clearly. To test your webcam (optional) and microphone settings, please visit this link.