Triton Inference Server supports all major frameworks, including TensorFlow, NVIDIA® TensorRT™, PyTorch, and ONNX Runtime, as well as custom backends. It gives AI researchers and data scientists the freedom to choose the right framework for their projects.
Simplify Model Deployment
Leverage NVIDIA Triton Inference Server to easily deploy multi-framework AI models at scale.
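As a rough illustration of how multi-framework deployment works in practice, Triton serves models from a model repository, where each model directory contains versioned weights and a `config.pbtxt` describing the model. The layout and names below (the repository path, the model name `my_model`, and its tensor names and shapes) are hypothetical placeholders, not part of this document; the `platform` value selects the framework backend (here ONNX Runtime).

```
# Hypothetical model repository layout (paths and names are examples only)
model_repository/
└── my_model/
    ├── config.pbtxt
    └── 1/
        └── model.onnx

# Example config.pbtxt for an ONNX model served by the onnxruntime backend
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input_0"          # tensor name as defined in the ONNX graph
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]    # per-sample shape; batch dim is implied
  }
]
output [
  {
    name: "output_0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Deploying a TensorFlow, PyTorch, or TensorRT model follows the same pattern with a different `platform` (or `backend`) setting, which is what lets one Triton instance serve models from several frameworks side by side.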