Optimizing Inference Performance and Incorporating New LLM Features in Desktops and Workstations
, Deep Learning Solution Architect, NVIDIA
, Product Manager, NVIDIA
TensorRT has become the preferred choice for independent software vendor applications used in desktop and workstation environments, including those developed by Topaz, Blackmagic, and others. As these applications evolve to embrace the emerging generative AI trend, they're incorporating more features driven by large language models (LLMs) and Stable Diffusion techniques. We'll describe the developer journey of applying TensorRT optimizations to achieve speed-of-light inference performance, and share best practices. We'll also tell stories of how NVIDIA and its partners worked together on the new features and improvements that support the TensorRT release.