AI models are ubiquitous in modern autonomy stacks, enabling tasks such as perception and prediction. However, providing safety assurances for such models remains a major challenge, due in part to their data-driven design and dynamic behavior. We'll present recent results from NVIDIA Research on building trust in AI models for autonomous vehicle systems, organized around four main directions: (1) techniques to robustly train machine learning models, along with safety key performance indicators for measuring the safety of AI models at scale; (2) run-time monitoring tools that detect and identify possible anomalies in AI components and trigger early warnings; (3) approaches to designing safety filters, which bound the behavior of AI components at run-time to enforce safety by construction; and (4) data-driven traffic models for closed-loop simulation and safety assessment of autonomy stacks. We'll discuss why such a multipronged approach is necessary to achieve the level of trust required for safety-critical vehicle autonomy.
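To make the safety-filter idea (direction 3) concrete, here is a minimal Python sketch of one common pattern, not NVIDIA's specific method: a rule-based filter that passes through a learned planner's acceleration command unless a simple time-to-collision check fails, in which case it overrides the command with conservative braking. All names and numbers (safety_filter, ttc_threshold_s, max_brake, the point-mass kinematics) are illustrative assumptions, not details from the talk.

```python
def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Time until the ego vehicle closes the gap to the lead vehicle (inf if the gap is opening)."""
    closing_speed = ego_speed_mps - lead_speed_mps
    return gap_m / closing_speed if closing_speed > 1e-3 else float("inf")


def safety_filter(proposed_accel: float,
                  gap_m: float,
                  ego_speed_mps: float,
                  lead_speed_mps: float,
                  ttc_threshold_s: float = 3.0,
                  max_brake: float = -6.0) -> float:
    """Pass the learned planner's command through unchanged when the situation looks safe;
    otherwise override it with a conservative braking action."""
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    if ttc < ttc_threshold_s:
        # Unsafe: clamp the command so the vehicle brakes at least as hard as max_brake.
        return min(proposed_accel, max_brake)
    return proposed_accel


# Example: the planner requests mild acceleration while rapidly approaching a slower lead vehicle.
cmd = safety_filter(proposed_accel=0.5, gap_m=20.0, ego_speed_mps=20.0, lead_speed_mps=10.0)
print(cmd)  # -6.0: the filter overrides the learned command with hard braking
```

The same wrapper structure generalizes to more principled filters (e.g., reachability- or control-barrier-function-based checks) that certify a proposed action against a safe set before it reaches the actuators.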