NVIDIA NGC™ is the portal for enterprise services, software, management tools, and support for end-to-end AI and digital twin workflows. Bring your solutions to market faster with fully managed services, or take advantage of performance-optimized software to build and deploy solutions on your preferred cloud, on-prem, and edge systems.
Enterprise Cloud Services
NGC offers a collection of cloud services, including NVIDIA NeMo, BioNeMo, and Riva Studio for generative AI, drug discovery, and speech AI solutions, and the NGC Private Registry for securely sharing proprietary AI software.
Try State-of-the-Art Generative AI Models From Your Browser
NVIDIA AI Foundation Models offer an easy-to-use interface for quickly experiencing generative AI models from your browser without any setup. After exploring a model, you can customize it and take it to production with NVIDIA AI Enterprise.
NGC Catalog: GPU-Optimized Software Hub for AI, Digital Twins, and HPC
The NGC catalog provides access to GPU-accelerated software that speeds up end-to-end workflows with performance-optimized containers, pretrained AI models, and industry-specific SDKs that can be deployed on premises, in the cloud, or at the edge.
Test-drive the models directly from your browser, integrate them into your applications using APIs, or download and run them on your Windows machine without any setup.
From HPC to conversational AI to medical imaging to recommender systems and more, NGC Collections offer ready-to-use containers, pretrained models, SDKs, and Helm charts for diverse use cases and industries—in one place—to speed up your application development and deployment process.
Language Modeling
Language modeling is a natural language processing (NLP) task that determines the probability of a given sequence of words occurring in a sentence.
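Concretely, the sequence probability a language model assigns is commonly factored with the chain rule, each word conditioned on the words before it:

```latex
P(w_1, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})
```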
Image segmentation is the field of image processing concerned with separating an image into multiple subgroups or regions that represent distinct objects or subparts.
Object detection involves not only detecting the presence and location of objects in images and videos, but also categorizing them into everyday object classes.
Applications of automatic speech recognition (ASR) include giving voice commands to an interactive virtual assistant, converting audio to subtitles on an online video, and more.
Speech synthesis, or text-to-speech, is the task of artificially producing human speech from raw transcripts. Text-to-speech models are used, for example, when a mobile device converts the text on a webpage to speech.
High-performance computing (HPC) is one of the most essential tools fueling the advancement of computational science, and that universe of scientific computing has expanded in all directions.
NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software that enables organizations to solve new challenges while increasing operational efficiency.
Trustworthy AI
Take advantage of software containers that are scanned monthly to reduce security concerns and AI models that provide details around bias, explainability, safety, security, and privacy.
Build AI solutions and manage the lifecycle of AI applications with global enterprise support that ensures your business-critical projects stay on track.
Software from the NGC catalog runs on bare-metal servers, Kubernetes, or on virtualized environments and can be deployed on premises, in the cloud, or at the edge—maximizing utilization of GPUs, portability, and scalability of applications. Users can manage the end-to-end AI development lifecycle with NVIDIA Base Command™.
Software from the NGC catalog can be deployed on GPU-powered instances, either directly on virtual machines (VMs) or on Kubernetes services offered by major cloud service providers (CSPs). NVIDIA AI software makes it easy for enterprises to develop and deploy their solutions in the cloud.
At the Edge
As computing expands beyond data centers to the edge, software from the NGC catalog can be deployed on Kubernetes-based edge systems for low-latency, high-throughput inference. Securely deploy, manage, and scale AI applications from NGC across distributed edge infrastructure with NVIDIA AI Enterprise.
Create AI Applications Faster with NVIDIA TAO
NVIDIA TAO is a framework for training, adapting, and optimizing AI models that eliminates the need for large training sets and deep AI expertise, simplifying the creation of enterprise AI applications and services.
The NGC software catalog provides a range of resources that meet the needs of data scientists, developers, and researchers with varying levels of expertise, including containers, pretrained models, domain-specific SDKs, use case-based collections, and Helm charts for the fastest AI implementations.
Build AI Solutions Faster with All of the Software You Need
Collections make it easy to discover compatible framework containers, models, Jupyter notebooks, and other resources to get started in AI faster. Each collection also provides detailed documentation for deploying its content for specific use cases.
The NGC catalog offers ready-to-use collections for various applications, including NLP, ASR, intelligent video analytics, and object detection.
The NGC catalog hosts containers for the top AI and data science software, tuned, tested, and optimized by NVIDIA. Fully tested containers for HPC applications and data analytics are also available, allowing users to build solutions from a tested framework with complete control.
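As a sketch of the typical workflow, a framework container can be pulled from the NGC registry and launched with Docker. The image tag below is illustrative; browse the catalog for current releases.

```shell
# Pull a GPU-optimized framework container from the NGC registry
# (the tag is an example -- check the catalog for current versions)
docker pull nvcr.io/nvidia/pytorch:24.05-py3

# Launch it interactively with GPU access, mounting the current
# directory into the container's workspace
docker run --gpus all -it --rm \
    -v "$PWD":/workspace/host \
    nvcr.io/nvidia/pytorch:24.05-py3
```

The same pull-and-run pattern applies to the HPC and data analytics containers in the catalog.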
Accelerate Your AI Projects With Pretrained Models
The NGC catalog hosts pretrained GPU-optimized models for a variety of common AI tasks that developers can use as is or easily retrain, saving valuable time in bringing solutions to market. Each model comes with a model resume outlining the architecture, training details, datasets used, and limitations. The AI playground enables developers to experience the models directly in their browsers, integrate them into applications using APIs, or download and run them on Windows machines with RTX GPUs.
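Models can also be fetched from the command line with the NGC CLI, assuming it is installed and configured with an API key (`ngc config set`). The model path below is a placeholder; look up real paths on each model's catalog page.

```shell
# List models published under the nvidia org
ngc registry model list "nvidia/*"

# Download a specific model version
# (<team>, <model_name>, and <version> are placeholders)
ngc registry model download-version "nvidia/<team>/<model_name>:<version>"
```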
Helm charts automate software deployment on Kubernetes clusters. The NGC catalog hosts Kubernetes-ready Helm charts that make it easy to consistently and securely deploy both NVIDIA and third-party software.
NVIDIA GPU Operator is a suite of NVIDIA drivers, container runtime, device plug-in, and management software that IT teams can install on Kubernetes clusters to give users faster access to run their workloads.
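As an example of consuming a chart from NGC, the GPU Operator can be installed from the NVIDIA Helm repository. The commands below follow the published install flow; verify them against the current GPU Operator documentation for your cluster.

```shell
# Add the NVIDIA Helm repository hosted on NGC
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Deploy the GPU Operator into its own namespace
helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator
```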
Deliver Solutions Faster with Ready-to-Deploy AI Workflows
The NGC catalog features NVIDIA TAO Toolkit, NVIDIA Triton™ Inference Server, and NVIDIA TensorRT™ to enable deep learning application developers and data scientists to re-train deep learning models and easily optimize and deploy them for inference.
Learn how DeepZen, an AI company focused on human-like speech with emotions, leverages the NGC catalog to automate processes such as audio recordings and voiceovers.
Learn how the University of Arizona employs containers from the NGC catalog to accelerate their scientific research by creating 3D point clouds directly on drones.
The NGC catalog provides a comprehensive collection of GPU-optimized containers for AI, machine learning, and HPC that are tested and ready to run on supported NVIDIA GPUs on premises, in the cloud, or at the edge. In addition, the catalog provides pretrained models, model scripts, and industry solutions that can be easily integrated into existing workflows.
Compiling and deploying deep learning frameworks can be time-consuming and error-prone, and optimizing AI software and building models demand expertise, time, and compute resources. The NGC catalog addresses these challenges with GPU-optimized software and tools that data scientists, developers, IT, and users can leverage, so they can focus on building their solutions.
Each container has a pre-integrated set of GPU-accelerated software. The stack includes the chosen application or framework, NVIDIA CUDA® Toolkit, accelerated libraries, and other necessary drivers—all tested and tuned to work together immediately with no additional setup.
The NGC catalog features the top AI software, including TensorFlow, PyTorch, MXNet, NVIDIA TensorRT, RAPIDS™, and many more. Browse the NGC catalog to see the full list.
The NGC catalog containers run on PCs, workstations, HPC clusters, NVIDIA DGX systems, NVIDIA-Certified Systems, and NVIDIA GPUs on supported cloud providers. The containers run under the Docker and Singularity runtimes. See the NGC documentation for more information.
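On HPC clusters where Docker is unavailable, the same images can be consumed through Singularity/Apptainer. The tag below is illustrative; substitute a current release from the catalog.

```shell
# Convert an NGC container image into a Singularity image file
singularity pull pytorch.sif docker://nvcr.io/nvidia/pytorch:24.05-py3

# Run it with NVIDIA GPU support (--nv binds the host driver stack)
singularity run --nv pytorch.sif
```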
NVIDIA offers virtual machine image files in the marketplace section of each supported cloud service provider. To run an NGC container, simply pick the appropriate instance type, launch the NGC image, and pull the container into it from the NGC catalog. The exact steps vary by cloud provider, but you can find step-by-step instructions in the NGC documentation.
The most popular deep learning software, such as TensorFlow, PyTorch, and MXNet, is updated monthly by NVIDIA engineers to optimize the complete software stack and get the most from your NVIDIA GPUs.
There’s no charge to download the containers from the NGC catalog (subject to the terms of use). However, for running in the cloud, each cloud service provider has its own pricing for GPU compute instances.
The NGC Private Registry was developed to provide users with a secure space to store and share custom containers, models, model scripts, and Helm charts within their enterprise. The Private Registry allows them to protect their IP while increasing collaboration.
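Pushing a custom container into the Private Registry follows the standard Docker login/tag/push flow against `nvcr.io`; the username is the literal string `$oauthtoken` and the password is your NGC API key. `<org>` and the image name below are placeholders for your own registry space.

```shell
# Authenticate against the NGC registry
# (username is literally $oauthtoken; password is your NGC API key)
docker login nvcr.io --username '$oauthtoken'

# Tag a local image for your org's private registry space and push it
docker tag my-app:latest nvcr.io/<org>/my-app:latest
docker push nvcr.io/<org>/my-app:latest
```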
Users get access to the NVIDIA Developer Forum, supported by a large community of AI and GPU experts from the NVIDIA customer, partner, and employee ecosystem. NVIDIA Enterprise Support is available with NVIDIA AI Enterprise licenses and provides direct access to NVIDIA experts, control of your upgrade and maintenance schedules with long-term support options, and access to training and knowledge-base resources.
In addition, NGC Support Services provides L1-L3 support on NVIDIA-Certified Systems, available through our OEM partners.
NVIDIA-Certified Systems, consisting of NVIDIA EGX™ and HGX™ platforms, enable enterprises to confidently choose performance-optimized hardware and software solutions that securely and optimally run their AI workloads—both in smaller configurations and at scale. See the full list of NVIDIA-Certified Systems.