NVIDIA-Certified Associate

Generative AI LLMs

(NCA-GENL)

About This Certification

The NCA Generative AI LLMs certification is an entry-level credential that validates foundational knowledge of developing, integrating, and maintaining AI-driven applications built with generative AI and large language models (LLMs) on NVIDIA solutions. The exam is online, remotely proctored, includes 50 questions, and has a 60-minute time limit.

Please carefully review NVIDIA's examination policy before scheduling your exam.

If you have any questions, please contact us here.

Certification Exam Details

Duration: 1 hour

Price: $135 

Certification level: Associate

Subject: Generative AI and large language models

Number of questions: 50

Prerequisites: A basic understanding of generative AI and large language models

Language: English 

Validity: This certification is valid for two years from issuance. Recertification may be achieved by retaking the exam.

Credentials: Upon passing the exam, participants will receive a digital badge and optional certificate indicating the certification level and topic.

Exam Preparation

Topics Covered in the Exam

Topics covered in the exam include:

  • Fundamentals of machine learning and neural networks
  • Prompt engineering
  • Alignment
  • Data analysis and visualization
  • Experimentation
  • Data preprocessing and feature engineering
  • Experiment design
  • Software development
  • Python libraries for LLMs
  • LLM integration and deployment

Candidate Audiences

  • AI DevOps engineers
  • AI strategists
  • Applied data scientists
  • Applied data research engineers
  • Applied deep learning research scientists
  • Cloud solution architects
  • Data scientists
  • Deep learning performance engineers
  • Generative AI specialists
  • LLM specialists and researchers
  • Machine learning engineers
  • Senior researchers
  • Software engineers
  • Solutions architects

Exam Study Guide

Review study guide

Exam Blueprint

Please review the table below. It’s organized by topic and weight to indicate how much of the exam is focused on each subject. Topics are mapped to NVIDIA Training courses and workshops that cover those subjects and that you can use to prepare for the exam.

Content Breakdown
Topic | Exam weight
Core Machine Learning and AI Knowledge | 30%
Software Development | 24%
Experimentation | 22%
Data Analysis and Visualization | 14%
Trustworthy AI | 10%

Recommended Training
Type of course | Duration | Cost

Generative AI Explained
Self-paced | 2 hours | Free

You can take one of these courses:
Getting Started With Deep Learning
Self-paced | 8 hours | $90

Fundamentals of Deep Learning
Workshop | 8 hours | $500

You can take one of these courses:
Accelerating End-to-End Data Science Workflows
Self-paced | 6 hours | $90

Fundamentals of Accelerated Data Science
Workshop | 8 hours | $500

Introduction to Transformer-Based Natural Language Processing
Self-paced | 6 hours | $30

Building Transformer-Based Natural Language Processing Applications
Workshop | 8 hours | $500

Prompt Engineering With LLaMA-2
Self-paced | 3 hours | $30

Augment Your LLM Using Retrieval-Augmented Generation
Self-paced | 1 hour | Free

You can take one of these courses:
Building RAG Agents for LLMs
Self-paced | 8 hours | Free

Building RAG Agents for LLMs
Workshop | 8 hours | $500

Rapid Application Development With Large Language Models (LLMs)
Workshop | 8 hours | $500

You can take one of these courses:
Generative AI With Diffusion Models
Self-paced | 8 hours | $90

Generative AI With Diffusion Models
Workshop | 8 hours | $500

Efficient Large Language Model (LLM) Customization
Workshop | 8 hours | $500

Review These Additional Materials

Contact Us

NVIDIA offers training and certification for professionals looking to enhance their skills and knowledge in the field of AI, accelerated computing, data science, advanced networking, graphics, simulation, and more.

Contact us to learn how we can help you achieve your goals.

Stay Up to Date

Get training news, announcements, and more from NVIDIA, including the latest information on new self-paced courses, instructor-led workshops, free training, discounts, and more. You can unsubscribe at any time.

Generative AI Explained

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Define generative AI and explain how it works.
  • Describe various generative AI applications.
  • Explain the challenges and opportunities of generative AI.

You can take one of these courses:

Getting Started With Deep Learning
Fundamentals of Deep Learning

Skills covered in these courses:

Core Machine Learning and AI Knowledge

  • Understand the fundamental techniques and tools required to train a deep learning model.

Software Development

  • Gain experience with common deep learning data types and model architectures. 
  • Leverage transfer learning between models to achieve efficient results with less data and computation. 
  • Take on your own project with a modern deep learning framework.

Experimentation

  • Enhance datasets through data augmentation to improve model accuracy.
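
To ground these skills, here is a minimal transfer-learning sketch with data augmentation, assuming PyTorch and torchvision as the deep learning framework (one common choice; the dataset path, model, and hyperparameters below are placeholders):

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Data augmentation: random crops and flips enlarge the effective training set.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Hypothetical image-folder dataset with one subdirectory per class.
train_ds = datasets.ImageFolder("data/train", transform=train_tfms)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet weights, freeze the backbone,
# and train only a new classification head for this dataset's classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single epoch, just to show the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```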

You can take one of these courses:

Accelerating End-to-End Data Science Workflows
Fundamentals of Accelerated Data Science

Skills covered in these courses:

Data Analysis and Visualization

Understand GPU-accelerated data manipulation:

  • Ingest and prepare several datasets (some larger-than-memory) for use in multiple machine learning exercises.
  • Read data directly to single and multiple GPUs with cuDF and Dask cuDF.
  • Prepare information for machine learning tasks on the GPU with cuDF.
  • Apply several essential machine learning techniques to prepared data.
  • Use supervised and unsupervised GPU-accelerated algorithms with cuML.
  • Train XGBoost models with Dask on multiple GPUs.
  • Create and analyze graph data on the GPU with cuGraph.
  • Use NVIDIA RAPIDS™ to integrate multiple massive datasets and perform analysis.
  • Implement GPU-accelerated data preparation and feature extraction using cuDF and Apache Arrow data frames.
  • Apply a broad spectrum of GPU-accelerated machine learning tasks using XGBoost and a variety of cuML algorithms.
  • Execute GPU-accelerated, massive-scale graph analytics rapidly with cuGraph routines.
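
As a rough illustration of this workflow, the sketch below reads a hypothetical CSV with cuDF, prepares a feature on the GPU, and trains a GPU-accelerated XGBoost classifier. The file path and column names are assumptions, and the same pattern extends to Dask cuDF and xgboost.dask for multi-GPU scaling:

```python
import cudf
import xgboost as xgb
from cuml.model_selection import train_test_split

# Hypothetical tabular dataset with a numeric "amount" column and a binary "label".
df = cudf.read_csv("transactions.csv")

# GPU-side preparation with cuDF: drop missing rows and derive a normalized feature.
df = df.dropna()
df["amount_norm"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()

X = df.drop(columns=["label"])
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train an XGBoost classifier on the GPU; xgboost.dask scales the same code out to multiple GPUs.
model = xgb.XGBClassifier(tree_method="gpu_hist", n_estimators=200)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("accuracy:", float((preds == y_test.to_numpy()).mean()))
```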

Introduction to Transformer-Based Natural Language Processing

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Describe how transformers are used as the basic building blocks of modern LLMs for natural language processing (NLP) applications.
  • Understand how transformer-based LLMs can be used to manipulate, analyze, and generate text-based data.

Software Development

  • Leverage pretrained, modern LLMs to solve various NLP tasks such as token classification, text classification, summarization, and question-answering.
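
One common way to exercise these tasks is the Hugging Face transformers pipeline API (an assumption for illustration; the course description does not prescribe a specific library). Each pipeline downloads a default pretrained checkpoint on first use:

```python
from transformers import pipeline

# Summarization, token classification (NER), and extractive question answering,
# each backed by a default pretrained checkpoint.
summarizer = pipeline("summarization")
ner = pipeline("token-classification", aggregation_strategy="simple")
qa = pipeline("question-answering")

context = ("NVIDIA is headquartered in Santa Clara, California, and builds GPUs "
           "used to train large language models.")

print(summarizer(context, max_length=25, min_length=5)[0]["summary_text"])
print(ner("NVIDIA is headquartered in Santa Clara, California."))
print(qa(question="Where is NVIDIA headquartered?", context=context)["answer"])
```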

Experimentation

  • Understand how transformer-based LLMs can be used to manipulate, analyze, and generate text-based data.  
  • Leverage pretrained, modern LLMs to solve various NLP tasks such as token classification, text classification, summarization, and question-answering.

Data Analysis and Visualization

  • Understand how transformer-based LLMs can be used to manipulate, analyze, and generate text-based data.

Building Transformer-Based Natural Language Processing Applications

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Know how transformers are used as the basic building blocks of modern LLMs for natural language processing (NLP) applications. 
  • Know how self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results.

Software Development

  • Apply self-supervised transformer-based models to concrete NLP tasks using NVIDIA NeMo™. 
  • Deploy an NLP project for live inference on NVIDIA Triton™. 
  • Manage inference challenges and deploy refined models for live applications.
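
For the deployment items above, a minimal client-side sketch of querying a model served by NVIDIA Triton™ over HTTP is shown below. It assumes a Triton server is already running locally, and the model name, tensor names, shapes, and datatypes are hypothetical and must match the deployed model's configuration:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton Inference Server assumed to be running on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical token-ID input for a hypothetical "bert_ner" model; names, shapes,
# and datatypes must match the model's config.pbtxt on the server.
input_ids = np.random.randint(0, 30000, size=(1, 128)).astype(np.int64)
infer_input = httpclient.InferInput("input_ids", list(input_ids.shape), "INT64")
infer_input.set_data_from_numpy(input_ids)

# Run inference and read back the (hypothetical) output tensor.
response = client.infer("bert_ner", inputs=[infer_input])
logits = response.as_numpy("logits")
print(logits.shape)
```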

Experimentation

  • Leverage pretrained, modern LLMs to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question-answering.

Prompt Engineering With LLaMA-2

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Iteratively write precise prompts to bring LLM behavior in line with your intentions.

Experimentation

  • Shape LLM behavior by editing the powerful system message.
  • Guide LLMs with one-to-many-shot (few-shot) prompt engineering.
  • Incorporate prompt-response history into the LLM context to create chatbot behavior.
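
As a concrete example of these techniques, the sketch below assembles a LLaMA-2-style chat prompt from a system message, few-shot examples, and prior conversation turns. The [INST]/<<SYS>> tag layout follows the published LLaMA-2 chat template; the example content is invented for illustration:

```python
def build_prompt(system, shots, history, user_msg):
    """Build a LLaMA-2 chat prompt.

    shots and history are lists of (user, assistant) pairs; the final user
    message is left open for the model to complete.
    """
    prompt = ""
    turns = shots + history + [(user_msg, None)]
    for i, (user, assistant) in enumerate(turns):
        # The system message is embedded only in the first [INST] block.
        sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if i == 0 else ""
        prompt += f"<s>[INST] {sys_block}{user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt

prompt = build_prompt(
    system="You are a terse assistant that answers in one sentence.",
    shots=[("What is a GPU?", "A processor specialized for parallel computation.")],
    history=[],
    user_msg="What is an LLM?",
)
print(prompt)
```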

Augment Your LLM Using Retrieval-Augmented Generation

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Understand the basics of retrieval-augmented generation (RAG).  
  • Understand the RAG process.  
  • Be familiar with NVIDIA AI Foundation models and the components that constitute a RAG model.
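
A minimal sketch of the retrieve-then-augment flow is shown below. It assumes the sentence-transformers library and a small MiniLM encoder for embeddings; any embedding model (including NVIDIA-hosted endpoints) and any vector database fit the same pattern:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# A toy document store; real RAG systems use a vector database instead of a list.
docs = [
    "NVIDIA NeMo is a framework for building and customizing generative AI models.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
    "cuDF provides a pandas-like DataFrame API that runs on the GPU.",
]

# Embed documents and the query with an off-the-shelf sentence encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query = "What does retrieval-augmented generation do?"
q_vec = encoder.encode([query], normalize_embeddings=True)[0]

# Retrieval: cosine similarity (dot product of normalized vectors); keep the best match.
best = int(np.argmax(doc_vecs @ q_vec))

# Augmentation: prepend the retrieved context to the prompt sent to the LLM.
prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {query}"
print(prompt)
```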

Building RAG Agents for LLMs

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Explore scalable deployment strategies for LLMs and vector databases. 
  • Practice with state-of-the-art models with clear next steps regarding productionization and framework exploration.

Software Development

  • Understand microservices, including how to work with them and how to develop your own.
  • Practice with state-of-the-art models with clear next steps regarding productionization and framework exploration.

Experimentation

  • Experiment with modern LangChain paradigms to develop dialog management and document-retrieval solutions.

Trustworthy AI

  • Practice with state-of-the-art models with clear next steps regarding productionization and framework exploration.

Rapid Application Development With Large Language Models (LLMs)

Skills covered in this course:

Software Development

  • Find, pull in, and experiment with the Hugging Face model repository and the associated transformers API.
  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation.

Experimentation

  • Find, pull in, and experiment with the Hugging Face model repository and the associated transformers API. 
  • Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification. 
  • Use decoder models to generate sequences like code, unbounded answers, and conversations. 
  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation.
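
To illustrate the encoder/decoder distinction, the sketch below uses Hugging Face pipelines with DistilBERT and GPT-2 as stand-in checkpoints (the specific models are assumptions made for illustration):

```python
import numpy as np
from transformers import pipeline

# Encoder model: turn text into an embedding by mean-pooling DistilBERT token vectors.
extractor = pipeline("feature-extraction", model="distilbert-base-uncased")
tokens = np.array(extractor("GPUs accelerate deep learning.")[0])
embedding = tokens.mean(axis=0)
print("embedding size:", embedding.shape)

# Zero-shot classification: an NLI model scores arbitrary candidate labels.
classifier = pipeline("zero-shot-classification")
result = classifier("The GPU ran out of memory during training.",
                    candidate_labels=["hardware issue", "billing question"])
print(result["labels"][0], result["scores"][0])

# Decoder model: autoregressively continue a prompt with GPT-2.
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models are", max_new_tokens=20)[0]["generated_text"])
```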

Trustworthy AI

  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation.

Generative AI With Diffusion Models

Skills covered in this course:

Trustworthy AI

  • Understand content authenticity and how to build trustworthy models.

Efficient Large Language Model (LLM) Customization

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Know how to apply fine-tuning techniques.
  • Understand how to effectively integrate and interpret diverse data types within a single-model framework.

Software Development

  • Leverage the NVIDIA NeMo™ framework to customize models like GPT, LLaMA-2, and Falcon with ease.

Experimentation

  • Use prompt engineering to improve the performance of pretrained LLMs. 
  • Apply various fine-tuning techniques.
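
The course itself teaches these techniques with the NVIDIA NeMo™ framework; as an illustrative stand-in, the sketch below applies one parameter-efficient fine-tuning technique (LoRA) using the Hugging Face PEFT library and GPT-2, with hyperparameters chosen only for demonstration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small base model; its original weights will stay frozen.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: inject small trainable low-rank adapters into the attention projections.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# From here, the wrapped model trains like any other causal LM; only the adapters update.
```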

Data Analysis and Visualization

  • Assess the performance of fine-tuned models.