A free virtual event, hosted by the NVIDIA Deep Learning Institute.
November 17, 8:00 a.m. PT / 5:00 p.m. CET
Join us for an exciting, interactive day delving into cutting-edge techniques in large language model (LLM) application development.
LLM Developer Day offers hands-on, practical guidance from LLM practitioners, who share their insights and best practices for getting started with LLM application development and advancing it.
Learn practical methods for designing and implementing LLM-powered systems on real-world business data using popular, ready-to-go LLM APIs—no specialized hardware, model training, or tricky deployment required. We'll show techniques for engineering effective inputs to the models (“prompts”) and how to combine LLMs with other systems, including business databases, with toolkits like LangChain. Join us and learn how to build LLM systems to generate tangible business results.
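The idea of grounding an LLM in business data can be sketched without any model call at all: query the database, then engineer the results into the prompt. The sketch below uses an in-memory SQLite table with a hypothetical schema (in practice a toolkit like LangChain can wire this up, and the finished prompt would be sent to any hosted LLM API).

```python
import sqlite3

# Toy in-memory "business database" (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EMEA", 120000.0), ("AMER", 250000.0), ("APAC", 90000.0)],
)

def build_prompt(question: str) -> str:
    """Engineer a prompt that grounds the model in live database rows."""
    rows = conn.execute(
        "SELECT region, SUM(revenue) FROM orders GROUP BY region"
    ).fetchall()
    context = "\n".join(f"- {region}: ${total:,.0f}" for region, total in rows)
    return (
        "You are a business analyst. Use ONLY the data below.\n\n"
        f"Revenue by region:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt("Which region had the highest revenue?")
# Send `prompt` to your LLM API of choice; no model call is made here.
print(prompt)
```

Because the data lives in the prompt, the model answers from current business records rather than from whatever it memorized during training.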
Push LLMs beyond the quality limits of off-the-shelf models and APIs by customizing them for domain-specific applications. We'll discuss strategies for preparing datasets and showcase the gains from different forms of customization using practical, real-world examples. We'll also outline strategies for building retrieval-augmented generation (RAG) systems. Join us and learn about model tuning techniques applicable to both API-based and self-managed LLMs.
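A RAG system can be reduced to two steps: retrieve the documents most relevant to a query, then augment the prompt with them. The sketch below uses word-overlap scoring and made-up documents purely for illustration; production systems replace the scoring function with dense vector embeddings and a vector database.

```python
import re
from collections import Counter

# Tiny document store (hypothetical content for illustration).
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Word-overlap relevance; real systems use embeddings instead."""
    return sum((tokens(query) & tokens(doc)).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def rag_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = rag_prompt("What is the refund policy?")
print(prompt)
```

Retrieval keeps the prompt grounded in your own corpus, which complements tuning: tuning shapes how the model answers, while RAG controls what facts it answers from.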
In this session, we'll explore foundational AI models in biology, along with practical protein engineering and design applications supported by real-world examples. We'll discuss recent breakthroughs in biology and show how you can use LLMs to predict protein structure and function and to encode protein data computationally. Attendees will learn how to use NVIDIA BioNeMo™, a generative AI platform for drug discovery, to simplify and accelerate training models on their own data and to deploy those models easily and at scale for drug discovery applications.
Cybersecurity is a data problem, and one of the most effective ways of contextualizing data is via natural language. With the advancement of LLMs and accelerated compute, we can represent security data in ways that expand our detection and data generation techniques. In this session, we’ll discuss advancements in LLMs, including how to leverage them throughout the cybersecurity stack, from copilots to synthetic data generation.
Optimizing and deploying LLMs on self-managed hardware—whether in the cloud or on premises—can produce tangible efficiency, data governance, and cost improvements for organizations operating at scale. We'll discuss open, commercially licensed LLMs that run on commonly available hardware and show how to use optimizers to get both lower-latency and higher-throughput inference to reduce compute needs. Join us and learn how to scale up self-managed LLMs to accommodate unique business and application requirements.
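One intuition behind the throughput gains: each decoding step pays a roughly fixed cost (weight loads, kernel launches) that batching amortizes across requests. The toy model below illustrates this with assumed, made-up timing constants; it is not a benchmark of any real serving stack.

```python
# Illustrative cost model: a forward pass costs a fixed overhead plus a
# small per-request increment. Both constants are assumptions.
FIXED_MS = 20.0      # assumed fixed cost per decoding step
PER_REQ_MS = 2.0     # assumed incremental cost per batched request

def throughput(batch_size: int) -> float:
    """Requests per second at a given batch size under this model."""
    step_ms = FIXED_MS + PER_REQ_MS * batch_size
    return batch_size / (step_ms / 1000.0)

for b in (1, 8, 32):
    print(f"batch={b:2d}  ~{throughput(b):7.1f} req/s")
```

Under this model, throughput grows steeply with batch size because the fixed overhead is split across more requests, which is why inference optimizers lean so heavily on batching alongside quantization and fused kernels.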
In this session, we'll recap LLM Developer Day content and answer any questions that attendees may have.
Senior Solution Architecture Manager
Cybersecurity Engineering Director
Healthcare AI Startups Lead
Senior Solutions Architect
Senior Solutions Engineer
Senior Deep Learning Data Scientist
Senior Technical Marketing Engineering Manager
Director of Solutions Engineering
Enterprise Services Lead, EMEA
Take advantage of our comprehensive LLM learning path, covering topics from fundamental to advanced and featuring hands-on training developed and delivered by NVIDIA experts. You can opt for the flexibility of self-paced courses or enroll in instructor-led workshops to earn a certificate of competency.