NVIDIA NeMo

24 Sessions
December 2024
, Product Marketing Manager, NVIDIA
, TME, NVIDIA
, Product Manager, Deep Learning Software, NVIDIA
High-quality training data ensures that generative AI models learn accurately and generalize well, leading to more reliable outputs. In this webinar, we’ll explore how NVIDIA NeMo™ Curator enables developers to easily build scalable data processing pipelines to create high-quality datasets for
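As a rough companion to this session, here is a minimal sketch of the kind of text-curation pipeline it discusses: quality filtering followed by exact deduplication. This is illustrative only and does not use the actual NeMo Curator API; the thresholds and the record fields ("text") are assumptions.

```python
# Minimal, illustrative text-curation pipeline: filter low-quality documents,
# then drop exact duplicates. Not the NeMo Curator API; thresholds are assumptions.
import hashlib
import json
import re

def load_records(path):
    """Read one JSON document per line (JSONL)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def quality_filter(record, min_words=50, max_symbol_ratio=0.1):
    """Drop documents that are too short or mostly non-alphanumeric."""
    text = record["text"]
    if len(text.split()) < min_words:
        return False
    symbols = len(re.findall(r"[^\w\s]", text))
    return symbols / max(len(text), 1) <= max_symbol_ratio

def dedupe(records):
    """Exact deduplication by content hash (fuzzy dedup is out of scope here)."""
    seen = set()
    for record in records:
        digest = hashlib.sha256(record["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield record

def curate(in_path, out_path):
    kept = dedupe(r for r in load_records(in_path) if quality_filter(r))
    with open(out_path, "w", encoding="utf-8") as f:
        for record in kept:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    curate("raw_corpus.jsonl", "curated_corpus.jsonl")
```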
August 2024
, Senior Manager of Enterprise Product Marketing, NVIDIA
, Generative AI Software Product Manager, NVIDIA
, Deep Learning Developer Advocate, NVIDIA
Watch this insightful webinar replay to learn how you can improve the accuracy and scalability of text retrieval for production-ready generative AI pipelines. With the newest available NVIDIA NeMo™ Retriever and NVIDIA NIM™ microservices, developers and IT practitioners can elevate enterprise data
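For orientation, a minimal sketch of how a hosted embedding microservice of this kind is typically queried through an OpenAI-compatible API follows. The base URL and model name are placeholders, not confirmed values for any specific NIM.

```python
# Sketch of querying a locally hosted embedding microservice through an
# OpenAI-compatible API. The base_url and model name are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-used-for-local-deployments",
)

def embed(texts, model="nvidia/example-embedding-model"):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in response.data]

if __name__ == "__main__":
    vectors = embed(["What is NeMo Retriever?", "How do NIM microservices work?"])
    print(len(vectors), "embeddings of dimension", len(vectors[0]))
```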
March 2024
, Senior Manager, AI Enterprise Solution Architecture, NVIDIA
, Technical Marketing Engineer, NVIDIA
Getting started with the right tools for creating and customizing generative AI solutions for your enterprise can be overwhelming. And even beyond the tools and model choice, there are decisions to be made when selecting infrastructure to support your AI lifecycle. In this session, we will break down
October 2024
Advancements in Large Language Models (LLMs) have enabled developers to create a variety of applications such as code generation, translation, and text summarization. The effectiveness of all these models depends on the quality of the data used for training LLMs. Data from public sources
March 2024
, Senior Applied Scientist, Amazon Store Foundational AI
, Senior Manager, Amazon Store Foundational AI
Training a large language model at scale while ensuring efficiency and reliability poses numerous challenges. During this presentation, we'll share our experience training LLMs at Amazon Search, utilizing the NVIDIA NeMo Framework in collaboration with AWS. We'll discuss the process of selecting
March 2024
, Senior Solutions Architect - GenAI&Inference, NVIDIA
, Solutions Architect, NVIDIA
, Senior Deep Learning Data Scientist, NVIDIA
, Senior Deep Learning Data Scientist, NVIDIA
, Solution Architect, NVIDIA
We'll focus on customizing foundation large language models (LLMs) for languages other than English. We'll go through techniques like prompt engineering, prompt tuning, parameter-efficient fine-tuning (PEFT), and supervised instruction fine-tuning (SFT), enabling LLMs to adapt to diverse use cases. We'll
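As a small illustration of parameter-efficient fine-tuning, here is a LoRA sketch using the Hugging Face peft library as a stand-in for the NeMo-based workflow the session covers. The base model name, target modules, and hyperparameters are illustrative assumptions.

```python
# Minimal LoRA (parameter-efficient fine-tuning) sketch with Hugging Face "peft",
# used here as a stand-in for the NeMo-based workflow. Names and hyperparameters
# below are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "gpt2"  # placeholder; a multilingual base model would be used in practice
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank adapters
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per architecture
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model so that only the small adapter weights are trained.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, training proceeds as standard supervised fine-tuning over
# instruction-formatted examples in the target language.
```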
March 2024
, Deep Learning Solutions Architect, NVIDIA
, Deep Learning Solutions Architect, NVIDIA
The demand for accelerated large language models (LLMs) has surged with the growing popularity of generative models. These models, often boasting billions of parameters, hold immense potential, but also pose challenges during large-scale deployments. Join us as we delve into the world of
March 2024
, Senior Product Manager, AI, Domino Data Lab
, Director of Solution Architecture, Tech Alliances, Domino Data Lab
We all recognize the immense business opportunity from generative AI and large language models (LLMs) — particularly those trained or developed on proprietary company data. However, developing them is resource-intensive, time-consuming, and requires deep technical expertise. The NVIDIA NeMo
March 2024
, Head of Applied AI Research, BlackRock
This session focuses on the integration of NeMo Framework and NeMo Retriever to generate a groundbreaking knowledge graph (KG) for financial documents data. By combining KGs with large language models and retrieval-augmented generation mechanisms, this innovative approach
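A rough sketch of the pattern described above follows: facts pulled from a knowledge graph and passages returned by a retriever are combined into one grounded prompt for the LLM. The three helper functions are placeholders standing in for a graph query, NeMo Retriever, and an LLM endpoint.

```python
# Illustrative KG-augmented RAG flow. The helpers are placeholders, not real APIs.
def query_kg(entity: str) -> list[str]:
    # Placeholder for a graph query (e.g., Cypher or SPARQL) over the financial KG.
    return [f"{entity} | files | 10-K annual report"]

def retrieve_passages(question: str, top_k: int = 5) -> list[str]:
    # Placeholder for an embedding-based search over the document corpus.
    return ["<relevant passage 1>", "<relevant passage 2>"][:top_k]

def generate(prompt: str) -> str:
    # Placeholder for a call to the LLM serving endpoint.
    return "<model answer grounded in the supplied context>"

def answer(question: str, entity: str) -> str:
    facts = query_kg(entity)
    passages = retrieve_passages(question)
    prompt = (
        "Answer using only the facts and passages below.\n\n"
        "Knowledge-graph facts:\n- " + "\n- ".join(facts) + "\n\n"
        "Passages:\n- " + "\n- ".join(passages) + "\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)

print(answer("What risks does the company highlight?", "ExampleCorp"))
```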
March 2024
, Product Manager, AI on Google Kubernetes Engine, Google
, Cloud Architect, Google
Are you having trouble getting large language models (LLMs) to work in your organization? You're not alone. We'll look at how to deploy an open-source language model on GKE. We'll show data scientists and machine learning engineers how to use NeMo and TensorRT-LLM with GKE's notebooks. Plus, GKE has a
October 2024
, Chief AI Architect, Wipro Limited
, Senior Data Scientist, Wipro Limited
Discover how health insurance calls can be improved through an AI-powered voice assistant leveraging NVIDIA NeMo Retriever, NIM inference microservices, and NeMo Guardrails. Learn how these technologies reduce human intervention and costs while improving caller experience with fast, accurate responses.
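To give a feel for the guardrails piece, here is a minimal sketch of wrapping an LLM call with NeMo Guardrails so caller-facing responses stay on approved topics. The config directory (model settings and Colang rail definitions) is assumed to exist and is not shown.

```python
# Minimal NeMo Guardrails sketch. The "./guardrails_config" directory and its
# contents (model settings, Colang rails) are assumptions for this example.
from nemoguardrails import LLMRails, RailsConfig

# Load guardrail definitions (allowed topics, refusal flows, model settings).
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Guarded generation: off-topic or unsafe requests are intercepted by the rails
# before or after the underlying LLM is invoked.
response = rails.generate(messages=[
    {"role": "user", "content": "Is physiotherapy covered under my plan?"}
])
print(response["content"])
```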
March 2024
, AVP of AI and Data Science, Softserve, Inc.
Large language models (LLMs) provide new possibilities for engaging and intelligent conversational systems. However, productionizing and managing these models and ensuring they work to your advantage can be challenging. Two key strategies that can help are retrieval-augmented generation (RAG) workflows and NeMo
October 2024
, Manager - Solution Architect & Engineering, NVIDIA AI & Cloud, NVIDIA
In this talk, we’ll explore the challenges and solutions related to building and deploying conversational AI workflows, including modalities like automatic speech recognition, large language models, and speech synthesis models focusing on regional languages. We’ll dive deep into various aspects of the
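A short sketch of the speech-to-speech flow this talk describes follows: speech recognition, an LLM turn, then speech synthesis for a chosen regional language. The three helpers are placeholders for ASR, LLM, and TTS services, and the language code is illustrative; only the wiring between stages is shown.

```python
# Illustrative speech-to-speech pipeline wiring. All helpers are placeholders.
def transcribe(audio: bytes, language: str) -> str:
    # Placeholder for an ASR call in the caller's language.
    return "<recognized caller text>"

def chat(history: list[dict], user_text: str) -> str:
    # Placeholder for an LLM call that sees the running conversation.
    return "<assistant reply in the same language>"

def synthesize(text: str, language: str) -> bytes:
    # Placeholder for a TTS call that renders the reply as audio.
    return b"<audio bytes>"

def handle_turn(audio: bytes, history: list[dict], language: str = "hi-IN") -> bytes:
    user_text = transcribe(audio, language)
    reply = chat(history, user_text)
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply, language)

history: list[dict] = []
handle_turn(b"<caller audio>", history)
print(history)
```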
October 2024
, CEO, CoRover.AI
, VP (Tech. & Product), CoRover.AI
This session will cover the technical and architectural approach that CoRover took to create a grounded and secure generative AI-powered conversational platform. Architected with a large language model fine-tuned to deliver regional language capabilities and cost economics—BharatGPT—the platform is
March 2024
, Senior Solutions Architect, NVIDIA
, Senior Solutions Architect, NVIDIA
, Solution Architect Manager, NVIDIA
Generative AI (GenAI) and large language models (LLMs) enable retailers to build novel and innovative solutions that empower internal employees, reduce costs, and revolutionize the customer experience. As the world’s most advanced platform for accelerated computing, NVIDIA provides hardware and
March 2024
, Principal Data Cloud Architect, Snowflake Computing
, Solution Architect, NVIDIA
Data storage and retrieval have shifted to the cloud, removing data silos and enabling easier, more efficient sharing of large-scale data in enterprises. Behind the rise of large language models (LLMs) has been an intense focus on leveraging custom data that can be used for business applications. This
March 2024
Kari Briski, VP Generative AI Software Product Management, NVIDIA
In this session, Kari Briski, VP Generative AI Software, will provide a deeper understanding of the new NVIDIA offerings announced at GTC and how they are helping organizations supercharge the development and tuning of custom generative AI applications. Kari will also share insights on the
October 2024
, Head - Gen AI and AI for Automotive, Tata Consultancy Services Limited
, Technical Architect, Tata Consultancy Services Limited
This session explores how generative AI can accelerate the development of software-defined vehicles by enhancing customer experience and streamlining the software engineering lifecycle. We'll focus on utilizing NVIDIA's NeMo framework to fine-tune large language models with automotive-specific
March 2024
, Solution Architect, Quantiphi
, Senior Solution Architect - Machine Learning, Quantiphi
LLM-based agents, a concept that emerged from the capabilities of LLMs, represent a paradigm shift from mere automation to genuine intelligence. Agents, in the context of artificial intelligence, are autonomous entities that interact with their environment to execute specific tasks. The advances in
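As a rough illustration of the agent pattern this session describes, here is a minimal tool-using loop: the LLM repeatedly picks a tool, the tool runs against the environment, and the observation is fed back until the model produces a final answer. The llm() call and the tools are placeholders, not a specific framework's API.

```python
# Minimal, illustrative LLM-agent loop. llm() and the tools are placeholders.
import json

def llm(prompt: str) -> str:
    # Placeholder for a real LLM call; must return JSON like
    # {"action": "search", "input": "..."} or {"action": "final", "input": "..."}.
    return json.dumps({"action": "final", "input": "placeholder answer"})

TOOLS = {
    "search": lambda query: f"top documents for: {query}",  # placeholder tool
    "calculator": lambda expr: str(eval(expr)),             # demo only; unsafe for untrusted input
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = json.loads(llm(transcript + "\nChoose a tool or give the final answer as JSON."))
        if decision["action"] == "final":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        transcript += f"\nAction: {decision['action']}({decision['input']})\nObservation: {observation}"
    return "Stopped after reaching the step limit."

print(run_agent("Summarize the latest claims report."))
```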
March 2024
, Solutions Architect, NVIDIA
, Solutions Architect, NVIDIA
Join us for an exciting introduction to foundation models in generative AI! In this training lab, we'll explore the basics of foundation models, their significance, applications, and the latest developments in the field of AI. Whether you're an academic, industry professional, or anyone eager to dive into the
March 2024
, Senior Solution Architect, NVIDIA
, Director of Product Management, Dropbox
, Principal Machine Learning Engineer, Dropbox
Recent advancements in AI and machine learning have opened up a new world of possibilities, but many companies are drowning in petabytes of data and are struggling to deliver large language model (LLM) workflows and applications. Using a real-world use case, Dropbox will do a deep dive into how
March 2024
, Chief Data Scientist, Sapia.ai
, Senior Data Scientist, Sapia.ai
, Senior Machine Learning Engineer, Sapia.ai
Large language models (LLMs) such as GPT-4 can now generate realistic text in real time that's difficult to distinguish from human-written content. The reliability, validity, and fairness of text-based chat interview assessments can be impacted when job candidates use LLMs to generate answers. In this
March 2024
, Chief Executive Officer, Glean
, Senior Strategic Alliances Manager - NLP, NVIDIA
, Chief Executive Officer, LangChain
, Senior Director, Product Management for NeMo (LLMs), NVIDIA
, Senior Director, Generative AI Data Strategy, NVIDIA
, Chief Executive Officer, LlamaIndex
Our panel of experts will talk about the best practices for building robust large language model (LLM)-based enterprise applications that deliver value and efficiency. Products such as ChatGPT have demonstrated the unprecedented power of LLMs in processing information and generating content.