The contest is now closed. Thank you for your participation.
Join the AI innovators pushing the boundaries of generative AI-powered applications using NVIDIA and LangChain technologies. Enter our contest to develop practical, efficient, and creative text and multimodal agents in an area of your choice and you could win one of several exciting prizes. To get started, check out our step-by-step developer resources and connect with NVIDIA and LangChain technical experts and the wider AI community on Discord to navigate challenges during your development journey.
The contest will run May 15–June 24 in the United States, United Kingdom, Japan, Germany, and more.
Project Name: Agents of Inference
Project Name: AI Personal Trainer
Project Name: Mystery Manor
Project Name: Persona - Unreal Engine
Project Name: RGS Tool
Project Name: L'Agent
Project Name: LLM Trading Agent
Project Name: VRM AI Facial Emotions
Project Name: Aileen 2
Project Name: Test Generator App
Project Name: Satyuki, the Code Mentor
Project Name: Trainer's Ally
Project Name: Narravive
Create your next GPU-accelerated generative AI agent project in one of the following categories.
Large language models (LLMs), those with over 8 billion parameters, are rapidly evolving, from GPT-based models to Llama, Gemma, and Mixtral. Developers can leverage these large models to build diverse agents for tasks such as question answering, summarization, and content generation.
As models grow larger, a parallel wave is driving the development of smaller language models (SLMs), those with 8 billion parameters or fewer. For this option, developers are encouraged to use these smaller models to build applications such as local copilots or on-device applications.
There are several ways to build generative AI apps that are powered by LLMs and SLMs. Below are a few examples, along with resources, to guide you on your creative journey.
Build powerful LLM-powered applications with LangChain, a leading framework for creating agents.
You can use popular open-source and NVIDIA foundation models either through the NVIDIA NIM APIs or by using NVIDIA AI Foundation endpoints within the LangChain framework. Once you’ve developed your app, you can add NVIDIA NeMo™ Guardrails to control the output of the LLM according to your use case.
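As a minimal sketch of the NIM-endpoint route, assuming the `langchain-nvidia-ai-endpoints` package is installed and an `NVIDIA_API_KEY` is configured (the model id below is an example, not a requirement):

```python
import os

# Assumed model id; browse the NIM API catalog for current options.
MODEL = "meta/llama3-8b-instruct"

def ask(question: str) -> str:
    # Deferred import so the sketch reads cleanly even without the package.
    from langchain_nvidia_ai_endpoints import ChatNVIDIA
    llm = ChatNVIDIA(model=MODEL, temperature=0.2)
    return llm.invoke(question).content

# The network call only runs when an API key is present.
if os.environ.get("NVIDIA_API_KEY"):
    print(ask("In one sentence, what is retrieval-augmented generation?"))
```

The same `ChatNVIDIA` object can then be composed with prompts, output parsers, and guardrails like any other LangChain chat model.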
If you’d like to develop advanced agents, you can start with LangGraph, a multi-agent framework that's built on top of LangChain.
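To make the graph idea concrete, here is a hypothetical two-node LangGraph workflow, a draft step followed by a review step. It assumes `langgraph` is installed; the node logic is plain-Python placeholder code:

```python
from typing import TypedDict

# Shared state passed between graph nodes.
class State(TypedDict):
    question: str
    draft: str
    answer: str

def draft_node(state: State) -> dict:
    # Placeholder: a real node would call an LLM here.
    return {"draft": f"Draft for: {state['question']}"}

def review_node(state: State) -> dict:
    # Placeholder "review" step acting on the draft.
    return {"answer": state["draft"].upper()}

def build_graph():
    # Deferred import: langgraph may not be installed everywhere.
    from langgraph.graph import StateGraph, END
    g = StateGraph(State)
    g.add_node("draft", draft_node)
    g.add_node("review", review_node)
    g.set_entry_point("draft")
    g.add_edge("draft", "review")
    g.add_edge("review", END)
    return g.compile()
```

Calling `build_graph().invoke({"question": ...})` would run the nodes in order; branching and cycles are added with conditional edges.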
If you’re interested in customizing an agent for a specific task, one way to do this is to fine-tune a model on your own dataset. To do that, you can begin by curating the dataset with NeMo Curator and then fine-tune the model using the NeMo framework or Hugging Face Transformers.
Once you have your custom LLM, you can use the model within the LangChain framework to develop an agent.
For any agents that need to run locally due to privacy and security considerations, you can start much as you would with an LLM-powered agent. Instead of an LLM, though, use a smaller language model with 8 billion parameters or fewer and quantize it with NVIDIA TensorRT™-LLM to reduce the model size so it fits on your GPU.
NVIDIA, along with the LangChain framework, lets you build agents that can run on local compute resources.
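To see why quantization matters for local deployment, here is a back-of-envelope estimate of weight memory alone (a simplifying assumption: activations, the KV cache, and runtime overhead all add more on top):

```python
def weight_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GPU memory for model weights, in GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# An 8B-parameter model at FP16 vs. 4-bit quantization:
fp16 = weight_gib(8, 16)  # ~14.9 GiB of weights
int4 = weight_gib(8, 4)   # ~3.7 GiB of weights
print(f"FP16: {fp16:.1f} GiB, INT4: {int4:.1f} GiB")
```

The roughly 4x reduction is what moves an 8B model from datacenter-class memory into the range of a consumer GPU.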
Register to join the contest and get started using our resources.
Connect with the community of LLM developers and NVIDIA and LangChain technical experts on the NVIDIA Developer Discord channel and NVIDIA Developer Forums.
Set up your development environment and build your project. Use any one of the following NVIDIA technologies along with the LangChain/LangGraph framework to develop your agent app.
Post a 45- to 90-second demo video of your generative AI project on X (Twitter), LinkedIn, or Instagram using the hashtags #NVIDIADevContest and #LangChain. Also, tag one of these NVIDIA social handles:
X (Twitter): @NVIDIAAIDev
LinkedIn: @NVIDIAAI
Instagram: @NVIDIAAI
Once complete, submit all your assets, including links to the source code, demo video, social post, and any other supplementary materials. For a submission to be eligible, all required fields on the submission form must be filled in.
Participants will have the chance to win GPUs and hundreds of dollars worth of rewards from LangChain to continue their learning journey:
See the contest Terms & Conditions
Qualifying submissions will be judged by:
Explore several getting-started generative AI examples that use state-of-the-art models such as Mixtral, Llama, and Gemma, along with accelerated frameworks and libraries from NVIDIA and LangChain.
Get started quickly with foundation models from the NVIDIA NIM APIs, control LLM outputs with NeMo Guardrails, create high-quality datasets using NeMo Curator, and optimize inference with TensorRT-LLM.
Familiarize yourself with the LangChain and LangGraph frameworks through Python and JavaScript documentation and YouTube tutorials.
Get guidance from NVIDIA and LangChain technical experts. Join our LLM community on the NVIDIA Developer Discord channel and NVIDIA Developer Forums to ask your questions and accelerate your development process for the contest.
You can find the list of countries that are open for the contest in the contest terms and conditions.
In addition to NVIDIA and LangChain tools, you can use OpenAI’s API.
Yes, you can fine-tune any LLM and use it for your application. Note that fine-tuning models requires your own compute infrastructure. You can also use Colab notebooks with NeMo. A few generative AI examples can be found on GitHub.
NVIDIA AI Foundation endpoints serve models developed by the community and further optimized by NVIDIA. You can use these models for inference.
Self-hosted NVIDIA NIM Microservices aren’t available at the moment. An alternative is to deploy the model using Hugging Face.
We encourage you to use your personal email address when registering for the contest.
Currently, tool calling isn't supported and won't be available for this contest. Here’s an example you can use if you plan to use a NIM endpoint. You're welcome to use other tools as well.
The goal of this hackathon is to enable developers to create compelling applications using tools and frameworks offered by NVIDIA and LangChain. You can also use other tools that are popular in the developer community.
You can use our endpoints with deployed models. You can create an API key and get 1,000 calls for free; see NVIDIA AI Foundation Endpoints in LangChain.
Yes, you can use multimodal LLMs in your application.
Submissions will be judged based on these parameters:
No, using all the tools isn't a requirement, but using one of the offerings from both NVIDIA and LangChain where applicable is important.
Teams aren't allowed to participate in the contest; entries must be individual.
If you’re using NVIDIA NIM, leave the API key section blank. For any other service, please provide the API keys. A best practice is to create a config file and populate the API keys there.
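A minimal sketch of that config-file practice, using a git-ignored JSON file with environment variables taking precedence (the file name and key names below are illustrative):

```python
import json
import os

def load_api_keys(path: str = "config.json") -> dict:
    """Merge keys from a git-ignored JSON file with environment variables."""
    keys: dict = {}
    if os.path.exists(path):
        with open(path) as f:
            keys.update(json.load(f))
    # Environment variables win, so deployment settings can override the file.
    for name in ("NVIDIA_API_KEY", "OPENAI_API_KEY"):
        if name in os.environ:
            keys[name] = os.environ[name]
    return keys
```

Add `config.json` to your `.gitignore` so the keys never land in the repository you submit.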
Details about how to use Hugging Face Inference as a Service powered by NVIDIA NIM can be found in this technical blog: NVIDIA Collaborates With Hugging Face to Simplify Generative AI Model Deployments.
Any open-source license should be good, such as Apache 2.0, BSD, or MIT.
NVIDIA Privacy Policy