Join the AI innovators pushing the boundaries of generative AI-powered applications using NVIDIA and LangChain technologies. Enter our contest to develop practical, efficient, and creative text and multimodal agents in an area of your choice and you could win one of several exciting prizes. To get started, check out our step-by-step developer resources and connect with NVIDIA and LangChain technical experts and the wider AI community on Discord to navigate challenges during your development journey.
The contest will run May 15–June 24 in the United States, United Kingdom, Japan, Germany, and more.
Create your next GPU-accelerated generative AI agent project in one of the following categories.
Large Language Models (Over 8B Parameters)
Large language models (LLMs) with over 8 billion parameters are rapidly evolving, from GPT-based models to Llama, Gemma, and Mixtral. Developers can use these large models to build diverse agents for tasks such as question answering, summarization, and content generation.
Small Language Models (8B Parameters or Less)
Even as models grow larger, a new wave of development is focused on small language models (SLMs) with 8 billion parameters or fewer. For this category, developers are encouraged to use these smaller models to build applications such as local copilots or other on-device applications.
How to Get Started
There are several ways to build generative AI apps that are powered by LLMs and SLMs. Below are a few examples, along with resources, to guide you on your creative journey.
LLM-Powered Agents
Build powerful LLM-powered applications with LangChain, a leading framework for creating agents.
You can use popular open-source and NVIDIA foundation models either through the NVIDIA NIM APIs or by using NVIDIA AI Foundation endpoints within the LangChain framework. Once you’ve developed your app, you can add NVIDIA NeMo™ Guardrails to control the output of the LLM according to your use case.
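For reference, here's a minimal sketch of calling an NVIDIA-hosted foundation model from LangChain. It assumes the langchain-nvidia-ai-endpoints package is installed and an NVIDIA_API_KEY environment variable is set; the model name is only an example, and guardrails are omitted for brevity.

```python
# Minimal sketch: calling an NVIDIA-hosted foundation model from LangChain.
# Assumes `pip install langchain-nvidia-ai-endpoints` and that the
# NVIDIA_API_KEY environment variable is set; the model name is illustrative.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama3-8b-instruct")  # any catalog model works

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("user", "{question}"),
])

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"question": "Summarize what an AI agent is in one sentence."}))
```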
If you’d like to develop advanced agents, you can start with LangGraph, a multi-agent framework that's built on top of LangChain.
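As a starting point, the following is a minimal LangGraph sketch with two nodes, one that drafts an answer and one that refines it. The state fields, node names, and model choice are illustrative assumptions rather than contest requirements.

```python
# Minimal LangGraph sketch: a two-node graph where one node drafts an answer
# and a second node refines it. Names and model are illustrative assumptions.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_nvidia_ai_endpoints import ChatNVIDIA

class AgentState(TypedDict):
    question: str
    draft: str

llm = ChatNVIDIA(model="meta/llama3-8b-instruct")

def draft_answer(state: AgentState) -> AgentState:
    reply = llm.invoke(state["question"])
    return {"question": state["question"], "draft": reply.content}

def refine_answer(state: AgentState) -> AgentState:
    reply = llm.invoke(f"Improve this answer: {state['draft']}")
    return {"question": state["question"], "draft": reply.content}

graph = StateGraph(AgentState)
graph.add_node("draft", draft_answer)
graph.add_node("refine", refine_answer)
graph.set_entry_point("draft")
graph.add_edge("draft", "refine")
graph.add_edge("refine", END)

app = graph.compile()
result = app.invoke({"question": "What can a local copilot do?"})
print(result["draft"])
```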
Customizing Agents
If you’re interested in customizing an agent for a specific task, one way to do this is to fine-tune a model on your own dataset. Start by curating the dataset with NeMo Curator, then fine-tune the model using the NeMo framework or Hugging Face Transformers.
Once you have your custom LLM, you can use the model within the LangChain framework to develop an agent.
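As one illustration (not the only path), a locally fine-tuned Hugging Face checkpoint can be wrapped so LangChain can call it like any other LLM. The model path and package names below are assumptions; substitute your own fine-tuned checkpoint.

```python
# Hedged sketch: wrapping a locally fine-tuned Hugging Face model so it can be
# used as a LangChain LLM. The model path is a placeholder for your own
# fine-tuned checkpoint; assumes `pip install langchain-huggingface transformers`.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface import HuggingFacePipeline

model_path = "./my-finetuned-model"  # placeholder: your fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

generate = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
)

llm = HuggingFacePipeline(pipeline=generate)
print(llm.invoke("Explain retrieval-augmented generation briefly."))
```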
Local Copilots
For agents that need to run locally due to privacy or security considerations, you can follow the same approach as for LLM-powered agents.
Instead of a large model, use a small language model with 8 billion parameters or fewer and quantize it with NVIDIA TensorRT™-LLM to reduce the model size so it fits on your GPU.
NVIDIA, along with the LangChain framework, lets you build agents that can run on local compute resources.
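As a rough illustration of the local-copilot idea, the sketch below loads a small (≤8B) model in 4-bit precision with Hugging Face and bitsandbytes as a stand-in for the TensorRT-LLM quantization flow described above; the model name and settings are assumptions, not contest requirements.

```python
# Illustrative sketch only: loading a small (<=8B) model in 4-bit precision via
# Hugging Face + bitsandbytes so it fits on a local GPU, then exposing it to
# LangChain. This stands in for the TensorRT-LLM quantization flow described
# above; model name and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from langchain_huggingface import HuggingFacePipeline

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # example SLM-sized model
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

local_llm = HuggingFacePipeline(
    pipeline=pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=128)
)
print(local_llm.invoke("Draft a short commit message for a bug fix."))
```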
Enter the Contest
Step 1: Start Now
Register to join the contest and get started using our resources.
Set up your development environment and build your project. Use any one of the following NVIDIA technologies along with the LangChain/LangGraph framework to develop your agent app.
Foundation models through the NVIDIA NIM APIs or endpoints
Post a 45- to 90-second demo video of your generative AI project on X (Twitter), LinkedIn, or Instagram using the hashtags #NVIDIADevContest and #LangChain. Also, tag one of these NVIDIA social handles:
Once completed, submit all your assets, including links to the source code, demo video, social post, and any other supplementary materials. To be eligible, you must fill in all the required fields on the submission form.
Real-world application: Evaluates the impact and novelty of the project in addressing real-world challenges and the ease of use for its target audience
Technology integration: Assesses how effectively the developer has used NVIDIA’s LLM stack and LangChain technologies in the project
Quality of submission: Reviews the comprehensiveness and clarity of the project details, instructions, and demo
Additional Resources
Explore Generative AI Examples
Explore several getting-started generative AI examples that use state-of-the-art models such as Mixtral, Llama, and Gemma, along with accelerated frameworks and libraries from NVIDIA and LangChain.
Get guidance from NVIDIA and LangChain technical experts. Join our LLM community on the NVIDIA Developer Discord channel and NVIDIA Developer Forums to ask your questions and accelerate your development process for the contest.
In addition to NVIDIA and LangChain tools, you can use OpenAI’s API.
Yes, you can fine-tune any LLM and use it for your application. Note that fine-tuning models requires your own compute infrastructure. You can also use Colab notebooks with NeMo. A few generative AI examples can be found on GitHub.
NVIDIA AI Foundation endpoints are models developed by the community and further optimized by NVIDIA. You can use these models for inferencing.
Self-hosted NVIDIA NIM Microservices aren’t available at the moment. An alternative is to deploy the model using Hugging Face.
We encourage you to use your personal email address when registering for the contest.
Currently, tool calling isn't supported and won't be available for this contest. Here’s an example you can use if you plan to use a NIM endpoint. You're welcome to use other tools as well.
The goal of this hackathon is to enable developers to create compelling applications using tools and frameworks offered by NVIDIA and LangChain. You can also use other tools popular in the developer community.
Yes, you can use multimodal LLMs in your application.
Submissions will be judged based on these parameters:
Real-world application: Evaluates the impact and novelty of the project in addressing real-world challenges and the ease of use for its target audience
Technology integration: Assesses how effectively the developer has used NVIDIA’s LLM products and LangChain technologies in the project
Quality of submission: Reviews the comprehensiveness and clarity of the project’s details, instructions, and demo
No, using all the tools isn't a requirement, but you should use at least one offering from both NVIDIA and LangChain where applicable.
Team entries aren't allowed in the contest.
If you’re using NVIDIA NIM, leave the API key section blank. For any other service, please provide the API keys. A best practice is to create a config file and populate the API keys there, as sketched below.
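For example, here's a minimal sketch of the config-file approach, assuming python-dotenv is installed; the variable names are illustrative.

```python
# Minimal sketch of the config-file approach: keep API keys in a local .env
# file (excluded from source control) and load them at startup.
# Assumes `pip install python-dotenv`; variable names are illustrative.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a local .env file into the environment

nvidia_api_key = os.environ.get("NVIDIA_API_KEY")        # leave unset if using NIM
other_api_key = os.environ.get("OTHER_SERVICE_API_KEY")  # any additional service
```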