The contest is now closed. Thank you for your participation.
Join the AI innovators pushing the boundaries of generative AI-powered applications using NVIDIA and LangChain technologies. Enter our contest to develop practical, efficient, and creative text and multimodal agents in an area of your choice and you could win one of several exciting prizes. To get started, check out our step-by-step developer resources and connect with NVIDIA and LangChain technical experts and the wider AI community on Discord to navigate challenges during your development journey.
The contest will run May 15–June 24 in the United States, United Kingdom, Japan, Germany, and more.
Create your next GPU-accelerated generative AI agent project in one of the following categories.
Large language models (LLMs), those with more than 8 billion parameters, are rapidly evolving, from GPT-based models to Llama, Gemma, and Mixtral. Developers can leverage these large models to build diverse agents for tasks such as question answering, summarization, and content generation.
As models grow larger, a new wave is driving the development of smaller language models (SLMs), those with 8 billion parameters or fewer. For this option, developers are encouraged to use these smaller models to build applications such as local copilots or on-device applications.
There are several ways to build generative AI apps that are powered by LLMs and SLMs. Below are a few examples, along with resources, to guide you on your creative journey.
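One simple pattern is a question-answering agent that calls a hosted chat endpoint. The Python sketch below assumes an OpenAI-compatible chat-completions API of the kind exposed through the NVIDIA API catalog; the URL, model name, and response shape are illustrative assumptions, so confirm them against the current documentation before relying on them.

```python
import json
import os
import urllib.request

# Illustrative endpoint; check the NVIDIA API catalog for the exact URL and
# the list of models available to your API key.
CHAT_URL = "https://integrate.api.nvidia.com/v1/chat/completions"


def build_chat_request(model: str, question: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "max_tokens": max_tokens,
    }


def ask(model: str, question: str) -> str:
    """Send the request; expects an NVIDIA_API_KEY environment variable."""
    payload = build_chat_request(model, question)
    req = urllib.request.Request(
        CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumes the OpenAI-style response layout: choices[0].message.content.
    return body["choices"][0]["message"]["content"]
```

The same request shape works whether the model behind the endpoint is a large LLM or a smaller SLM, which makes it easy to swap models while you prototype.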
Participants will have the chance to win GPUs and hundreds of dollars worth of rewards from LangChain to continue their learning journey:
See the contest Terms & Conditions
Qualifying submissions will be judged by:
You can find the list of countries that are open for the contest in the contest terms and conditions.
In addition to NVIDIA and LangChain tools, you can use OpenAI’s API.
Yes, you can fine-tune any LLM and use it for your application. Note that fine-tuning models requires your own compute infrastructure. You can also use Colab notebooks with NeMo. A few generative AI examples can be found on GitHub.
NVIDIA AI Foundation endpoints are models developed by the community and further optimized by NVIDIA. You can use these models for inference.
Self-hosted NVIDIA NIM Microservices aren’t available at the moment. An alternative is to deploy the model using Hugging Face.
We encourage you to use your personal email address when registering for the contest.
Currently, tool calling isn't supported and won't be available for this contest. Here’s an example you can use if you plan to use a NIM endpoint. You're welcome to use other tools as well.
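Because built-in tool calling isn't available, one common workaround is to prompt the model to reply with a small JSON object naming a tool and its arguments, then dispatch to local functions yourself. The sketch below is only a minimal illustration of that pattern; the tool names and JSON schema are invented for the example, not part of any NVIDIA or LangChain API.

```python
import json

# Hypothetical local "tools" the agent can invoke. In a real application
# these would be your own functions (search, lookup, calculator, etc.).
TOOLS = {
    "get_time": lambda args: "12:00",
    "add": lambda args: str(args["a"] + args["b"]),
}


def dispatch(model_reply: str) -> str:
    """Parse a reply like {"tool": "add", "args": {"a": 1, "b": 2}}
    and run the matching local function."""
    call = json.loads(model_reply)
    tool = TOOLS[call["tool"]]
    return tool(call.get("args", {}))
```

In practice you would instruct the model (in the system prompt) to emit exactly this JSON shape, validate the reply before dispatching, and feed the tool's result back into the next model call.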
The goal of this hackathon is to enable developers to create compelling applications using tools and frameworks offered by NVIDIA and LangChain. You can also use other tools popular in the developer community.
You can use our endpoints with deployed models. You can create an API key and get 1,000 calls for free through NVIDIA AI Foundation Endpoints in LangChain.
Yes, you can use multimodal LLMs in your application.
Submissions will be judged based on these parameters:
No, using all the tools isn't a requirement, but using one of the offerings from both NVIDIA and LangChain where applicable is important.
Team entries aren't allowed in the contest.
If you’re using NVIDIA NIM, keep the API key section blank. For any other service, please provide the API keys. A best practice is to create a config file and populate the API keys there.
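As a sketch of that best practice, the loader below reads keys from a JSON config file and lets environment variables override them, so keys never need to be hard-coded in your submission. The file name and key names are illustrative assumptions.

```python
import json
import os
from pathlib import Path


def load_api_keys(path: str = "config.json") -> dict:
    """Read API keys from a JSON config file; environment variables win.

    The key names below are examples: use whichever services your app needs.
    Remember to add the config file to .gitignore so keys stay out of git.
    """
    keys = {}
    p = Path(path)
    if p.exists():
        keys.update(json.loads(p.read_text()))
    # Environment variables override file values for these example names.
    for name in ("NVIDIA_API_KEY", "OPENAI_API_KEY"):
        if name in os.environ:
            keys[name] = os.environ[name]
    return keys
```

Your application code then asks `load_api_keys()` for a key instead of embedding it, and judges can drop their own `config.json` next to your code to run it.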
Details about how to use Hugging Face Inference as a Service powered by NVIDIA NIM can be found in this technical blog: NVIDIA Collaborates With Hugging Face to Simplify Generative AI Model Deployments.
Any open-source license should be good, such as Apache 2.0, BSD, or MIT.