Project G-Assist, An AI Assistant For GeForce RTX AI PCs, Comes To NVIDIA App In February

Six months ago at Computex, NVIDIA showcased Project G-Assist, a tech demo that offered a glimpse of how AI assistants could elevate the PC experience for gamers, creators, and more. Today, we're excited to announce that the initial release of the G-Assist System Assistant feature is coming to GeForce RTX users via the NVIDIA app in February.

As modern PCs become more powerful, they also grow more complex to operate. Users today face over a trillion possible combinations of hardware and software settings when configuring a PC for peak performance — spanning GPU, CPU, monitors, motherboard, peripherals, and more.

NVIDIA built Project G-Assist, an experimental AI assistant that runs locally on GeForce RTX AI PCs, to simplify this experience. G-Assist helps users control a broad range of PC settings: optimizing game and system settings, charting frame rates and other key performance statistics, and adjusting select peripheral settings such as lighting, all via basic voice or text commands.

Project G-Assist System Assistant

Project G-Assist uses a specially tuned Small Language Model (SLM) to efficiently interpret natural language instructions and call a variety of NVIDIA and third-party PC APIs to execute actions on the PC.
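
To make that flow concrete, here is a minimal, purely illustrative sketch of the general function-calling pattern: a language model turns a natural-language request into a structured call, and a dispatcher routes that call to a local API wrapper. The function names, keyword matching, and return values below are hypothetical stand-ins, not G-Assist's actual interfaces.

```python
# Illustrative sketch only (not NVIDIA's implementation): a small language model
# converts a user request into a structured function call, which a dispatcher
# then routes to the matching local PC API wrapper.
import json

# Hypothetical registry of callable PC actions the assistant is allowed to use.
def get_gpu_temperature() -> str:
    return "GPU temperature: 62 C"  # placeholder value for illustration

def optimize_game_settings(game: str) -> str:
    return f"Applied optimized settings for {game}"  # placeholder

FUNCTIONS = {
    "get_gpu_temperature": get_gpu_temperature,
    "optimize_game_settings": optimize_game_settings,
}

def interpret(prompt: str) -> dict:
    """Stand-in for the on-device SLM: map a user request to a structured call.

    A real model would generate this JSON itself; keyword matching fakes it here.
    """
    if "temperature" in prompt.lower():
        return {"name": "get_gpu_temperature", "arguments": {}}
    return {"name": "optimize_game_settings", "arguments": {"game": "the current game"}}

def dispatch(call: dict) -> str:
    """Route the structured call to the matching local API wrapper."""
    func = FUNCTIONS[call["name"]]
    return func(**call["arguments"])

if __name__ == "__main__":
    call = interpret("How hot is my GPU right now?")
    print(json.dumps(call))   # {"name": "get_gpu_temperature", "arguments": {}}
    print(dispatch(call))     # GPU temperature: 62 C
```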

G-Assist can provide real-time diagnostics and recommendations to alleviate system bottlenecks, improve power efficiency, optimize game settings, overclock your GPU, and much more.

It can chart and export performance metrics such as FPS, latency, GPU utilization, and temperatures.

It can answer questions about your PC, or about the NVIDIA software installed on your GeForce RTX system.

G-Assist can even control select peripherals and software applications with simple commands, enabling users to run benchmarks, adjust fan speeds, or change lighting on supported Logitech G, Corsair, MSI, and Nanoleaf devices.

G-Assist will be available via the NVIDIA app overlay, and users will be able to type commands into the chat interface or hold a push-to-talk hotkey to speak to G-Assist.

On-Device AI

Unlike massive cloud-hosted AI models that require online access and paid subscriptions, G-Assist runs on your GeForce RTX GPU. This means it is responsive, free to use, and can run offline.

Under the hood, G-Assist uses a Llama-based Instruct model with 3 billion parameters, packing language understanding into well under 1% of the size of today's large-scale AI models. This allows G-Assist to run on a wide array of RTX hardware with good performance. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.

When G-Assist is prompted for help—say, to optimize graphics settings or check GPU temperatures—the RTX GPU briefly allocates a portion of its horsepower to AI inference. During those few seconds, a short dip in render rate may occur if you’re gaming or running another GPU-heavy application. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app.

Powering Assistants For Partners & Community Tinkerers

G-Assist is built on NVIDIA ACE—the same AI tech suite game developers use to breathe life into NPCs. OEMs and ISVs are already leveraging ACE technology to create custom AI Assistants like G-Assist.

MSI unveiled the “AI Robot” engine at CES, designed to power AI Assistants built into MSI Center and MSI Afterburner. Logitech is using ACE to develop the Streamlabs Intelligent AI Assistant, complete with an interactive avatar that can chat with the streamer, comment on gameplay, and more. HP is also working on leveraging ACE for AI assistant capabilities in Omen Gaming Hub.

Beyond industry partners, we’re opening this framework to the broader AI community. Tools like CrewAI, Flowise, and LangFlow will be able to leverage this service, enabling enthusiasts and developers to integrate function-calling capabilities into low-code, customizable language-processing workflows, AI applications, and agentic flows.
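
As a rough illustration of how such an integration might look, the sketch below wraps a locally hosted assistant service as a plain Python function that a workflow tool could register as a tool in a larger agentic flow. The endpoint URL and request/response shapes are assumptions made for the example; NVIDIA has not published this API.

```python
# Illustrative only: wrapping a locally hosted assistant service as a callable
# "tool" for a low-code or agentic workflow. The endpoint URL and payload
# shapes below are hypothetical, not a published G-Assist interface.
import requests

ASSISTANT_URL = "http://localhost:5000/query"  # hypothetical local endpoint

def ask_assistant(prompt: str) -> str:
    """Send a natural-language request to the local assistant and return its reply."""
    response = requests.post(ASSISTANT_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    return response.json().get("reply", "")

# A framework such as CrewAI, Flowise, or LangFlow could register
# `ask_assistant` as one tool among many in a larger workflow.
if __name__ == "__main__":
    print(ask_assistant("Chart my GPU utilization for the last 60 seconds"))
```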

G-Assist itself is designed to be extensible by the community. NVIDIA will publish a GitHub repository with samples for creating “plugins” that teach G-Assist additional functionality. Tinkerers will be able to define functions in straightforward JSON formats, then submit them to NVIDIA for review and potential inclusion, making these new capabilities available to others. Before merging, plugins can be tested locally by placing config files in a designated directory, allowing G-Assist to load and interpret them.
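
As a purely hypothetical example of what that workflow could look like, the sketch below writes a JSON function definition into a local plugins directory for testing. The schema and directory path are assumptions made for illustration; the actual plugin format and load location will be documented in the GitHub repository at launch.

```python
# Hypothetical example: define a plugin function in JSON and place it in a
# local plugins directory for testing. The schema and path are illustrative
# assumptions, not NVIDIA's published format.
import json
from pathlib import Path

plugin_definition = {
    "name": "set_keyboard_lighting",
    "description": "Set the RGB lighting color on a supported keyboard.",
    "parameters": {
        "color": {"type": "string", "description": "Color name, e.g. 'red' or 'teal'."}
    },
}

# Hypothetical local plugins directory that the assistant would scan on startup.
plugins_dir = Path.home() / "g-assist-plugins"
plugins_dir.mkdir(parents=True, exist_ok=True)
(plugins_dir / "set_keyboard_lighting.json").write_text(json.dumps(plugin_definition, indent=2))
print(f"Wrote plugin definition to {plugins_dir}")
```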

Details on how to build, share, and load plugins will be available in the documentation in the GitHub repo when G-Assist launches. We can’t wait to see what the community dreams up!

Project G-Assist: Coming To NVIDIA App In February

In 2013 we launched GeForce Experience, featuring Optimal Playable Settings that automatically configured a game’s options for smooth gameplay with a single click. Tens of millions of gamers have clicked “Optimize” over the years.

Now, AI has unlocked new ways to intelligently interact with and optimize RTX AI PCs. When released in February, Project G-Assist will appear on the NVIDIA app’s Home tab as a new entry in the Discover section for GeForce RTX owners to download and install.

While you wait for G-Assist’s release, check out all of our other GeForce RTX 50 Series announcements to see how we’re further improving experiences in games and apps, and delivering new innovations that advance the PC industry.