On GeForce graphics cards, TGP (Total Graphics Power) is the power limit for GPU Boost, our technology that maximizes GPU performance based on available power, the temperature of the card, and other factors.
In high-power applications, like games, the GPU may hit the TGP limit, and the GPU Boost clock will be optimized within the power and thermal limits. However, when the GPU is bottlenecked by the CPU, or is running light workloads, its power consumption may be far below the TGP.
In these cases, GPU Boost clocks can still reach the GPU's maximum frequency, and the GPU's efficiency is maximized.
Under most operating conditions, including many gaming workloads, this allows our GeForce RTX 40 Series graphics cards to consume significantly less power than TGP.
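If you'd like to see this headroom on your own card, one option is to compare the live power draw against the enforced power limit via NVML. Below is a minimal sketch using the pynvml Python bindings; the device index (0) is an assumption, and NVML reports power in milliwatts.

```python
# Minimal sketch: compare live power draw to the enforced power limit.
# Assumes the pynvml package is installed and the card is device index 0.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older pynvml versions return bytes
    name = name.decode()

draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0           # NVML reports milliwatts
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0  # effective cap (TGP by default)

print(f"{name}: drawing {draw_w:.0f} W of a {limit_w:.0f} W limit "
      f"({100 * draw_w / limit_w:.0f}% of TGP)")

pynvml.nvmlShutdown()
```

Running this while a game is active shows how close the card is to its cap; at lower resolutions you should see it sit well under the limit.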
Take a look at the table below, which shows the GeForce RTX 4080's average power consumption running five demanding games at various resolutions, with ray tracing and DLSS enabled where available:
| Game | 1080p | 1440p | 4K |
| --- | --- | --- | --- |
| Control | 212 W | 288 W | 297 W |
| Cyberpunk 2077 | 224 W | 275 W | 287 W |
| Forza Horizon 5 | 172 W | 197 W | 238 W |
| Guardians of the Galaxy | 117 W | 233 W | 266 W |
| Metro Exodus | 205 W | 262 W | 295 W |
As shown, the average power consumption of the GeForce RTX 4080 never hits 320 W, the card's TGP, even at 4K.
At 1080p and 1440p, the GeForce RTX 4080 consumes significantly less power because Ada is much more power efficient. Our GPU Boost algorithms still increase clocks until they hit a limit, but on previous-generation GeForce RTX 30 Series GPUs, based on the Ampere architecture, that limit was typically the power limit. Since the Ada Lovelace architecture requires less power overall, we hit other limits first, such as maximum clocks or voltage, and we're able to reduce overall power levels to conserve energy.
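One way to see which limit GPU Boost is running into at any given moment is to read the current and maximum graphics clocks, along with the active throttle reasons, through NVML. The sketch below again uses the pynvml bindings with device index 0 assumed; the throttle-reason flags are NVML's standard constants.

```python
# Minimal sketch: check whether the power cap or another limit is
# currently constraining GPU Boost. Assumes pynvml and device index 0.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

cur_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
max_mhz = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)  # bitmask

print(f"Graphics clock: {cur_mhz} / {max_mhz} MHz")
if reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap:
    print("Active limit: software power cap (TGP)")
elif reasons & pynvml.nvmlClocksThrottleReasonHwSlowdown:
    print("Active limit: hardware slowdown (thermal or power brake)")
else:
    print("Not power-limited; another limit (e.g. max clock or voltage) applies")

pynvml.nvmlShutdown()
```

On a power-limited workload the first branch fires; in the CPU-bottlenecked or light-workload cases described above, you would instead expect clocks at or near the maximum with no power-cap flag set.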