Since the creation of the first graphics cards decades ago, their power consumption has been measured by their listed maximum draw, referred to by NVIDIA as the Total Graphics Power (TGP). For buyers, this has been an important statistic from which to infer efficiency: a card draws x watts of power while running games at y frames per second (FPS).
But TGP, the peak power draw, is only one part of the story when measuring power. For a more complete view of power consumption, more measurements are needed.
For example, measuring power usage across 22 games at 4K, 1440p and 1080p, and video playback power usage when using the AV1 codec, here’s how this week’s new GeForce RTX 4080 measures up against our fastest previous-generation graphics cards:
Metric | RTX 3080 | RTX 3090 | RTX 3090 Ti | RTX 4080 |
--- | --- | --- | --- | --- |
Idle (W) | 12 | 13 | 16 | 13 |
Video Playback (W) | 27 | 33 | 26 | 21 |
Average Gaming (W) | 320 | 345 | 398 | 251 |
TGP (W) | 320 | 350 | 450 | 320 |
In gaming, the GeForce RTX 4080 draws only 251W on average, and it barely registers during other day-to-day tasks such as video playback or sitting idle. Running at just 78% of its rated TGP, the GeForce RTX 4080 outperforms all of our previous-generation graphics cards.
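As a quick sanity check, the averages above can be compared with each card's rated TGP to see what fraction of the power budget is actually used while gaming. A minimal sketch using the figures from the table:

```python
# Average gaming power as a fraction of rated TGP,
# using the figures from the table above.
cards = {
    "RTX 3080":    {"avg_gaming_w": 320, "tgp_w": 320},
    "RTX 3090":    {"avg_gaming_w": 345, "tgp_w": 350},
    "RTX 3090 Ti": {"avg_gaming_w": 398, "tgp_w": 450},
    "RTX 4080":    {"avg_gaming_w": 251, "tgp_w": 320},
}

for name, c in cards.items():
    pct = 100 * c["avg_gaming_w"] / c["tgp_w"]
    print(f"{name}: {pct:.0f}% of TGP")  # RTX 4080 -> 78% of TGP
```

The RTX 4080 is the only card in the list whose gaming average sits well below its power cap; the RTX 3090 Ti, by contrast, averages 88% of its 450W TGP.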
On GeForce graphics cards, TGP is the power cap limit for GPU Boost, our technology that maximizes GPU performance based on available power, the temperature of the card, and other factors.
For high power apps, like games, the GPU may hit the TGP power cap limit, and the GPU Boost clock will be optimized within the power and thermal limits. However, in cases where the GPU is bottlenecked by the CPU, or the GPU is running light workloads, the GPU’s power consumption may be far less than the TGP.
Even in these cases, GPU Boost may still run the GPU at its maximum clock frequency, so the GPU's efficiency is maximized.
Under most operating conditions, including many gaming workloads, this allows our GeForce RTX 40 Series graphics cards to consume significantly less power than TGP.
Take a look at the table below that shows the GeForce RTX 4080’s power consumption running five demanding games at various resolutions, with ray tracing and DLSS enabled where available:
Game | 1080p | 1440p | 4K |
--- | --- | --- | --- |
Control | 212 W | 288 W | 297 W |
Cyberpunk 2077 | 224 W | 275 W | 287 W |
Forza Horizon 5 | 172 W | 197 W | 238 W |
Guardians of the Galaxy | 117 W | 233 W | 266 W |
Metro Exodus | 205 W | 262 W | 295 W |
As shown, the average power consumption of the GeForce RTX 4080 never hits 320 Watts, the card’s TGP, even at 4K.
At 1080p and 1440p, the GeForce RTX 4080 consumes significantly less power because Ada is much more power efficient. Our GPU Boost algorithms still increase clocks until they hit a limit, but on previous-generation GeForce RTX 30 Series Ampere architecture GPUs that limit was typically the power limit. Since the Ada Lovelace architecture requires less power overall, other limits, such as the maximum clock or voltage, are hit first, and overall power levels drop to conserve energy.
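The idea of "whichever limit binds first" can be illustrated schematically. The sketch below is a hypothetical model, not NVIDIA's actual GPU Boost implementation: each limit (power, thermal, architectural max) implies a clock ceiling, and the achieved boost clock is the lowest of those ceilings.

```python
def boost_clock_mhz(power_limit_clock, thermal_limit_clock, max_rated_clock):
    """Schematic boost model: the achieved clock is capped by whichever
    limit (power, thermal, or architectural max) imposes the lowest ceiling."""
    limits = {
        "power": power_limit_clock,
        "thermal": thermal_limit_clock,
        "max clock": max_rated_clock,
    }
    binding = min(limits, key=limits.get)
    return limits[binding], binding

# On a power-hungry design, the power cap binds first:
print(boost_clock_mhz(1900, 2000, 2100))  # -> (1900, 'power')
# On a more efficient design, the architectural max clock binds instead,
# so the card runs flat out while staying below its power cap:
print(boost_clock_mhz(2900, 2800, 2600))  # -> (2600, 'max clock')
```

The clock figures here are invented for illustration; the point is only that when the power ceiling is no longer the lowest one, the GPU reaches its top clock without ever drawing its full TGP.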
Graphics cards are used in a wide variety of applications, and in games at various resolutions and detail levels. While it’s impossible to test all of the combinations, it’s important to sample more than one or two data points to evaluate performance, power consumption and overall efficiency.
NVIDIA has provided platform-agnostic tools to the media to accurately measure power consumption on all graphics cards, so if you're searching for the most efficient high-performance GPU, be sure to seek out reviews that report real-world efficiency.
If you already have a graphics card and want to measure its efficiency, you can use our free FrameView application, which can simultaneously record performance, power usage, and other metrics. Download FrameView and check out the linked guides for setup steps and recommendations.
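Once you have a capture, turning it into an efficiency figure is just averaging. The sketch below assumes a simplified CSV log with made-up column names (`fps`, `gpu_power_w`); a real FrameView log uses its own field names, so adapt the parsing to your actual file.

```python
import csv
import io

# Hypothetical log excerpt; substitute your real capture file and its
# actual column names when adapting this.
log = io.StringIO("""\
fps,gpu_power_w
118,243
121,249
117,240
124,255
""")

rows = list(csv.DictReader(log))
avg_fps = sum(float(r["fps"]) for r in rows) / len(rows)
avg_w = sum(float(r["gpu_power_w"]) for r in rows) / len(rows)
print(f"avg FPS: {avg_fps:.1f}, avg power: {avg_w:.1f} W, "
      f"efficiency: {avg_fps / avg_w:.2f} FPS/W")
```

Frames per second per watt is one simple efficiency metric; averaging over a long, representative capture (rather than a brief peak) is what makes the number meaningful.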