A deep neural network is trained on tens of thousands of beautiful, high-resolution images, rendered offline on a supercomputer at very low frame rates with 64 samples per pixel. Drawing on knowledge from countless hours of training, the network can then take lower-resolution images from multiple frames as input and construct high-resolution, beautiful images.
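The training idea described above can be sketched in miniature: fit a reconstruction parameter on (low-resolution, high-resolution) pairs, then use it to upscale unseen low-resolution input. The 1-D signals, the single learned weight, and the learning rate here are hypothetical stand-ins for illustration only, not DLSS internals.

```python
# Toy supervised super-resolution: learn how to reconstruct the samples
# that the low-res input is missing, using high-res renders as targets.

def downsample(hr):
    """Low-res input: keep every other sample of the high-res target."""
    return hr[::2]

def upsample(lr, w):
    """Reconstruct each missing sample as a learned blend of its neighbours."""
    out = []
    for i in range(len(lr) - 1):
        out.append(lr[i])
        out.append(w * lr[i] + (1 - w) * lr[i + 1])
    out.append(lr[-1])
    return out

# Offline "ground truth", standing in for a high-quality 64-spp render.
hr = [float(i) for i in range(9)]
lr = downsample(hr)

# Gradient descent on the squared reconstruction error of the missing samples.
w, eta = 0.0, 0.01
for _ in range(200):
    grad = 0.0
    for i in range(len(lr) - 1):
        pred = w * lr[i] + (1 - w) * lr[i + 1]
        grad += 2.0 * (pred - hr[2 * i + 1]) * (lr[i] - lr[i + 1])
    w -= eta * grad

# For a smooth signal, training converges toward plain interpolation (w near 0.5);
# a real network learns far richer, non-linear reconstruction rules the same way.
```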
Turing’s Tensor Cores, with up to 110 teraflops of dedicated AI horsepower, make it possible for the first time to run a deep learning network on games in real time. The result is a big performance gain and sharp image quality, while minimising ringing and temporal artifacts such as sparkling.
DLSS avoids the typical tradeoff gamers are forced to make between image quality and performance. With traditional resolution-scaling algorithms, fewer pixels are rendered to achieve faster frame rates, resulting in pixelated or blurry images. Adding sharpening algorithms can help, but these often miss fine details and introduce temporal instability or noise. DLSS delivers faster performance and near-native image quality by using AI to reconstruct the details of a higher-resolution image and stabilise it across frames.
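The multi-frame reconstruction intuition can be shown with a minimal sketch: if each low-resolution frame samples the scene at a slightly different (jittered) offset, accumulating a few frames recovers detail that no single frame contains. This is a hand-rolled illustration of the general principle only; a 1-D "image" keeps it short, and the scene values are made up. Real reconstruction is 2-D, motion-compensated, and learned by the network rather than hard-coded.

```python
# Jittered low-res frames scattered back onto a high-res grid.

def render_low_res(scene, jitter, factor=2):
    """Render every `factor`-th sample, shifted by this frame's jitter."""
    return scene[jitter::factor]

def accumulate(frames, jitters, length, factor=2):
    """Scatter each frame's samples back to their high-res positions."""
    out = [0.0] * length
    for frame, j in zip(frames, jitters):
        for i, v in enumerate(frame):
            out[i * factor + j] = v
    return out

scene = [0.0, 3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]  # hypothetical high-res signal
f0 = render_low_res(scene, 0)  # this frame sees only the even samples
f1 = render_low_res(scene, 1)  # the next frame sees only the odd samples
reconstructed = accumulate([f0, f1], [0, 1], len(scene))
```

Each half-resolution frame costs half the pixels of a native render, yet together the jittered frames reproduce the full-resolution signal exactly; with motion and shading changes this exactness breaks down, which is where the learned network earns its keep.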