Nearly all games today render new frames “from scratch”, meaning they reuse no calculations from previous frames (with the notable exception of temporal anti-aliasing). But in most games - as in the real world - relatively little changes from frame to frame. If you look outside your window, you might see trees blowing in the wind, pedestrians passing by, or birds flying in the distance, but the majority of the scene is “static”, or unchanged. The main thing that changes is your point of view.
Now, some objects will indeed change appearance as you change your point of view - notably those which are glossy or shiny. But most objects change appearance very little as you move your head, so it’s a waste of precious GPU cycles to keep recalculating the exact same colors for those objects every frame. It would be far more efficient to shade those objects at a lower rate (say, every third frame, or even less often) and reuse the object’s colors (stored as “texels”) computed on earlier frames. This notion of work reuse becomes particularly important in ray tracing, and especially for global illumination, a common example of a slow-changing yet very expensive shading computation.
This technique is referred to as texture-space shading: the calculations are not performed frame-to-frame in screen space (i.e. from the point of view of the gamer), but rather at a different shading rate in texture space (essentially from the point of view of the object itself). Why texture space? Because virtually all objects in today’s games have textures, and those textures are independent of the gamer’s point of view, making them a perfect place to store shaded objects’ colors and “carry” them from frame to frame.
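To make the idea concrete, here is a minimal sketch in CUDA-style device code (with assumed names and layout, not any particular engine’s API) of how a shaded point’s UV coordinates can be turned into a stable texel index within the object’s texture. The key property is that the index depends only on the object’s parameterization, not on the camera:

```cuda
// Minimal sketch (assumed layout): convert a surface point's UV coordinates
// into a flat texel index within that object's shading texture. The result
// depends only on the object's UV parameterization, never on the viewpoint,
// which is what allows the shaded value to be reused across frames.
__device__ unsigned texelIndexFromUV(float u, float v,          // assumed in [0, 1)
                                     unsigned texWidth, unsigned texHeight)
{
    // Clamp to the texture bounds; a real engine would also select a mip
    // level based on the pixel's footprint in texture space.
    unsigned x = min((unsigned)(u * texWidth),  texWidth  - 1);
    unsigned y = min((unsigned)(v * texHeight), texHeight - 1);
    return y * texWidth + x;
}
```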
The same technique can be applied effectively to VR: because our eyes are fairly close together, the vast majority of objects seen by one eye are also seen by the other. The main difference is not the shading of those objects (e.g. your left eye sees the pencil on your desk as the same yellow that your right eye does), but their apparent position. As such, with texture-space shading you can “borrow” the shading calculations from one eye and reuse them for the other, essentially halving your shading workload. And if your game’s performance is limited by pixel shading, texture-space shading could theoretically double your framerate.
With texture-space shading, a game engine does not shade all rasterized pixels immediately. Instead, it first identifies which texels are referenced by the pixels it rasterized - an operation very similar to what the texture unit does when it locates the texels needed for a given texturing operation. This set of texels is then queued up for shading, which happens at a later point in time. During this process the same texel may be referenced by multiple pixels, and for efficiency we of course don’t want to shade texels redundantly. The game engine therefore has to perform what is referred to as deduplication of shade requests: isolating unique texel references and ensuring that each texel is shaded only once.
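As an illustration, here is a simple (and deliberately naive) deduplication sketch in CUDA: every rasterized pixel has produced a flat texel index, a per-texel flag array records which texels have already been requested this frame, and only the first thread to claim a texel appends it to a compact shade queue. All names here are illustrative, not an engine or driver API:

```cuda
// Sketch of shade-request deduplication (assumed data layout). Exactly one
// thread "wins" each texel and enqueues it; every other reference to the same
// texel is recognized as a duplicate and dropped.
__global__ void dedupShadeRequests(const unsigned* requestedTexels, // one per pixel
                                   int numRequests,
                                   unsigned* texelClaimed,          // one flag per texel, zeroed each frame
                                   unsigned* shadeQueue,            // output: unique texels to shade
                                   unsigned* shadeQueueCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRequests) return;

    unsigned texel = requestedTexels[i];

    // atomicExch returns the previous flag value: only the first thread to
    // touch this texel sees 0 and therefore appends it to the queue.
    if (atomicExch(&texelClaimed[texel], 1u) == 0u) {
        unsigned slot = atomicAdd(shadeQueueCount, 1u);
        shadeQueue[slot] = texel;
    }
}
```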
Once the set of unique texel references is identified, the game shades those texels, storing the results in the corresponding textures for later reuse. This is analogous to pixel shading, except that what gets shaded is not pixels on the screen but texels within a texture.
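A corresponding texel-shading pass might look like the following sketch (again with assumed names): one thread per queued texel evaluates the material and writes the result into the object’s texture through a surface, so later frames can simply sample it:

```cuda
// Placeholder for the expensive shading work (lighting, global illumination, ...);
// a real engine would run its full material and lighting evaluation here.
__device__ float4 evaluateMaterial(unsigned x, unsigned y)
{
    return make_float4(0.5f, 0.5f, 0.5f, 1.0f);
}

// Sketch of the texel-shading pass: one thread per queued texel, with the
// result written into the shading texture via a CUDA surface for later reuse.
__global__ void shadeTexels(const unsigned* shadeQueue,
                            unsigned shadeQueueCount,
                            unsigned texWidth,
                            cudaSurfaceObject_t shadedTexture)
{
    unsigned i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= shadeQueueCount) return;

    unsigned texel = shadeQueue[i];
    unsigned x = texel % texWidth;
    unsigned y = texel / texWidth;

    float4 color = evaluateMaterial(x, y);

    // The x coordinate is given in bytes for surface writes.
    surf2Dwrite(color, shadedTexture, x * sizeof(float4), y);
}
```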
Finally, the computed texels can be used to calculate the corresponding pixels’ colors - in exactly the same way that static textures are used today. This step is extremely cheap, since it merely performs a single texturing operation per pixel.
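The final per-pixel step can then be as simple as the sketch below: each screen pixel performs one texture fetch from the previously shaded texture at its UV coordinates, exactly as it would sample a static texture (the buffer layout and names are assumptions for illustration):

```cuda
// Sketch of the final resolve: a single filtered texture fetch per pixel from
// the already-shaded texture. Assumes a float4 texture object with normalized
// coordinates; all the expensive work happened in the texel-shading pass.
__global__ void resolveToScreen(cudaTextureObject_t shadedTexture,
                                const float2* pixelUVs,   // UV per rasterized pixel
                                float4* frameBuffer,
                                int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float2 uv = pixelUVs[i];
    frameBuffer[i] = tex2D<float4>(shadedTexture, uv.x, uv.y);
}
```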
Finding the set of visible texels and isolating the unique ones is computationally expensive, which is why applications couldn’t afford texture-space shading in the past. The Turing architecture addresses this problem by introducing hardware acceleration for this key step: Turing’s texture unit can now provide texel address information directly to the shader, and new data-parallel intrinsics make the deduplication step very efficient.
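The exact intrinsics games use are exposed through the graphics APIs, but CUDA’s warp match intrinsic (__match_any_sync, available on Volta/Turing and later) illustrates the kind of data-parallel operation involved: all lanes in a warp requesting the same texel are found in a single instruction, and only one of them needs to issue a shade request. Duplicates across warps can still be filtered with a per-texel flag, as in the earlier sketch:

```cuda
// Warp-level deduplication sketch using the warp match intrinsic.
// Every thread calls this with its requested texel; only one lane per group
// of duplicates appends to the shade queue.
__device__ void enqueueUniqueTexel(unsigned texel,
                                   unsigned* shadeQueue,
                                   unsigned* shadeQueueCount)
{
    unsigned active = __activemask();

    // One instruction finds every active lane in the warp requesting the same
    // texel value as this lane, instead of comparing lanes one by one.
    unsigned peers = __match_any_sync(active, texel);

    // Only the lowest-numbered lane of each group of duplicates enqueues.
    unsigned lane = threadIdx.x & 31u;   // lane index, assuming a 1-D thread block
    if (lane == __ffs(peers) - 1u) {
        unsigned slot = atomicAdd(shadeQueueCount, 1u);
        shadeQueue[slot] = texel;
    }
}
```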
Though texture-space shading might seem like a straightforward and logical thing to do, there are some “gotchas” that have kept the technique from becoming mainstream.
First of all, modern games reuse a lot of objects, including their textures. If you see a forest in a game, it’s highly unlikely that each leaf has its own texture; most likely, the leaves are all drawn from a small set of leaf models and textures. Such shared assets won’t work for texture-space shading as-is, because every visible object must have its own texture for shading results - no sharing allowed! Imagine if you didn’t enforce this - if all the leaves on a tree continued to share the same leaf texture - and you then used your hand to put one of them into shadow: every leaf on the tree would simultaneously go into shadow as well. So this is the first thing developers have to deal with: ensuring that every object has its own texture that can be shaded independently of all other objects.
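One way to satisfy this requirement - sketched below with hypothetical names - is to keep sharing the base material textures (albedo, normals, and so on) while giving every object instance its own tile in a large “shading atlas” that holds the shaded results:

```cuda
// Hypothetical host-side sketch: each object instance is assigned a unique
// tile in a shading atlas, so its shaded texels never collide with those of
// other instances, even when the instances share the same source textures.
struct ShadingAtlasAllocator {
    unsigned tileSize;       // e.g. 64x64 texels of shading storage per instance
    unsigned tilesPerRow;    // atlas width divided by tileSize
    unsigned nextTile = 0;   // simple linear allocator; a real engine would track lifetimes

    // Returns the top-left texel of the tile reserved for this instance.
    uint2 allocateTile() {
        unsigned tile = nextTile++;
        return make_uint2((tile % tilesPerRow) * tileSize,
                          (tile / tilesPerRow) * tileSize);
    }
};
```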
Texture-space shading will therefore require fairly significant rethinking of how game engines are built. But once that work is done, it opens the door to entirely new ways of rendering - combining the benefits of both ray tracing and rasterization to generate photo-realistic images at high framerates, without compromise. The Turing architecture’s hardware support for texture-space shading makes such hybrid approaches finally practical.