The history of graphics cards has been deeply linked to the evolution of shaders. NVIDIA has been one of the great drivers of advances in shaders over the last two decades, and with the GeForce RTX 50 it has once again marked a turning point thanks to the use of neural shaders.
These new shaders open the doors to neural rendering, but before delving into what it is and why it makes such a big difference, it is worth reviewing how we got here.
The first big change came in 2001 with the GeForce 3, which introduced programmable shaders, a technology that made possible games as advanced for their time as DOOM 3. In 2003, DirectX 9 arrived, and with it the high-level shader language (HLSL). We had to wait until 2006 to see the debut of unified shaders and the arrival of DirectX 10, which opened the doors to geometry shaders.
In 2009, DirectX 11 was released, bringing compute shaders. Another great advance came more recently with low-level programmable shaders, which debuted in 2015 with DirectX 12. The last major renewal occurred in 2018 with DirectX Raytracing (DXR), which also relies on shaders to complete the workload it represents.
Neural rendering comes to the GeForce RTX 50
With the GeForce RTX 50 NVIDIA has introduced neural shaders. These represent the next step in the evolution of shaders, and they mark a very important change, because they make it possible to run small neural networks inside shaders, giving access to advanced functions that a few years ago would have seemed impossible (a small code sketch follows the list below):
- Neural textures.
- Neural materials.
- Neural volumes.
- Neural radiance fields.
- Neural radiance cache.
- Neural faces.
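To make the idea concrete, here is a minimal sketch of what "a small neural network inside a shader" means, written as a plain CUDA kernel rather than a real neural shader: a tiny two-layer MLP decodes a per-texel latent code into a color, the same pattern a neural texture or neural material would follow. All sizes, names and weights here are illustrative, not NVIDIA's.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative sizes: an 8-float latent code decoded to RGB by a tiny MLP.
#define LATENT 8
#define HIDDEN 16

// MLP weights in constant memory. In a real renderer they would come from
// training and ship alongside the asset, replacing the usual texture data.
__constant__ float W1[HIDDEN][LATENT];
__constant__ float b1[HIDDEN];
__constant__ float W2[3][HIDDEN];
__constant__ float b2[3];

// "Neural texture" decode: each thread turns one texel's latent code into RGB.
__global__ void decode_neural_texture(const float* latents, float* rgb, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const float* z = latents + i * LATENT;

    float h[HIDDEN];
    for (int j = 0; j < HIDDEN; ++j) {            // layer 1: latent -> hidden, ReLU
        float acc = b1[j];
        for (int k = 0; k < LATENT; ++k) acc += W1[j][k] * z[k];
        h[j] = fmaxf(acc, 0.0f);
    }
    for (int c = 0; c < 3; ++c) {                 // layer 2: hidden -> RGB
        float acc = b2[c];
        for (int j = 0; j < HIDDEN; ++j) acc += W2[c][j] * h[j];
        rgb[i * 3 + c] = acc;
    }
}

int main()
{
    // Dummy weights and latent codes, just to make the sketch runnable.
    float hW1[HIDDEN][LATENT], hW2[3][HIDDEN];
    float hb1[HIDDEN] = {}, hb2[3] = {};
    for (int j = 0; j < HIDDEN; ++j)
        for (int k = 0; k < LATENT; ++k) hW1[j][k] = 0.01f * (j + k);
    for (int c = 0; c < 3; ++c)
        for (int j = 0; j < HIDDEN; ++j) hW2[c][j] = 0.02f * (c + j);
    cudaMemcpyToSymbol(W1, hW1, sizeof(hW1));
    cudaMemcpyToSymbol(b1, hb1, sizeof(hb1));
    cudaMemcpyToSymbol(W2, hW2, sizeof(hW2));
    cudaMemcpyToSymbol(b2, hb2, sizeof(hb2));

    const int n = 4;
    float hz[n * LATENT];
    for (int i = 0; i < n * LATENT; ++i) hz[i] = 0.1f * (i % LATENT);
    float *dz, *drgb;
    cudaMalloc(&dz, sizeof(hz));
    cudaMalloc(&drgb, n * 3 * sizeof(float));
    cudaMemcpy(dz, hz, sizeof(hz), cudaMemcpyHostToDevice);

    decode_neural_texture<<<1, 64>>>(dz, drgb, n);

    float out[n * 3];
    cudaMemcpy(out, drgb, sizeof(out), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("texel %d -> %.3f %.3f %.3f\n", i, out[i*3], out[i*3+1], out[i*3+2]);
    return 0;
}
```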
For all this to work, the neural shaders need to rely on the tensor cores, something that until now was impossible because no graphics API could access them. This will change with the introduction of Cooperative Vectors in DirectX, which works together with a shading language known as Slang so that developers can introduce neural rendering techniques into their workflows in a simple and effective way.
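On the compute side this kind of access has existed for years: CUDA's wmma API is the closest public analogy to what Cooperative Vectors will let a shader request, a small matrix multiply executed directly on the tensor cores. A minimal sketch of that (ours, with illustrative 16x16 tiles; this is not the Cooperative Vectors API itself):

```cuda
// Compile with: nvcc -arch=sm_70 (or newer)
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <mma.h>
#include <cstdio>
using namespace nvcuda;

// One warp multiplies one 16x16 tile on the tensor cores: D = A * B.
__global__ void tile_matmul(const half* A, const half* B, float* D)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(a, A, 16);          // leading dimension 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(acc, a, b, acc);            // executes on the tensor cores
    wmma::store_matrix_sync(D, acc, 16, wmma::mem_row_major);
}

int main()
{
    half hA[256], hB[256];
    for (int i = 0; i < 256; ++i) {
        hA[i] = __float2half(0.01f * i);
        hB[i] = __float2half((i % 16) == (i / 16) ? 1.0f : 0.0f);  // B = identity
    }
    half *dA, *dB; float *dD;
    cudaMalloc(&dA, sizeof(hA)); cudaMalloc(&dB, sizeof(hB));
    cudaMalloc(&dD, 256 * sizeof(float));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    tile_matmul<<<1, 32>>>(dA, dB, dD);        // exactly one warp

    float out[256];
    cudaMemcpy(out, dD, sizeof(out), cudaMemcpyDeviceToHost);
    printf("D row 0: %.2f %.2f %.2f ... (should equal A row 0)\n",
           out[0], out[1], out[2]);
    return 0;
}
```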
These neural rendering techniques can replace parts of the workload typically performed in the traditional graphics pipeline, improving both performance and image quality. NVIDIA gave many examples during the demonstration we saw at CES, and the truth is that the results were spectacular: not only did they noticeably improve graphics quality, they also reduced memory consumption by up to three times.
With these neural rendering techniques it is possible to achieve a much more realistic finish in textures, materials, volumes and lighting radiance, something we could also see in several scenes of Half-Life 2 RTX.
In this title, the neural radiance cache stood out in particular. This technology traces just one or two rays and stores the result in a dedicated radiance cache; from there, an inference process approximates the contribution of a large number of additional rays and bounces, improving the quality and precision of indirect lighting in a given scene. The result is better image quality and better performance.
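A heavily simplified outline of that loop (ours, not NVIDIA's implementation): trace a short path, store what it gathered in a cache, and let a small network infer the remaining bounces instead of tracing them. Here trace_short_path(), cache_key() and infer_indirect() are all toy stand-ins; the real technique trains the cache online and uses a small MLP for inference.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

#define CACHE_SIZE 1024

__device__ float cache_radiance[CACHE_SIZE];   // the "radiance cache"

// Toy spatial hash; a real cache keys on position, normal and direction.
__device__ int cache_key(float x, float y, float z)
{
    int h = (int)(x * 73856093.0f) ^ (int)(y * 19349663.0f)
          ^ (int)(z * 83492791.0f);
    return (h & 0x7fffffff) % CACHE_SIZE;
}

// Stand-in for tracing 1-2 real rays: returns the radiance gathered so far
// and the point where the path was cut short.
__device__ float trace_short_path(int pixel, float* x, float* y, float* z)
{
    *x = pixel * 0.1f; *y = 0.5f; *z = 1.0f;   // fake hit point
    return 0.25f;                               // fake gathered radiance
}

// Stand-in for the small network that infers the remaining bounces.
__device__ float infer_indirect(float cached)
{
    return 0.9f * cached;                       // toy "inference"
}

__global__ void shade_with_radiance_cache(float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float x, y, z;
    float direct = trace_short_path(i, &x, &y, &z);       // the 1-2 traced rays
    int key = cache_key(x, y, z);

    cache_radiance[key] = direct;                         // store in the cache
    float indirect = infer_indirect(cache_radiance[key]); // infer the rest

    out[i] = direct + indirect;   // full lighting without many extra bounces
}

int main()
{
    const int n = 8;
    float* dout;
    cudaMalloc(&dout, n * sizeof(float));
    shade_with_radiance_cache<<<1, 32>>>(dout, n);
    float out[n];
    cudaMemcpy(out, dout, sizeof(out), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("pixel %d -> %.3f\n", i, out[i]);
    return 0;
}
```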
The new shaders used by the Blackwell architecture are supported by an improved version of SER (Shader Execution Reordering), and the architecture can execute FP32 and INT32 operations simultaneously. Its tensor cores are twice as powerful and are compatible with INT4 and FP4. These data types reduce precision, but in return they greatly improve performance.
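The precision-for-performance trade is easy to see with a small, self-contained illustration (ours): storing weights as 16-level INT4 values plus a scale quarters the memory of FP16, at the cost of rounding error. Real INT4/FP4 kernels run the math itself on the tensor cores; this only shows the storage effect.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cmath>

// Symmetric INT4 quantization: 16 levels in [-8, 7] times a per-tensor scale.
// Hardware packs two values per byte; we keep one per int8 for clarity.
__global__ void dequant_int4(const signed char* q, float scale, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = q[i] * scale;
}

int main()
{
    const int n = 8;
    float w[n] = {0.91f, -0.37f, 0.05f, 0.66f, -0.99f, 0.12f, -0.58f, 0.33f};

    // Pick a scale so the largest magnitude maps to the edge of the INT4 range.
    float maxAbs = 0.0f;
    for (int i = 0; i < n; ++i) maxAbs = fmaxf(maxAbs, fabsf(w[i]));
    float scale = maxAbs / 7.0f;

    signed char q[n];
    for (int i = 0; i < n; ++i) {
        int v = (int)roundf(w[i] / scale);
        q[i] = (signed char)(v < -8 ? -8 : (v > 7 ? 7 : v));  // clamp to [-8, 7]
    }

    signed char* dq; float* dout;
    cudaMalloc(&dq, n); cudaMalloc(&dout, n * sizeof(float));
    cudaMemcpy(dq, q, n, cudaMemcpyHostToDevice);
    dequant_int4<<<1, 32>>>(dq, scale, dout, n);

    float back[n];
    cudaMemcpy(back, dout, sizeof(back), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("w=%+.3f  int4=%+d  back=%+.3f  err=%.3f\n",
               w[i], q[i], back[i], fabsf(w[i] - back[i]));
    return 0;
}
```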
New RT cores with Mega Geometry
The GeForce RTX 50 uses fourth-generation RT cores. These not only multiply by 8 the capacity to calculate ray-triangle intersections, but also incorporate important advances that can be grouped under two major headings: optimized ray tracing for hair, and Mega Geometry.
To improve ray tracing applied to hair, NVIDIA has introduced "linear swept spheres", primitives that are placed at key points along each strand and allow a more efficient geometric representation of hair, reducing memory consumption without compromising graphics quality.
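The saving is easy to estimate with back-of-the-envelope numbers (ours, not NVIDIA's actual formats): a tube of triangles needs a ring of vertices per segment, while a linear swept sphere segment is just two endpoints and two radii.

```cuda
#include <cstdio>

// Illustrative memory comparison for one hair strand of 32 segments.
// Real primitive layouts differ; the point is the order of magnitude.

struct TubeVertex { float pos[3], normal[3]; };   // 24 bytes
struct SweptSphereSegment {
    float p0[3], p1[3];    // segment endpoints
    float r0, r1;          // sphere radius at each end
};                          // 32 bytes

int main()
{
    const int segments = 32;
    const int sides = 6;   // hexagonal tube cross-section

    size_t tube = (size_t)(segments + 1) * sides * sizeof(TubeVertex)
                + (size_t)segments * sides * 6 * sizeof(unsigned); // 2 tris/quad
    size_t lss  = (size_t)segments * sizeof(SweptSphereSegment);

    printf("triangle tube:        %zu bytes\n", tube);
    printf("linear swept spheres: %zu bytes (%.1fx smaller)\n",
           lss, (double)tube / (double)lss);
    return 0;
}
```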
Mega Geometry is NVIDIA's response to the growing geometric load in games, which makes it increasingly difficult to apply high-quality ray tracing. Games today move billions of polygons, a figure that represents a real challenge, and Blackwell's answer is this new technology, which allows ray tracing to work with high-resolution polygon meshes, an approach reminiscent of Nanite, one of the great stars of Unreal Engine 5.
When Mega Geometry is applied, a much more efficient clustering and hardware-compression process comes into play, making it unnecessary to fall back on low-resolution proxy meshes. In this way, graphics quality and performance improve even in games with very high geometric complexity. The fourth-generation RT cores also retain the box intersection engine and the Opacity Micromap Engine.
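Conceptually, the clustering stage splits the full-resolution mesh into small fixed-size groups of triangles, each with its own bounds, over which acceleration structures can be built and updated quickly. A toy sketch of that grouping (ours; the actual cluster acceleration structures are built by NVIDIA's driver and hardware):

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cfloat>

// Toy Mega-Geometry-style clustering: split the triangle list into fixed-size
// clusters and compute an AABB per cluster. Only the grouping idea is shown.

#define TRIS_PER_CLUSTER 128

struct AABB { float lo[3], hi[3]; };

__global__ void cluster_bounds(const float* verts,   // 9 floats per triangle
                               int numTris, AABB* out, int numClusters)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= numClusters) return;

    AABB box = {{FLT_MAX, FLT_MAX, FLT_MAX}, {-FLT_MAX, -FLT_MAX, -FLT_MAX}};
    int first = c * TRIS_PER_CLUSTER;
    int last  = min(first + TRIS_PER_CLUSTER, numTris);

    for (int t = first; t < last; ++t)
        for (int v = 0; v < 3; ++v)
            for (int axis = 0; axis < 3; ++axis) {
                float p = verts[t * 9 + v * 3 + axis];
                box.lo[axis] = fminf(box.lo[axis], p);
                box.hi[axis] = fmaxf(box.hi[axis], p);
            }
    out[c] = box;
}

int main()
{
    const int numTris = 300;
    const int numClusters = (numTris + TRIS_PER_CLUSTER - 1) / TRIS_PER_CLUSTER;

    // Fake vertex data: 9 floats (3 vertices) per triangle.
    float* hv = new float[numTris * 9];
    for (int i = 0; i < numTris * 9; ++i) hv[i] = (float)(i % 100) * 0.1f;

    float* dv; AABB* db;
    cudaMalloc(&dv, numTris * 9 * sizeof(float));
    cudaMalloc(&db, numClusters * sizeof(AABB));
    cudaMemcpy(dv, hv, numTris * 9 * sizeof(float), cudaMemcpyHostToDevice);

    cluster_bounds<<<1, 32>>>(dv, numTris, db, numClusters);

    AABB* hb = new AABB[numClusters];
    cudaMemcpy(hb, db, numClusters * sizeof(AABB), cudaMemcpyDeviceToHost);
    for (int c = 0; c < numClusters; ++c)
        printf("cluster %d: lo(%.1f %.1f %.1f) hi(%.1f %.1f %.1f)\n", c,
               hb[c].lo[0], hb[c].lo[1], hb[c].lo[2],
               hb[c].hi[0], hb[c].hi[1], hb[c].hi[2]);
    return 0;
}
```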
Source: www.muycomputer.com