Ray Tracing 101: The RTX Series and Why it Matters

by Taliesin Coward

nVidia has caused quite a stir in the PC gaming world with its recent announcement that it has, a decade earlier than anyone expected, developed a new generation of GPUs (graphics processing units) capable of handling real-time ray tracing: the RTX series. Unfortunately for anyone interested in the new technology and curious about the advantages it may bring, there’s a lot of noise and some frankly confusing explanations floating around the internet. So, to help clarify things (and to help you make an informed decision), let’s go back to basics: what is ray tracing?

In a nutshell, ray tracing is a physics simulation. However, instead of simulating the effects of gravity, dust clouds, fluid dynamics and hair/fabric (the more common uses in PC games), ray tracing simulates the behaviour of light. Now, simulating how light works in the real world – coming out of a light-source, bouncing around the place with only some beams reaching our eyes – is beyond the capacity of current technology (not to mention needlessly wasteful; why simulate billions of rays which won’t have any impact on the final image?).

Instead, in order to simplify and speed things up, ray tracing starts not with the light-source, but with what the camera can see. A ray is ‘traced’ from the camera out into the virtual world. This ray is then realistically bounced around the environment (including objects which are off-camera) to see if it traces back to a light source, or if there is an obstruction. The computer can then calculate what was hit, and how it will affect the final image. Colour casts, refraction, diffusion, reflections and shadows can all be calculated from this. The more rays are projected from the camera into the virtual world, the more complete and convincing the simulation.
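
To make the idea concrete, here is a minimal sketch of that camera-first approach (purely illustrative, and nothing like a production renderer): one ray is traced per pixel towards a single sphere, and a ‘shadow ray’ is then fired from any hit point towards a point light to decide whether that spot is lit. The scene, camera position and light are all made up for the example.

```python
# A toy ray tracer: one ray per pixel from the camera, tested against a single sphere,
# followed by a shadow ray towards a point light. Illustrative only.
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0]/length, v[1]/length, v[2]/length)

def hit_sphere(origin, direction, centre, radius):
    # Solve |origin + t*direction - centre|^2 = radius^2 for the nearest t > 0.
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius*radius
    disc = b*b - 4.0*c                      # direction is unit length, so a = 1
    if disc < 0:
        return None                         # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

camera = (0.0, 0.0, 0.0)
sphere_centre, sphere_radius = (0.0, 0.0, -3.0), 1.0
light = (2.0, 2.0, 0.0)

for y in range(8):                          # a tiny 8x8 'image'
    row = ""
    for x in range(8):
        # Trace a ray from the camera through this pixel into the scene.
        direction = normalize(((x - 3.5) / 4.0, (3.5 - y) / 4.0, -1.0))
        t = hit_sphere(camera, direction, sphere_centre, sphere_radius)
        if t is None:
            row += "."                      # nothing hit: background
        else:
            # Bounce a shadow ray from the hit point towards the light.
            hit = (camera[0] + t*direction[0],
                   camera[1] + t*direction[1],
                   camera[2] + t*direction[2])
            to_light = normalize(sub(light, hit))
            normal = normalize(sub(hit, sphere_centre))
            # In a full tracer the shadow ray would be tested against every object;
            # here the sphere itself is the only possible blocker.
            row += "#" if dot(normal, to_light) > 0 else "-"   # lit vs. self-shadowed
    print(row)
```

Even this toy version captures the core loop: for every pixel, find what the camera ray hits, then ask what that hit point can itself ‘see’.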

This technique is not new. It’s been used for decades in the film industry, and the first ray tracing algorithm was presented in 1968 by IBM’s Arthur Appel. However, there has been a major barrier to using ray tracing in games, or anything rendered in real-time: it is incredibly computationally intensive. Rendering a few seconds of footage can take days, if not weeks. Each rendered frame is saved, and then stitched together for smooth playback. Hardly a technique you can use for a fast-moving computer game (unless you like your games to run at 2 frames per hour, if that).
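
For a rough sense of scale (an illustrative back-of-envelope figure, not a measured benchmark), consider what even a single ray per pixel demands at an ordinary gaming resolution and frame rate:

```python
# Illustrative back-of-envelope only: rays needed per second for real-time rendering,
# assuming one primary ray per pixel plus a few bounced rays (the counts are made up).
pixels = 1920 * 1080          # a single 1080p frame
frames_per_second = 60
bounces = 3                   # assume each primary ray spawns a few secondary rays

rays_per_second = pixels * frames_per_second * (1 + bounces)
print(f"{rays_per_second:,} rays per second")   # roughly 500 million
```

Every one of those rays needs intersection tests against the scene, which is why offline renderers happily spend hours on a single frame instead.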

Because of this, games have instead traditionally used a variety of techniques which approximate the effect of light, rather than physically simulate light itself. While these techniques, likened by nVidia’s Tom Petersen to painting a picture, can create quite convincing images, there are a number of inherent limitations. For example, ‘baked light’ approximates the effects of lighting by calculating how a light source interacts with a static environment and then saving the result as a texture (an image which can be painted onto an in-game object). Because the result is fixed, it cannot react to changes in the environment, and the bake itself takes a lot of time, especially if you have to go back and change part of that environment.
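
Here is a tiny sketch of the baking idea (illustrative only, using a made-up one-dimensional ‘wall’ rather than a real game scene): the light contribution is computed once, offline, and stored; at run time the game simply reads that stored value back, which is exactly why a moved light or wall isn’t reflected until the bake is redone.

```python
# Illustrative 'bake' for a one-dimensional strip of wall (all values are made up).
light_pos, light_power = -1.0, 10.0           # a single point light to one side of the wall
surface_points = [0.0, 1.0, 2.0, 3.0, 4.0]    # sample points along a static wall

# Offline step: store brightness per point using a simple inverse-square falloff.
lightmap = []
for p in surface_points:
    d = abs(p - light_pos)
    lightmap.append(light_power / (d * d))

# Run-time step: lighting is just a lookup into pre-computed data -- fast, but frozen.
def shade(point_index):
    return lightmap[point_index]

print([round(shade(i), 2) for i in range(len(surface_points))])
```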

Screen space reflections, one of the most advanced techniques used today, can only reflect static objects which are on-screen (if you can’t see it, it won’t appear in the reflection unless a myriad of other costly – and again limited – techniques are employed). Soft shadows and shaped light sources (as opposed to point light sources) have proved problematic to create, as has dynamic indirect diffuse lighting (where light bounces off one surface to indirectly illuminate another). In essence, while you can get close to a realistic image with such techniques, they will always fall short of the mark.
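
To see why the ‘on-screen only’ restriction exists, here is an illustrative sketch (not from the article, using a made-up one-dimensional frame): the reflected ray is marched through the depth buffer of the image already on screen, so the moment it steps outside that buffer the search has nothing left to sample.

```python
# Illustrative screen-space reflection march over a made-up one-dimensional frame.
depth_buffer = [5.0, 5.0, 4.0, 3.0, 2.0, 9.0, 9.0, 9.0]    # hypothetical per-column depths
color_buffer = ["sky", "sky", "wall", "wall", "crate", "floor", "floor", "floor"]

def reflect_in_screen_space(start_col, start_depth, col_step, depth_step, max_steps=16):
    col, depth = start_col, start_depth
    for _ in range(max_steps):
        col += col_step
        depth += depth_step
        if col < 0 or col >= len(depth_buffer):
            return None                  # ray left the screen: no reflection possible,
                                         # even if an object sits just out of frame
        if depth_buffer[col] <= depth:
            return color_buffer[col]     # ray passed behind something visible: reuse its colour
    return None

print(reflect_in_screen_space(start_col=6, start_depth=9.0, col_step=-1, depth_step=-1.5))   # 'crate'
print(reflect_in_screen_space(start_col=1, start_depth=5.0, col_step=-1, depth_step=-0.5))   # None (off-screen)
```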

On the other hand, ray tracing can be used to overcome all of these shortcomings because it simulates the physics of light. Want a shaped light-source? Simply build one and watch as accurate lighting and soft shadows appear. Accurate reflections? Just drop in a reflective surface. And the same goes for indirect diffuse lighting. To see just how much of a difference this technology makes, look at the two real-time demos that have been released: the Star Wars short Reflections, running on the Unreal 4 engine and looking nearly indistinguishable from the actual films; and the humorous Project Sol.

nVidia’s cards have been more than simple graphics renderers for quite a while now. Their acquisition of Ageia allowed their cards to not only calculate graphics, but also run complicated physics simulations (something that takes quite a bit of effort for a CPU to handle, but is relatively easy for the specialised architecture of a GPU). The new RTX series adds to these expanded capabilities by including not only dedicated RT cores to handle ray tracing calculations, but also an array of ‘Tensor’ cores allowing for blindingly fast matrix calculations, which let the cards take advantage of nVidia’s deep-learning AI to support the ray tracing technology. This technology also has benefits outside of gaming, with the RTX cards able to significantly speed up content creation by enabling, for example, real-time 4K video editing and drastically decreasing the time it takes for 3D artists such as model builders and architects to render realistic images.

Will having an RTX card mean your games and applications are then ray traced? Sadly, no. That depends on the developers: the card is simply a platform which lets developers implement ray tracing. While Q2VKPT (an update of the 1997 game Quake 2) shows that it is possible with nVidia’s current technology to have a completely ray-traced game (shadows, lighting, reflections, etc.), we’re still some way off having equipment powerful enough to do a fully ray-traced game with AAA graphics. In the meantime, ray tracing is being incorporated alongside current rendering techniques to improve specific areas. For example, Battlefield V uses ray tracing to create realistic reflections, Metro Exodus and Control are using it to create dynamic global illumination, and Shadow of the Tomb Raider plans to use it to create shadows (though this feature is yet to be released).

So, to the big question: should you go and purchase a new RTX card? Well, it depends. RTX cards can be on the pricey side, particularly when compared to the last generation of non-RTX cards, the GTX 10 series. Currently, the low-tier RTX 2060 starts at $599 AUD, and the top-of-the-line RTX 2080 Ti starts at $1,899 AUD (that’s $1,200 USD for our American friends).

In terms of raw power, the consensus seems to be that if you already own last generation’s GTX 1080, you’ll see little improvement in performance. However, when coming from something less powerful, like a GTX 1070, the performance increase will be marked. If you really want to take advantage of ray tracing – whether in gaming or in creative applications – you’ll need an RTX card and, if you want to see the most an RTX card can currently offer, you’ll need a powerful one at that.

Ultimately, RTX technology brings with it the promise of far better, far more realistic lighting for gamers, faster content creation for 3D artists, and a potential to cut the amount of time and money spent by developers. Even in this early stage, the results of in-game ray tracing are quite impressive, and look set to become only more so as time goes on. Personally, I can’t wait. ■

