RTX 2080 versus GTX 1080 Ti
NVIDIA (NVDA) designed its Turing platform specifically for ray tracing, and that focus is visible in its gaming performance when compared to the previous-generation Pascal architecture.
The GeForce RTX 20 series is built on TSMC's (TSM) 12 nm (nanometer) process node, compared to the 16 nm node used by the GTX 1080 and 1080 Ti, giving the newer cards the performance benefit of a smaller node. The RTX 2080 Ti features 4,352 CUDA cores compared to the GTX 1080 Ti's 3,584 CUDA cores.
The RTX 2080 Ti has 11 GB (gigabytes) of Samsung's (SSNLF) GDDR6 (graphics double data rate) memory, whereas the GTX 1080 Ti has 11 GB of slower GDDR5X memory. Moreover, the RTX 2080 Ti benefits from Turing's redesigned streaming multiprocessor architecture, which allows integer and floating-point operations to execute in parallel.
Despite the performance benefits of a smaller manufacturing node, faster memory, and an improved architecture, the RTX 2080 does not provide any significant improvement over the GTX 1080 Ti in games that use only traditional rasterization technology, and almost all games at present use rasterization.
RTX 2080’s true potential is visible in ray tracing
NVIDIA's RTX 2080's true potential is visible in games that use ray tracing technology to achieve movie-like visuals. In its presentation, NVIDIA did not compare the performance of the RTX 2080 and the GTX 1080 Ti using the traditional TeraFLOPS (trillions of floating-point operations per second) standard. Instead, it showed the performance of its RTX cards using two new measures: RTX-OPS, which captures a card's average performance across operations such as shading and ray tracing, and GigaRays per second, which measures a card's ray tracing speed. Based on these measures, the RTX 20 GPUs offer six times the ray tracing performance of the GTX 10 GPUs.
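To make the GigaRays-per-second comparison concrete, the claimed generational speedup is just the ratio of the two throughput ratings. The figures below are hypothetical placeholders chosen for illustration, not official NVIDIA specifications:

```python
# Toy comparison of ray tracing throughput, stated in GigaRays per second.
# The ratings below are hypothetical illustrative values, not NVIDIA specs.
def ray_tracing_speedup(new_gigarays: float, old_gigarays: float) -> float:
    """Return how many times faster the newer card casts rays."""
    return new_gigarays / old_gigarays

# Hypothetical example: a card rated at 6.0 GigaRays/s versus one at 1.0
print(ray_tracing_speedup(6.0, 1.0))  # → 6.0
```

The same ratio logic applies to any throughput metric, which is why a new unit such as GigaRays per second can still support generation-over-generation comparisons.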
NVIDIA's CEO, Jensen Huang, stated that Turing is a new computing model and needs new performance standards. If ray tracing gains popularity, NVIDIA's RTX 2080 GPU will provide a huge performance leap over its predecessor, the GTX 1080 Ti.
However, investors are more interested in the adoption of NVIDIA’s Turing GPUs by gamers. We’ll look into this next.