
What makes an RTX (2060, 2070, 2080, ti, etc) card any better than a GTX (1060, 1070, 1080, ti, etc) card?


This is a fascinating topic right now, with both generations of cards so close in performance that they trade blows in different types of games and applications.

Right off the bat, the #1 reason why RTX beats GTX is the move to GDDR6. GDDR5 transfers data at 8000 MT/s, while GDDR6 runs at an insane 14000 MT/s. That is nearly double the rate.
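To put those transfer rates in perspective, here is a quick back-of-the-envelope bandwidth calculation (a sketch in Python; the 256-bit bus width is an assumption for illustration, matching cards in this class):

```python
def mem_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    # MT/s times bus width in bytes gives MB/s; divide by 1000 for GB/s
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

gddr5 = mem_bandwidth_gbs(8000, 256)   # 256.0 GB/s
gddr6 = mem_bandwidth_gbs(14000, 256)  # 448.0 GB/s
print(gddr5, gddr6, gddr6 / gddr5)     # the ratio works out to 1.75x
```

Same bus width, same calculation; the only thing that changes is the transfer rate, which is why the headline numbers translate directly into bandwidth.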

But obviously VRAM is only one small part of the equation. If the rest of the GPU is mediocre, then it doesn't matter how fast the data moves around the system. By and large, the workhorses of any graphics processor are its "floating point units" (okay, I know that's not what they are called, but essentially that's what they are). Each GPU "shader" or "CUDA core" or whatever you want to call it is a floating-point calculator, and these are the absolute blood and guts of a graphics card; nothing happens without them.

So ultimately, the speed of a GPU depends on the number of shaders (I'll just call them that) multiplied by the speed of each shader. Say you have a GPU with 2304 shaders running at 1250 MHz (think RX 580), another GPU with 2304 shaders running at 1750 MHz (that's your GTX 1070 Ti), and a third GPU with 2304 shaders that boosts well above stock speeds to 1950 MHz (we'll call him RTX 2070). It isn't hard to figure out that, all else being equal, the RTX 2070 is going to crush the others with the same core count, even if all it had was regular old 8000 MT/s GDDR5, which it doesn't.
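That shaders-times-clock reasoning can be sketched as a rough FP32 throughput estimate (assuming each shader retires one fused multiply-add, i.e. 2 FLOPs, per cycle; real boost clocks vary by card, and this ignores memory and architectural differences entirely):

```python
def fp32_tflops(shaders, clock_ghz, flops_per_cycle=2):
    # shaders x clock x FLOPs-per-cycle, scaled from GFLOPS to TFLOPS
    return shaders * clock_ghz * flops_per_cycle / 1000

# Using the shader counts and clocks from the paragraph above
cards = {
    "RX 580":      (2304, 1.25),
    "GTX 1070 Ti": (2304, 1.75),
    "RTX 2070":    (2304, 1.95),
}
for name, (shaders, clock) in cards.items():
    print(f"{name}: {fp32_tflops(shaders, clock):.2f} TFLOPS")
```

With identical shader counts, the ranking is decided purely by clock speed, which is the point of the comparison.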

With the Turing die shrink to 12 nm (from 16 nm), Nvidia opted for increased complexity rather than a smaller die, most notably by adding specialized integer units to each shader that run concurrently alongside its floating-point units. This is something that RTX and Turing GTX have in common. Nvidia also added smaller, low-precision floating-point units for fast, low-precision operations like fog, shadows, particles, reflections, and smoke. However, as of spring 2019, very little software takes full advantage of those small, fast floating-point units, so at present they don't pose much of an advantage.

The advantages of RTX right now beyond speed, efficiency, and low memory latency are marginal, but their improved performance in the newest games shows that in the future, the performance gap between Turing and Pascal will only continue to widen. If you have no interest in SotTR, Rainbow 6, or Apex Legends, then maybe the Pascal cards or the RX 580/590 are still your best bet.

I will make a confession: it seems to me that the decision to eliminate some of the floating point units in the Turing SM to make room for “integer” units could be a double-edged sword—for the simple reason that you can calculate integers on a floating point register, but you can’t calculate floating points on an integer register. This may be the reason we see the GTX 1060 beating the GTX 1660 in titles like Witcher 3, Strange Brigade, and Middle Earth—despite having 20% fewer “CUDA” cores.

All in all, however, having those concurrent integer units in the Turing SM seems to be an advantage in most titles. On the other hand, if you figure that 20% more cores should equal 20% better performance, then only 7 of the 33 titles tested actually benefit from the "improved" Turing architecture.
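The "20% more cores should equal 20% better performance" yardstick can be written down as a simple check. The fps figures below are hypothetical, for illustration only; they are not real benchmark results:

```python
def benefits_from_arch(fps_old, fps_new, core_ratio):
    # The newer card "earns" its extra cores only if the fps gain
    # meets or beats the core-count ratio (1.20 for 20% more cores)
    return (fps_new / fps_old) >= core_ratio

# Hypothetical fps numbers for two imaginary titles:
print(benefits_from_arch(60.0, 75.0, 1.20))  # 1.25x gain -> True
print(benefits_from_arch(60.0, 66.0, 1.20))  # 1.10x gain -> False
```

By this measure, a title where the newer card wins but by less than the core-count ratio still counts against the architecture, which is exactly the argument being made above.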

So, who is to say exactly what is what?
