1 June 2023

Mid-Range Hardware: RTX 4070 Review (Part 2)


Last time, I looked at the relative performance between the RTX 4070 and an RTX 3070 on an Intel-based system. This time, I've chucked these two cards into my AMD system and compared them with my RX 6800 to see the performance scaling on a mid-range system.


On Strong Foundations...


The system in question:

  • Ryzen 5 5600X
  • 32 GB DDR4 3200 CL18 Corsair Vengeance
  • MSI B450-A Pro Max
  • WD Black SN750 1TB
  • RTX 3070 Zotac Twin Edge OC
  • RTX 4070 MSI Ventus 2x
  • RX 6800 XFX Speedster SWFT 319

Just another quick explainer on the benchmarking. The average fps metric is self-explanatory. However, the minimum fps is calculated from a moving average covering a period of approximately one second. This differs from how some review outlets present percentile fps numbers, taking a single frametime value and converting it into an fps value. As I've discussed on Twitter when raising this subject, that is not a correct procedure: a single frametime value is not an averaged experience and thus cannot become a "frames per second" number - it is taken entirely out of context.

So, instead, I present the maximum frametime value as a pure number in milliseconds and, alongside that, the number of frametime excursions. This is a term I've made up to help describe the overall experience of the title being tested without having to present a million overlaid frametime graphs. It is borrowed from statistical process control - a way of monitoring that a process is under control. In this instance, because I am testing with uncapped framerates rather than aiming for a target, the control limit is set at 3 standard deviations above the mean frametime. Any frametime registered above that limit is counted; the higher the number of frametime excursions, the more stuttery the experience may be for the player.
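
For those who want to follow along at home, here's a minimal sketch of how these two metrics could be computed from a logged list of frametimes (in milliseconds). This is an illustrative reconstruction rather than my exact tooling - the function name is made up, and the one-second window and 3-sigma multiplier are just the values described above:

    import numpy as np

    def analyse_frametimes(frametimes_ms, window_s=1.0, sigma=3.0):
        """Summarise a frametime log: average fps, worst ~1 s moving-average
        fps, maximum frametime (ms), and the count of frametime excursions."""
        ft = np.asarray(frametimes_ms, dtype=float)
        avg_fps = 1000.0 / ft.mean()

        # Minimum fps: slide a ~1 second window across the run and take the
        # worst windowed average - not a single percentile frametime.
        elapsed = np.cumsum(ft) / 1000.0   # seconds elapsed at each frame
        min_fps = avg_fps
        start = 0
        for end in range(len(ft)):
            while elapsed[end] - elapsed[start] > window_s:
                start += 1
            if elapsed[end] >= window_s:   # skip the incomplete first window
                min_fps = min(min_fps, 1000.0 / ft[start:end + 1].mean())

        # Excursions: count frametimes above the mean + 3 sigma control limit.
        limit = ft.mean() + sigma * ft.std()
        excursions = int((ft > limit).sum())

        return avg_fps, min_fps, ft.max(), excursions

Feed something like that a frametime log from your capture tool of choice and it spits out the four numbers presented alongside each chart in these reviews.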

With that out of the way, let's move onto the testing!


Results...


First things first. Even though I used a slightly lower power limit on the OC profile for this setup, a lot of the older titles' results are quite similar to those obtained on the Intel system. However, the keen-eyed among you will note the very slightly lower performance across the entire test suite. This is because the 12400 is a stronger CPU than the 5600X* - and this allows the GPUs to push out more frames. So, aside from the instances where we were already CPU limited, we're actually slightly more CPU limited than we were last time...

*Yes, using DDR4 3200 will cause a slight decrease, but my prior testing showed equal performance or an average difference of less than 5 fps between tuned DDR4 3800 and the four-stick DDR4 3200 setup we're using here.


Unigine Heaven actually shows the opposite trend to the rest of the results (discounting game engines that are biased towards AMD CPUs/GPUs :) ), with the Ryzen setup performing on par with or slightly better than the Intel one at stock. This is roundly reversed in Superposition, though.

For Assassin's Creed Valhalla, there's essentially zero difference between the Nvidia card results on the AMD and Intel systems. In Time Spy, we see the difference in CPU performance (yes, it's the graphics test, but the graphics card still requires the CPU to feed frames to it!). What was interesting here was that I could not get the RX 6800 stable in Graphics Test 2 under any circumstances. Even if I kept the core and memory frequencies at stock with a -5% power limit, the test would crash every time. So, this was a bit of a fail...

In hindsight, I shouldn't have tested with these settings on Arkham Knight...

Since AMD cards cannot use Nvidia's GameWorks technologies, performance is higher on the AMD part because the game is less demanding to run! D'oh! Otherwise, the results are consistent between the platforms. Metro Exodus, on the other hand, appears to show a slight decrease on the AMD system. I would theorise that this is a result of the lower RAM speed in this RT-based title.

Moving onto my manual benchmarks, things begin to get a bit more interesting:


I tested this result four times...

Hogwarts Legacy displays the difference between the performance of the 12400 and the 5600X, with higher GPU utilisation across the board corresponding to higher fps output. Surprisingly, the AMD GPU is highly utilised and has better minimum and average fps than either Nvidia GPU, which I suspect is related to the dreaded Nvidia driver overhead. Otherwise, the CPU limitation in this title is worse using the 5600X with both the 3070 and the 4070 than it was on the 12400 - where the 4070 performed better than the 3070 and only hit a real limitation when OC'ed.

The number of frametime spikes was about the same across the two systems and essentially correlates with the 3070 not having enough VRAM*, while the magnitude of those spikes is generally slightly worse on the AMD system, too. I'm not quite sure why the 3070 shows such a drop in reported power usage from just a 5% difference in the OC between the two systems, but I think it is likely due to the CPU limitation reducing the fps by around 10, on average.
*Yes, I know it's a meme, now. However, this is a real effect in this (and other) titles. The 3070 performs worse in frametime metrics across both systems...


These test results show that Spider-Man is a really well-optimised title. Yes, we observe generally lower performance than on the Intel CPU due to the aforementioned difference in CPU performance, but the RX 6800 performs wonderfully in this title, with generally higher minimum and average fps than the Nvidia parts (aside from the overclocked 4070).

While GPU utilisation is very high on the RX 6800, power consumption is almost as low as the RTX 4070's (which is incredibly impressive!), and whatever optimisations Nixxes performed to port this title to the PC have resulted in generally low maximum frametime spike values and consistent frame presentation, as evidenced by the low number of excursions during the test! Interestingly, these excursions are lower on the AMD system than on the Intel system. I don't have an explanation for this other than (potentially) the fact that I have four sticks of RAM in the AMD system, allowing better system memory access.



Ironically, not the last test - The Last of Us continues to show us the disparity between the performance of the two CPUs, with AMD's results falling slightly behind. However, once again, this is a title where the AMD GPU performs better than either Nvidia card, and at comparable power consumption to the 4070 (which we know is a very efficient part).

It's also interesting to note that the maximum frametime spikes on the Nvidia cards are worse than they were on the 12400, but AMD's 6800 really provides a better experience. This appears to be a title with optimisations built around the RDNA architecture's design. Additionally, in case you missed it in the chart above, the stability of the presentation is much greater on the 6800, with fewer than half the frametime excursions.


This can easily be seen in the frametime plot over the course of the benchmark - the RX 6800 essentially decimates the (usually more potent) RTX 4070 across the entire run. It's not perfect, of course - this is a title that suffers from relatively poor optimisation on PC. However, users with AMD RDNA GPUs will likely not have noticed as many problems as those using Nvidia's products...


Moving onto A Plague Tale: Requiem, the CPU limitation on the R5 5600X is more apparent than ever - with the 4070 performing essentially the same as the 3070 and the RX 6800. What is once again impressive is the power usage of the 6800 - matching and slightly beating the 4070 for similar performance*.

Additionally, once again, the frametime excursions and max frametime spikes are incredibly good in this title - Asobo Studio have crafted an amazing engine that is able to moderate the experience to an extent that most other studios cannot. While I may not be as in love with this title as many other commentators appear to be (the hair strand rendering on the characters really puts me off!), this game is very impressive!
*Of course, with a stronger CPU, this would most likely not be the case...

On Frequency...


You may have noticed that I made quite a few mentions of the CPU bottleneck experienced in this mid-range review, and I am happy to report that DGBurns over on Twitter has kindly agreed to aid me in some CPU-to-CPU comparisons on the AMD side. He also has an RTX 4070, but his CPU is an R9 5950X. Since the 4070 is very tightly controlled at stock settings, card-to-card variation should be absolutely minimal, meaning that this comparison should show us the effect of a faster-clocked Zen 3 CPU versus my 5600X.



Heaven shows that we are essentially identical in performance. However (and you can't see this from the graph above), I can see that his CPU is only 6% utilised - on a 16-core/32-thread part, where one fully loaded thread is ~3% of total capacity, that corresponds to only one or two logical threads in use. This means that the Heaven benchmark is entirely GPU-bound in modern scenarios (the 10-point difference is basically run-to-run variation).

The more modern Superposition has a higher CPU load in comparison. That ~100 pt difference is most likely indicative of the difference in clock speed between the 5950X (~4.9 GHz) and the 5600X (~4.2 GHz) - roughly a 17% clock advantage wherever a single thread is the limit.


Moving on to Arkham Knight, we can see that result repeated, with the average fps increasing by around 10 fps and the minimums by slightly less... Metro Exodus Enhanced Edition shows us a similar, slight uplift from the increased processor frequency and core count.


Conclusion...


Like last time, we find ourselves concluding that, in the mid-range, we are potentially more CPU-bound in modern game titles than most might realise. However, it is not purely a case of simply upgrading your CPU to a higher-end part. CPU architecture is more important than clock speed these days, and the difference between low-to-mid-range parts is not that great - even with RAM tuning. Sure, you might manage to overclock your CPU, too... but that's never a given!

From what I'm seeing on the AMD side of the equation, a V-Cache part is most likely to guarantee the best performance for the price. In our testing, the 5950X was not that much faster than the 5600X, despite clocking around 500 MHz higher on average.

Similarly, the 12400 gave only slightly less performance than the 5950X - showing the strength of Intel's newer architecture, even at a lower cost.

On the other side of things, the RX 6800 is a beast of a GPU for a mid-range PC, giving performance between a 3070 and a 4070 - sometimes equalling the 4070 and sometimes beating it. The fact that it easily matches the 4070 on power usage is very impressive. Unfortunately, there are some software and hardware issues that can crop up on the RX 6000 cards. Yes, these are much better than on the 5000 series, but I have experienced several software problems with Radeon Adrenalin that I could only solve through user modification of the PowerPlay tables using MorePowerTool. I have also, unfortunately, experienced failures of all three of the card's DisplayPort outputs for reasons that I cannot understand*.
*I thought it may have been a faulty DP cable until I remembered that I was using an HDMI cable with a DP adapter...

The other issue I have with cards like the 6800 is the size! I cannot fit this card into my SFF PC - it just won't go! On the other side of the coin, the 4070 can easily be purchased in two-slot, two-fan configurations that fit into almost any build, along with a single 8-pin power connection.

It really is a shame that the RTX 3070 wasn't released with 12+ GB of VRAM, because the card is right on the cusp of consistently good performance when using very high or maximum settings at 1080p. As a result, both the 6800 and the 4070 are sort of overkill for a low-to-mid-range PC setup. However, their VRAM quantity is well matched to gameplay at 1440p.

Personally, I think the cheaper RX 6000 cards are the way to go - as long as you are okay with some potential troubleshooting and other issues. The real issue, right now, is that modern games are CPU limited and there's not much any mid-range or low-end system owner can do about it - it's not a graphics bottleneck for the most part...

Anyway, I hope you found this review series helpful. I hope to cover more games in the future with a keener eye on framerate performance targets, as I did in the Hogwarts Legacy analysis. Until next time!
