24 December 2021

Looking back at 2021 and predictions for 2022...

I'm tempted to make the same joke as last year...

Last year I decided to start a yearly predictions post. It's a bit of fun - it's nice to stretch the imagination and ruminate about what might be coming around the corner, especially with shows such as CES happening right at the beginning of 2022. So there's really no time to wait to see if I'm right in my predictions! 

But before we get ahead of ourselves, let's look back at 2021...

2021 Recap...


Recapping, I predicted:

  • FSR (then known as Super Resolution) would not be that great, and I didn't think it would release before July, with a firm date of 30th June
I'm not sure how to call this one as I could argue it either way. Technically, FSR was officially announced in June 2021 and was "available" from the 22nd of June, at which point AMD called it "released". However, that was more akin to a "developer preview"* than an actual release, and the feature officially released on the 15th of July. So, technically, I was wrong... but in reality, I was correct.

Similarly, the quality of FSR does leave something to be desired in general (though this is a subjective claim), but it's not as bad as I thought it might be. What I will say is that the feature is mostly beneficial to higher-end graphics cards running at higher resolutions... and there's just no getting around this fact. Using FSR at 1080p generally looks worse than DLSS (at least in the games where I've tested them both), though that varies depending on how the latter is implemented and on the art style of the game. I would say that you could argue this point either way.

*In a similar vein to how DirectStorage is still in preview for select developers...

 Verdict: A wash...

The relative performance chart from TechPowerUp...


  • The RX 6700 and 6700 XT will be disappointing and will be 25-30% faster than the RX 5700 XT at around 2.2-2.3 GHz
I think that I was correct about this, though again, there is some leeway to argue either way. The RX 6700 XT was 27% faster than its predecessor, running at a consistent 2.5 GHz (reference model). That's pretty accurate, if a little bit less efficient than I had extrapolated.

There was no RX 6700 released by AMD so that was definitely a disappointment. 

Moving on to the RX 6700 XT, if we ignore price, the card released to acclaim and near performance parity with the RTX 3070, beating it in a good number of titles. Plus, it was designed with a much higher VRAM capacity (which is a good thing!). However, newer titles released over the course of 2021 have seen the RTX 3070 pulling ahead of its competitor at 1080p and 1440p (though in titles where VRAM use is heavy at 1440p and 4K, the RTX 3070 crumples like a piece of rice paper...). There are exceptions, though.

I think that, overall, if the GPU market and availability weren't scuppered right now, I would have been 90% correct. This card is disappointing and, to my mind, priced too closely to the RTX 3070 for its relative value and performance. The saving grace for the RX 6700 XT is the 12 GB of VRAM, now that we're seeing ~7 GB usage at high to ultra settings in games at 1080p...

Verdict: Correct!


  • 'Simulation' in games will rapidly increase, driving huge increases in CPU system requirements
I was plain wrong in this case. The data I collected shows a clear plateauing of requirements. It could be that I was too gung-ho - it's still early in the current console generation and games are still heavily targeting cross-generation hardware, so the limitations of the Xbox One and PS4 are still present when designing games. Hopefully, this situation won't last too much longer. Already, supply of the current-gen consoles is improving - I saw my first Xbox Series X and PS5 boxes in a retail shop in my country of residence this week. Sure, they were €750 and €1,000 respectively, but they were physically there - more than a year after release.

Verdict: Wrong!

  • The RTX 3060 12 GB and 3070 16 GB will not happen
This is another easy one to decide: half correct, half wrong. We didn't see a 16 GB RTX 3070. However, the 12 GB RTX 3060 is present, accounted for, and still stupid. In retrospect, Nvidia didn't really have much of a choice because AMD upped the RAM count on their upper tiers, so Nvidia probably felt they needed to up theirs as well on newer cards. However, given the RX 6600 XT's 8 GB and higher performance, it's clear that AMD outmanoeuvred Nvidia in this respect, even at the lower end.

Verdict: A wash...

  • We will not see the release of Nvidia's Lovelace or RDNA 3 this year
I noted at the time that this was an easy call. I was correct, but really only put this out there because internet commenters were swirling the rumour mill by stating that we might see them at the end of 2021.

Verdict: Too easy...


Counting up, that's 1 correct, 1 incorrect, 2 washes, and 1 'too easy'. Still batting around 50:50 on my predictions. I'll try to improve on that this year.



2022 Predictions...


So let's get right into it because time's a-wastin' and if I leave this any longer on the boil, everything will be leaked by the time I publish!

  • Intel's Arc will be underwhelming in price, though not in performance. Will not appreciably impact availability of discrete graphics cards for consumers.
I'm really looking forward to this release. I'm hoping that it will be a breath of fresh air for the GPU market - even if it's not the complete reset that many people were hoping for (myself included). I think that we're going to see a good supply... mostly in the laptop segment. However, I think that the pricing will not be favourable. Intel can push their tech, but AIBs have become incredibly greedy and push completely unnecessary coolers on virtually every product tier (not to mention lacking cooling on important aspects of the GPU, such as VRMs or GDDR chips...).

  • The i5-12400/F CPUs will be more expensive than prior X400/F-level CPUs. The bargain prices of the 10400/F and 11400/F really are too good to be true.*
I still feel that the 10400 and 11400 series chips are an amazing price for their performance. I have a feeling that the 12400 series will not be as good a value. Why would it be? Intel's CPUs are beating all current AMD chips... so I guess we can expect them to be priced higher than prior iterations? My guess would be higher than $200 at retail for the F entry.
*Aaaand, of course, there was a leak after I originally wrote this suggesting that they will be approx. $179, based on Best Buy retailer systems. Compared to the launch prices of the 10400F ($160) and 11400F ($170), this is right in line with a slight increase, whereas I was expecting around $200.


The list of potential announcements is quite large, but I doubt we'll see all 5 product announcements from AMD... I'd bet on a maximum of three.

  • AMD's v-cache will not make an appearance on the 6-core CPU. 
This means that the v-cache version (Zen 3D) of the R5 5600X will not release. I believe AMD are more likely to want to reduce the price by $/€/£50 in order to compete with the i5-12600/i5-12400 SKUs and apply 3D cache only to the higher-end parts because of the added production cost.

  • I don't expect a clock speed bump for Zen 3D SKUs. 
People are predicting a 100-200 MHz increase, but do they really need it? In gaming, they will likely beat Intel's 12th gen series even without a relatively meaningless clock boost.

  • I predict that DirectStorage will be much ado about nothing in 2022. 
Now, I know I'm not the biggest fan of this software... However, I predict it will be a nice piece of tech that simplifies developers' lives (like Unreal Engine 5) but which doesn't appreciably affect game performance on PC. At least for this year. For future iterations, when and if system RAM is cut out of the equation? Maybe... However, I am looking forward to the final release (whenever that will be) to see what they have to show.


Graphs showing calculated efficiencies of a theoretical RX 7950 XT - 1.95x and 3.00x performance of an RX 6900 XT


  • The Radeon RX 7950 XT (or whatever the full-die SKU ends up being called) will not be 3x or even 2x the RX 6900 XT. 
I've done some calculations for the efficiency per core, per CU, and per clock, and I cannot get the part into this range unless you assume pretty extravagant efficiency gains compared to current RDNA die designs. I've shown in the past that RDNA 2 has no real per-clock performance increase over the original RDNA, since the design of the architecture is essentially identical in this regard (which is fine, because RDNA 2 was able to push the clock frequency comparatively high instead), but what I've also shown is that "going wider" causes inefficiencies in performance. 

Yes, I've said that AMD really have nowhere to go with the RDNA architecture because going wider will just leave performance on the table for the increased die area... and I stand by that comment. I think that the graphs above show the reason why: pushing more frequency on fewer cores results in better performance efficiency (see the 6600 XT and 6700 XT), while you have to drop the frequency on wider parts in order to retain a decent level of efficiency. When I plotted the efficiency of up-clocked versions of the same die, you can see the drop-off compared to a hypothetical where performance increases 1:1 with clock for the same number of cores.

Maybe the claimed performance increases can be considered for special cases such as 4K or 8K gaming... However, I estimate that at 2.5 GHz, a theoretical RX 7950 XT* would be around 1.7x to 1.95x the performance of the RX 6900 XT, with an absolute maximum of 2.24x based on the scaling observed for currently released RDNA 2 parts at 1440p (there's a rough sketch of this sort of calculation below).

It has to be noted, though, that this scaling (obtaining 2.24x) is the absolute best-case scenario at this clock frequency because we observe worse scaling as core count increases, and I believe that will accelerate as the number of cores on a single card grows dramatically. Also bear in mind that this is supposed to be a two-chip package. There is no indication that gaming graphics scaling is equivalent when spread across two different dies - in fact, all historical evidence says that it's not. So I expect further inefficiencies once this is also taken into account.
*With specs according to Greymon's leaks.
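
To make that a bit more concrete, here is a minimal back-of-the-envelope sketch (in Python) of the sort of calculation I mean. Everything in it is an assumption on my part: the RX 7950 XT figures come from the unconfirmed leaked specs, the RX 6900 XT clock is its boost clock, and the scaling-efficiency factors are illustrative guesses rather than values taken from my plots.

# Rough scaling estimate. All numbers are assumptions, not measurements.
RX_6900_XT = {"cus": 80, "clock_ghz": 2.25}   # released part (boost clock)
RX_7950_XT = {"cus": 160, "clock_ghz": 2.5}   # rumoured full-die RDNA 3 (leaked, unconfirmed)

def relative_performance(card, reference, scaling_efficiency):
    # Naive throughput ratio (CUs x clock), discounted by an assumed penalty for going wider.
    raw_ratio = (card["cus"] * card["clock_ghz"]) / (reference["cus"] * reference["clock_ghz"])
    return raw_ratio * scaling_efficiency

# 1.00 = perfect scaling (best case); lower values stand in for the performance
# "left on the table" as the core count increases.
for efficiency in (1.00, 0.85, 0.75):
    estimate = relative_performance(RX_7950_XT, RX_6900_XT, efficiency)
    print(f"scaling efficiency {efficiency:.2f} -> ~{estimate:.2f}x an RX 6900 XT")

With perfect scaling, the raw ratio lands at roughly 2.2x the RX 6900 XT, and the discounted factors drop it into the 1.7 - 1.9x range, which is why I don't see 2x (let alone 3x) happening at these clocks.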

  • Radeon 7000 and GeForce 40 series will both be announced at the tail-end of the year. Nvidia will announce first. However, only Nvidia will have a proper product launch in 2022 for this series. AMD's will be a paper launch, with real availability in Jan 2023.
I'm excited to see what we're going to get with the next generations, but I believe that AMD lack the ability (perhaps more due to capacity) to accelerate the release of their next generation of graphics cards. I think that Nvidia will be able to do the same thing they did last time and push out their first cards before AMD in order to get ahead in terms of PR and mindshare. For AMD, I think they will struggle to get their first cards out, which is why I expect an announcement in November, leading to a weak paper launch in late December with real availability in Jan 2023.

I also wonder whether Nvidia will actually keep prices "static" out of fear of AMD's next-gen advances. This is not something I will bet on, though, so don't count it towards my tally! :)

  • GPU availability won't appreciably improve in 2022.
As per my analysis, I think that even Intel's entry into the market will not enable a good, fluid supply of cards until 2023. Not much more to add to this one.



Wrapping up...

That's all I've got in terms of prognosticating for now. I may have some more posts for you before the end of the year since my Christmas plans have been cancelled. 

Here's wishing you all a happy Christmas and New Year! I hope that 2022 will be better than 2021 was. If you have any predictions or want to disagree with my own, feel free to do so down in the comments :)
