Happy birthday!
It's almost the end of 2020 and, as is usual, most media outlets have done the whole "looking back at the previous year" phase and are now making predictions for this exciting new arbitrarily-defined period!
2020...
Although I didn't make any official predictions at the beginning of this year, I did make predictions for the new console hardware and AMD's CPUs and GPUs.
Quickly recapping, I predicted:
- The consoles would be more expensive than they eventually were (though I got the Series S correct and the PS5 disc version in the approximate range).
- I thought the Series S would have more graphical horsepower than it eventually did and that its RAM wouldn't be split in as poor a way as it is in reality.
- I thought Microsoft would release the Xbox Series X before Sony (as in late summer) but we'll never know if that would have been the case, due to COVID...
- I thought that Zen 3 was going to be pushed to 2021 - I just couldn't see how there would be enough 7nm silicon wafers for all these launches... I'm going to claim a partial "correct" on that one ;) - Note that I couldn't find this prediction on the blog but I made it in several places...
- I got the performance ratio of the RX 6800 XT at the "usual" clock speed correct (72 CU at around 2200 MHz) (after I fixed a calculation error!).
Not very successful! Let's see if I can do better for this coming year:
2021...
AMD's Radeon "Super-Resolution"
I am not sold on this feature. It seems like something half-baked and thrown into the mix at the last minute, given that it's not implemented in either the consoles or on desktop cards. Worse still, it's a very confusing name when AMD already have Virtual Super Resolution (VSR) in their software stack - that piece of software renders at higher resolutions and then scales them down to your display resolution. Confusingly, Super-Resolution renders at lower resolutions and scales them up to your display resolution.
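To make that naming clash concrete, here's a trivial sketch of the two directions of scaling (the scale factors are invented for illustration - they're not AMD's actual modes):

```python
# VSR renders ABOVE the display resolution and scales down;
# "Super-Resolution" renders BELOW it and scales up.
# (Scale factors here are made up for illustration only.)

def render_resolution(display_w, display_h, scale):
    """Internal render resolution for a given per-axis scale factor."""
    return int(display_w * scale), int(display_h * scale)

display = (2560, 1440)

print("VSR render target:", render_resolution(*display, 1.5))    # (3840, 2160) -> downscaled to 1440p
print("SR render target: ", render_resolution(*display, 2 / 3))  # (1706, 960)  -> upscaled to 1440p
```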
There have been commentators saying this feature will be available in the first quarter of 2021. I don't expect it to appear before July at the earliest... I actually don't think we'll even see it this year, but let's put down a firm date: there will be no feature before 30th June.
There is also another reason I don't believe we'll see it very soon - it's supposed to be an open source endeavour. That means wide industry buy-in, which means presentations and software implementations across various engines and platforms... and all of that tends to leak well in advance.
I do not expect the performance gain from "Super-Resolution" to be very impressive...
The final nail in the coffin for this tech, from my limited point of view, is that AMD appear to have no dedicated silicon in their desktop graphics cards to enable it. That means it will likely have to compete with ray tracing, shader compute and texture processing on their Compute Units (CUs). This seems like a bad idea given that you're effectively spreading those limited resources very thinly. So, I'm not expecting much.
Worse still, the graphics cards that will really need (and benefit from) this tech - the RX 6700 and RX 6700 XT - will have fewer CUs to work on all these things in parallel... whilst also operating at lower display resolutions, to boot! I see some commentators decrying the performance of DLSS at 1080p, saying it's blurry and that it's better at 4K... well, imagine this!
There's nowhere for the RX 6700 XT and RX 6700 to go...
The RX 6700 and XT will be relatively disappointing...
I've gone through the lack of an "IPC"* boost between RDNA and RDNA 2 previously, showing that there's no per-clock performance uplift. RDNA 2 is more power efficient and more memory bandwidth efficient (the two are closely interlinked) than RDNA, but the only real performance boost comes from the increased working clock speed available in the architectural update.
*Yes, I know there's 'no such thing as IPC for graphics cards', it's just an easy short-hand for performance... ¬_¬
As I said in that article, if the RX 6700 XT is targeting 2.2 - 2.3 GHz for its operating frequency then we're getting a solid 25 - 30% performance increase over the RX 5700 XT. The unfortunate thing is that, according to my calculations, this places the card at around 80 - 92% of the RTX 2080, which puts it at a significant disadvantage to the RTX 3060 Ti at $400. However, bear in mind that, in real-world scenarios, if that performance uplift holds true, it could place the card just above an RTX 2080 and just below an RTX 2080 Super - using TechPowerUp's relative performance numbers.
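For what it's worth, here's the back-of-the-envelope arithmetic behind those figures - a rough sketch assuming the rumoured 40 CU configuration and that performance scales linearly with CU count multiplied by clock speed (it doesn't quite, thanks to memory bandwidth and other bottlenecks, but it's a reasonable first approximation), taking the RX 5700 XT's 1755 MHz game clock as the baseline:

```python
# Rough relative-performance estimate: performance ~ CU count x clock.
# Baseline: RX 5700 XT with 40 CUs at its 1755 MHz game clock.
# (Linear scaling is an approximation, not a guarantee.)

def relative_throughput(cus, clock_ghz):
    return cus * clock_ghz

baseline = relative_throughput(40, 1.755)  # RX 5700 XT

for clock in (2.2, 2.3):
    uplift = relative_throughput(40, clock) / baseline - 1
    print(f"RX 6700 XT (40 CU) @ {clock} GHz: +{uplift:.0%} vs RX 5700 XT")

# RX 6700 XT (40 CU) @ 2.2 GHz: +25% vs RX 5700 XT
# RX 6700 XT (40 CU) @ 2.3 GHz: +31% vs RX 5700 XT
```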
Given that I was essentially correct about the performance of the RX 6800 XT (see above), I'm leaning closer to RTX 2080 performance, myself.
If I'm right about that, the RX 6700 XT is likely to be a disappointment because there is no way AMD can charge $400 for it, essentially conceding the vast space between the $300-350 segment and the $580 (RX 6800) segment. As I mentioned in my post, there really isn't much AMD can do about this because they have no way of easily cutting down their GPU dies (Navi 21 and 22) to get an intermediate performance level that can slot into this position.
I expect the RX 6700 XT to be too expensive for its performance (unless AMD drop it to $300 instead of the $350 I expect them to charge) and, considering the already lacklustre ray tracing performance and the lack of machine learning silicon in the RDNA 2 design, I think this will push consumers to purchase fully "next gen" RTX 3050 and RTX 3060 cards instead.
The year of a console release sees an uplift in CPU requirements but the following year also sees a similar jump before things start to flatten out...
'Simulation' in games will rapidly increase, driving huge increases in CPU system requirements...
In my trend analysis of PC game requirements over the last ten years, I noticed a jump in the year of each console hardware release - but there is also a subsequent jump in the following year, before things start to level off. This is observed across multiple benchmarks.
I predict that, with the huge processing power of the next gen consoles (now current gen and, really, only "huge" in relation to the last gen consoles), we will see a lot of developers focus on simulations of various kinds. I believe that I may actually be underestimating the coming CPU requirements for games over the next few years because, when I look at my predicted curves, they are far below a linear fit for the period covering 2020-2021.
The RTX 3060 12 GB and 3070 16 GB will not happen...
Now, while it's not impossible that Nvidia could completely screw over their early adopter customers, I do think that these two products make little sense. Worse still, there are rumours that the RTX 3070 Ti will have 10-12 GB of GDDR6, with a slightly increased CUDA core count...
Let me put this as simply as possible - the RTX 3060 Ti and 3070 could have benefitted from 12 GB of VRAM. A 3060 with either 6 GB or 12 GB of VRAM? That doesn't make any sense. It doesn't make sense from a marketing perspective, and it doesn't make sense from a BOM perspective. If the rumours of the RTX 3060 Ti being difficult for AIB partners to turn a profit on are true, a 12 GB 3060 at $300 makes even less sense. You could try and squeeze it in at $350 with the 6 GB at $300, but then you're only doing that to deny AMD any sales. (Which, to be fair, Nvidia could do.)
From a user perspective, the RTX 3060 is a 2-3 year 1080p60Hz card. 12 GB of VRAM doesn't make sense - are 4K-8K textures reliably visible at a resolution of 1080p? I have my doubts. On the other hand, a 6 GB card in 2021 is a crazy suggestion. I really don't know about this, but neither of these rumours (no matter how certain some commentators appear to be about them) makes sense to me. Memory bus aside, I think 8 GB is sensible for the RTX 3060, not 6 or 12 GB. Is it possible we'll see a 'split' memory access arrangement like we've seen in the past? It's possible, especially because it was Nvidia that implemented those solutions...
So, if we're really going to go to crazy town, here... my prediction is this - the RTX 3060 will have 8 GB of VRAM on a 192-bit bus and there will not be a 12 GB variant.
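As a quick sanity check on why the memory bus matters here: GDDR6 chips have 32-bit interfaces and come in 1 GB or 2 GB densities, so capacity is tied to bus width. A short sketch using those standard densities:

```python
# Why GDDR6 capacity is tied to bus width: each chip has a 32-bit
# interface and (currently) comes in 1 GB or 2 GB densities.

CHIP_INTERFACE_BITS = 32
CHIP_DENSITIES_GB = (1, 2)

def uniform_capacities(bus_width_bits):
    """Possible capacities when every chip on the bus has the same density."""
    chips = bus_width_bits // CHIP_INTERFACE_BITS
    return [chips * d for d in CHIP_DENSITIES_GB]

print(uniform_capacities(192))  # [6, 12] -> a 192-bit 3060 naturally gets 6 or 12 GB
print(uniform_capacities(256))  # [8, 16] -> 8 GB naturally implies a 256-bit bus...

# ...so an 8 GB card on 192 bits would need mixed chip densities and the
# kind of 'split' (unbalanced) memory access mentioned above.
```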
Similarly, a 3070 Ti with 16 GB? It could happen at $600* but we're looking at a situation where there's almost no space between a 3070 and 3080... in terms of performance, at least.
*In your DREAMS (RRP)
[UPDATE]
I thought of one more prediction:
We will not see the release of Nvidia's Lovelace or RDNA 3 this year...
I just think that the push-back of availability for all these products* over the course of 2020 and into the beginning of 2021 means it would be crazy to release next gen products within 12 months of the prior ones. I expect these two product releases to be pushed into 2022. At the same time, I am sceptical that Zen 4 will release this year in any meaningful way - it might be in reviewers' hands and announced by the end of the year, but likely not really "released" to consumers. However, since I was wrong about Zen 3, I'm going to let that one go.
*Zen 3, RDNA 2, Zen 2/3 mobile and the custom console silicon are all competing for TSMC's 7nm capacity. Meanwhile, the RTX 30 series is not even releasing to the "mainstream" until January/February 2021 with the RTX 3060...
2 comments:
And along comes the 12GB 3060 at $329. With a Feb 2021 launch, meaning we can MAYBE get our hands on one in...August??? :(
Yep, haha, only a few days into 2021 and I'm already wrong! :D
To be honest, I wanted to make some bold predictions that weren't safe bets. I still think the 12 GB RTX 3060 is a stupid SKU but what can we do?!