I'm a prognosticator, or at least I would like to be. I try to trend aspects of the gaming industry, especially CPU and GPU performance, in order to understand where we're going and what we, as consumers, might expect. The important inflection point has already passed us with the release of the Zen and Navi architectures. However, that doesn't mean we're out of the woods yet.
PC World gave their advice about which graphics cards are good for 1440p gaming and 1440p ultrawide. They recommend an RTX 3060 Ti for the base spec and an RX 6800 for the ultrawide resolution. The important thing here is that they are not wrong... but only, as Tech, over at Tech Deals, is wont to say, "for a hot minute"...
I think that these are good recommendations, though I do disagree a little. In the here and now, the RTX 3060 Ti is good for 1440p and the RX 6800 is good for 1440p ultrawide... but for how long? Obviously, Brad is giving his answers based on the best information available today, but here is where I depart from this general advice: according to my analysis, a 3080 or 6800 XT will only last you at 30 fps, 1080p until 2025 (in rasterisation).
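To be clear about where that 2025 figure comes from: it's a simple trend extrapolation of how quickly average frame rates at fixed settings decay as newer releases get heavier. A minimal sketch of that kind of projection is below - the data points and function are purely illustrative placeholders, not my actual dataset.

```python
# Illustrative only: projecting the year a fixed GPU falls to a target frame rate,
# assuming a roughly linear decline as newer games get heavier.
# The sample points below are hypothetical placeholders, NOT measured results.

def project_year_at_target(samples, target_fps):
    """Least-squares linear fit of (year, avg fps), solved for fps == target_fps."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    intercept = mean_y - slope * mean_x
    return (target_fps - intercept) / slope

# Hypothetical average fps for one card at fixed settings/resolution, per year.
samples = [(2020, 92.0), (2021, 80.0), (2022, 67.0), (2023, 54.0)]
print(f"Projected to hit 30 fps around {project_year_at_target(samples, 30.0):.0f}")
```

A straight-line fit is obviously the crudest possible model; the point is only that the projection becomes mechanical once you pick the right indices to trend.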
Given the reduced release cadence from Nvidia and AMD, we're looking at new graphics cards in 2022 and 2024, at best*. There just isn't enough headroom left in process node and frequency advancements to support any real further gains, architectural differences aside**.
*And I will get to this below.
**Again, I'll get into this a little further below.
This means that we're recommending a mid-range GPU for a mid-range resolution. Is that wise? One of the trends I've picked out over the last year is that computer technology is advancing at an increasing rate. Gone are the relatively static trends of the last 10 years; these are times when buyers need to beware, because technological solutions are in rapid flux. We are in a period, unparalleled in my experience, whereby CPU, GPU, storage and memory technologies are all advancing in tandem - beyond even the periods of rapid change that defined the 1990s and early 2000s - and yet release schedules are stretched farther than ever before...
Yes, we're only missing a sound card revolution but really, if that happened again, we'd be royally screwed!
Look at this graph and tell me to buy an RTX 3060 Ti (same performance as a 2080 Super) for 1440p... *Please note that this graph is in relation to medium settings at a resolution of 1080p*
We are currently in a period where a revolution in graphics design is coming - a period which, once it shakes out, will make things certain once more. I will tell you now that anyone who bought a graphics card from 2018 - 2022 is a fool.
"anyone who bought a graphics card from 2018 - 2022 is a fool"
Ahhhhh, god, that was satisfying...
Luckily, that includes myself. We're all fools! So shut up and enjoy the ride.
This is what life was like back in those heady late 90s and, basically, the whole of the 00s! You bought the best you could afford and turned the settings down as far as you needed to in order to get acceptable performance. This is the "good ol' computer gaming" experience... sorry, I mean, the Personal Computer Master Race (PCMR) experience...
Life had gotten easy and people had become accustomed to a certain level of performance which, in my opinion, was mostly driven by the lacklustre performance of the last console generation: the PS4 and, to a certain extent, the Xbox One held game development in check - though not entirely. Advancements still came, though always in relation to the hardware in those two systems (and their mid-generation successors).
The problem with recommendations right now is that everyone is basically talking out of their behinds - though it's not their fault! Humans are designed to recognise patterns and learn from experience. Unfortunately, this leaves us very vulnerable to radical changes in social systems (I'm not even going to link to the various radical changes that have happened over the last 10-15 years). People don't see these things coming because they either don't trend the correct indices or they can't foresee which indices may become important.
We're talking about market inflections, here.
Now, most people will say that you cannot predict a market inflection. However, it is apparent that, given the right information, most people probably could - and that most failed predictions come down to social conformity rather than a lack of access to that important data.
I believe that, given all the data I've collected and the current and recent prices of graphics cards (ignoring COVID), we are in for a serious market crash in terms of "performance per dollar"!
This is (probably) where the future of technology lies...
Apple have, for the last 20 years, been disrupters of innovation. This is to say that they have not necessarily innovated themselves, but that they have managed to anticipate where the general market would be at a certain point in time and then arrive there a few years early. In so doing, they managed to capture the market early and standardise it to their liking.
They did this with the iPod, iPad, laptops, wearables and all-in-one desktops. Now, some might say that they've also achieved this with the new Macs designed around the custom M1 silicon... but they are incorrect.
There are important distinctions between Apple's disruptions of the past and the trends of the future. First of all, Apple has traditionally disrupted physical manifestations of technology through engineering prowess. All of the listed product categories above are dependent on form factor, tactile feedback and user experience. None of the innovations in question were dependent on the specific underlying technologies or manufacturers that Apple ultimately used.
Secondly, there is no point at which Apple controls all silicon advances or could block advances through patents. Intel, AMD, Qualcomm, Samsung and SK Hynix all control large swathes of ideas which may be implemented in future products.
This is the point at which you may realise where I am headed with this discourse: Apple is being praised for the marvel that is the M1 chip, but Apple didn't invent the approaches it took in making the M1, nor can it preclude other manufacturers from using them. In fact, it wasn't even the first manufacturer or manufacturing segment to go this route... twice.
The other aspect that Apple tends to be credited with is the "correct time to market". Apple is usually not first with many of their implementations, but they are usually right in choosing when those implementations will become important factors for selling items in the market, filling an as-yet-unfilled hole.
Unfortunately, this aspect of their prognosticating dominance will also not save them this time around...
Coreteks is a famous prognosticator - he sticks his neck out and tries to predict the future in much broader and, simultaneously, more specific cases than I could ever possibly manage. However, our thoughts are converging somewhat in the here and now.
An idealised concept from Coreteks' prognostications...
In the same way that Apple is capitalising on the specialisation of core functionalities in their silicon design in order to speed up processing (in comparison to the older, more generalised processor architectures from Intel and AMD), Nvidia has already started down this path towards specialisation through CUDA cores, Tensor cores and RT cores. These three aspects of Nvidia's silicon allow the company to diversify and easily refocus their technological approach.
AMD, on the other hand, have converged their technologies into a single piece of silicon: the compute unit. It is within this combined architecture that AMD's graphics cards process ray tracing, shader and texture computations. I believe that these two divergent approaches towards graphical prowess are reflected in the difference in performance per watt between Apple's M1 and AMD's and Intel's current architectures.
It has been rumoured that both AMD and Nvidia are moving towards chiplet designs for their coming graphics architectures. This makes perfect sense because it's the only way either company can really scale past their current monolithic designs. However, as I've mentioned before, I believe that AMD's core design is at a dead end.
Yes, you can literally just scale outward with ever-increasing numbers of CUs... but that doesn't really get you very far, because it doesn't scale with respect to power. You cannot optimise that generalised silicon approach to the same performance per watt that you can achieve with specialised silicon.
Now, here's where Nvidia potentially has the future already locked down: they have separate designs for generalised floating point processing, tensor (AI) processing and RT processing. If Nvidia wanted to scale based on a chiplet design, they would not need to scale all compute silicon to the same degree. They could mix and match any and all aspects of their three solutions across any number of chips, resulting in guaranteed "performance per aspect".
In the very same way that Apple has shown absolute leadership with their M1 chip through the implementation of specialised silicon, Nvidia can do the exact same thing in their GPUs. AMD, on the other hand, is unable to perform the same trick because they don't have specialised silicon. Similar to the way Ampere's shared datapath means "either INT32 + FP32 or double FP32, but not both", any future AMD GPU built on a derivative of RDNA2 will always face a trade-off between each instruction/compute type instead of guaranteed performance per aspect.
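To make that trade-off concrete, here is a toy model of the argument - nothing in it reflects real specifications from either vendor, and the 3x per-watt advantage for specialised units is my own assumption, doing exactly the work in this toy that specialised silicon does in the paragraph above.

```python
# Back-of-envelope model: "guaranteed performance per aspect" (dedicated blocks)
# vs a single generalised pool, under the same hypothetical power budget.
# All rates and workloads are arbitrary placeholder units, not real specs.

GENERAL_RATE = 10.0   # work units per second, per watt, for any task type
SPECIAL_RATE = 30.0   # assumed: a specialised block is ~3x faster per watt at its own task

# Hypothetical per-frame work for each aspect.
work = {"raster": 120.0, "ray_tracing": 60.0, "tensor": 30.0}

def generalised_frame_time(power_budget):
    """One shared pool: every work type competes for the same silicon, so costs add up."""
    return sum(work.values()) / (GENERAL_RATE * power_budget)

def specialised_frame_time(power_split):
    """Dedicated blocks run their own aspect in parallel; the frame waits on the slowest."""
    return max(work[k] / (SPECIAL_RATE * p) for k, p in power_split.items())

print(generalised_frame_time(9.0))                                                   # ~2.33 (generalised pool)
print(specialised_frame_time({"raster": 5.0, "ray_tracing": 3.0, "tensor": 1.0}))    # 1.0 (tensor-bound mix)
print(specialised_frame_time({"raster": 4.5, "ray_tracing": 3.0, "tensor": 1.5}))    # ~0.89 (rebalanced mix, same budget)
```

The absolute numbers mean nothing; the mechanism is what matters - with dedicated blocks, the vendor tunes a product by changing the chiplet mix, whereas a unified CU design can only trade one workload off against another inside the same silicon.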
You think CPU requirements are going to go through the roof? Maybe not...
Unfortunately for us consumers, this design paradigm is also coming to the CPU side of things. Apple's M1 is a harbinger of the coming CPU apocalypse, too. However, as I stated above, because Apple doesn't corner this market by a long shot (and never can), both Intel and AMD will be able to implement their own versions of "multiple specialised core architectures" (or MSCA - fittingly, this acronym already applies to positions that emphasise sharing of information between specialised research positions) without fear of being dominated, as Apple has been able to dominate other, more specific markets.
Conclusions...?
There are many industry commentators who are absolutely destroying AMD and Nvidia for their lacklustre ray tracing performance right now. However, these are the intermediate products that need to be released in order to get the whole ecosystem on board with "the future". It cannot happen any other way... companies that tried to skip these intermediate steps have all failed (PhysX is a good example of this).
What this does mean, though, is that these high GPU prices will ultimately be unsustainable. Specialised silicon will most likely lead to cheaper overall GPU prices but also to more targeted GPU segmentation, which should also mean better overall prices than we have right now for today's area-inefficient, generalised designs (or so you could argue). Depending on how far out these sorts of graphics processors are, this could lead to a sharp increase in the GPU performance/$ ratio by 2022 or, perhaps more realistically given Hopper's delay, by 2024. By not having to produce huge monolithic dies full of generalised hardware, you could have RTX 3060 levels of rasterisation performance coupled with RTX 3090 levels of ray tracing acceleration and Tensor core upscaling (DLSS). This could finally decouple performance from "features".
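As a purely hypothetical back-of-envelope - every die area and cost figure below is invented to illustrate the mechanism, not a leak or an estimate of real products - the cost of a chiplet-built part scales with the silicon you actually want, instead of with one monolithic die that carries everything at flagship level:

```python
# Hypothetical illustration of decoupling "features" (RT, Tensor) from raster tier.
# Every area and cost figure is invented; only the arithmetic is the point.

COST_PER_MM2 = 0.10  # assumed $ per mm^2 of good silicon (yield, packaging ignored)

# Assumed chiplet areas in mm^2 for different tiers of each aspect.
CHIPLET_AREA = {
    ("raster", "3060-class"): 180,
    ("raster", "3090-class"): 450,
    ("rt",     "3090-class"): 120,
    ("tensor", "3090-class"): 90,
}

def silicon_cost(parts):
    """Sum the cost of only the chiplets a given SKU actually includes."""
    return sum(CHIPLET_AREA[p] for p in parts) * COST_PER_MM2

# Flagship-everything: roughly what a monolithic halo die has to carry.
flagship = [("raster", "3090-class"), ("rt", "3090-class"), ("tensor", "3090-class")]

# The mix suggested above: mid-range raster, flagship RT and Tensor (DLSS) blocks.
mixed = [("raster", "3060-class"), ("rt", "3090-class"), ("tensor", "3090-class")]

print(f"flagship silicon: ${silicon_cost(flagship):.0f}")  # $66
print(f"mixed silicon:    ${silicon_cost(mixed):.0f}")     # $39
```

Interconnect, packaging and binning would eat into this in reality, but the direction of travel is the point: once the aspects live on separate dies, "features" stop being an all-or-nothing function of one huge piece of silicon.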
Ultimately, there isn't much further we can go with generalised architectures in terms of increasing transistor density and heat transmission (i.e. the ability to cool very dense chips running at high frequency) from advancing process nodes. Many commentators have been citing the death of x86 and x64 for some time with the advent of RISC-V and ARM. However, those architectures don't definitively do anything better than the prior two; they just have less functional baggage, meaning that they are free to do whatever they wish. Should Intel and/or AMD choose to shed that baggage, they are sure to be a force to be reckoned with in that tight performance-to-power space.
As it currently stands, the future is MSCA x64 coupled with lower frequency and power usage across both CPU and GPU - sometimes implemented in an APU. In that scenario, the ideas that came together to create the CELL processor were simply too far ahead of their time...