| On time, again! |
It's that time of year when "best of" lists start falling like cats and dogs. So, here's my contribution to the year round-ups: trending of the recommended game requirements for games released this year...
In Brief...
Again, I won't rehash the background of this ongoing study. The prior years' posts can address those questions... What I will re-state is that this is designed to track the average developer-recommended hardware for demanding and popular games on PC. This data can be used to trend the advancement of technology in the industry and may even be useful for certain types of devs (indies?) who might lack access to any sort of free, proper data-driven service. From a consumer perspective, I find this trending interesting for making PC purchasing decisions - specifically surrounding expected lifetimes of components that a user may wish to buy.
As usual, all data is available here... So, let's get to the main discussion!
Market Fluctuations...
Considering The Great RAMpocalypse that is currently ongoing, we may as well start with memory!
| Climbing higher... |
Fortunately, and especially considering the current situation, the most recommended quantity of system memory is still 16 GB. DRAM (specifically DDR4 and DDR5) is hyper expensive right now - we're talking 3x or more the previous price for a 32 GB kit, which is bad... - and my general buying advice is to buy only what you need and nothing more.
In previous years, I've been advocating for getting 32 GB of system RAM but, with current prices of several hundred dollars/euros/<insert currency here>, that advice can no longer logically stand. The vast majority of games don't need more than 16 GB - so stick with that. If you're on DDR4, get a 2x 8 GB kit. If you're on DDR5, some testing shows a negligible loss in performance from going to a single DIMM (stick), while other testing shows a bigger performance loss. Though, aside from the differences in CPU speed and performance between those test setups, the quantity of RAM could be a factor in the discrepancy between these results.
I swear I remember more RAM testing after DDR5 was released which showed minimal gains (capacity equivalent 1 stick vs 2 sticks) but I can't for the life of me find the videos...
What isn't so good for this strategy is that we're still seeing a general increase in random one-off games recommending more (including the first game recommending 64 GB of memory!!), so the general trend is still ticking upwards with a +2.4% increase in games asking for 32 GB.
Overall, though, despite the current dire situation regarding DRAM pricing, gamers are unlikely to be suffering as long as they have 16 GB system memory in their PC since the minimum recommended specs are still typically not more than 16 GB.
Moving on to VRAM, we can see that the upward trend is a bit more pronounced and certain. Developers have been holding back for a long time and they just want more VRAM for their more graphically and technologically advanced games. In addition to this, there is definitely a push towards doing more on the GPU itself.
Sure, 8 GB is still the most recommended quantity and the third-most recommended quantity is 12 GB but the all-important second-most recommended quantity has jumped up to 10 GB.
This really isn't a surprise to anyone who watches videos from GamersNexus, HardwareUnboxed or DigitalFoundry but the big takeaway here is the rate of change: +13.2% for 10 GB and +5.1% for 12 GB, with a total of almost 24% of the games polled recommending more than 8 GB VRAM.
| Looking at the percentages, we're seeing that definitive push for increased VRAM more clearly... |
We're also seeing some positive movement from the GPU vendors in acknowledging the issue. Although not on the cheapest models, more VRAM is making its way into lower-cost models as standard, and production of the 8 GB variants seems to have been reduced as a direct consequence.
Based on the less than inspiring sales of the 8 GB variants*, it's likely that, next generation, the GPU designers will be targeting at least 9 - 12 GB VRAM** in the $300 - 350 segment of the market.
*Based on anecdotal evidence, the 8 GB variants appear less often than even the older RTX 3060 12 GB in e-tailer best-seller lists, and these cards have some of the largest below-MSRP price reductions on the market (the RTX 5070 being the exception!)...
**Assuming that they move to 3 GB GDDR7 modules over a 96 - 128 bit bus. Alternatively, if the DRAM supply improves in time for the design of these products, then they could clamshell 6 or 8 modules of 2 GB capacity and provide 12 - 16 GB VRAM.
Of course, this is highly dependent on what happens with DRAM availability over the course of 2026.
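If you want to sanity-check that module maths, here's a minimal sketch (assuming the usual layout of one GDDR module per 32-bit slice of the bus, doubled when clamshelled) - the configurations are just the speculative ones from the footnotes above, not confirmed products:

```python
# Back-of-the-envelope VRAM capacity maths for the speculated configurations.
# Assumes the standard layout of one GDDR module per 32-bit memory channel,
# with clamshell mode placing two modules on each channel.

def vram_gb(bus_width_bits, module_gb, clamshell=False):
    modules = bus_width_bits // 32      # one module per 32-bit channel
    if clamshell:
        modules *= 2                    # two modules share each channel
    return modules * module_gb

# 3 GB GDDR7 modules on narrow buses:
print(vram_gb(96, 3))                   # 9 GB on a 96-bit bus
print(vram_gb(128, 3))                  # 12 GB on a 128-bit bus

# The clamshell alternative with 2 GB modules:
print(vram_gb(96, 2, clamshell=True))   # 12 GB (6 modules)
print(vram_gb(128, 2, clamshell=True))  # 16 GB (8 modules)
```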
ComputAIng Power...
Over the last year, we've seen a small increase in single core performance recommendations - which isn't surprising because the PS5 Pro isn't really any more powerful than the PS5/Xbox Series X from a single core standpoint. There's some frequency increase that brings it on par with the Series X but, in terms of the microarchitecture, there's no new technology that has pushed the consoles beyond what developers would already be targeting for current generation performance.
On the other hand, multicore performance has increased by a good percentage (approximately 16%). This repeats what we've observed in previous generations: although desktop single core performance isn't increasing that much (X3D chips aside), multicore performance does see larger generational improvements. As a result, as developers recommend newer CPU generations, multicore performance gets increasingly stronger for the same number of cores.
If we move across to looking at that improvement over a console hardware generation, we see reduced "demands" from developers on consumers to realise the intended experience. My reading of this data is that it shows the effect of a longer cross-generational period, as well as the relative performance of the console CPUs compared to the "average" consumer desktop parts available.
So, the increase in single core requirements between the release of the PS5/XSX and the release of the Pro is 25% smaller than the one we had between the PS4/XBO and the Pro/X release. The multicore increase in requirements is closer but still 11% smaller.
So, we're looking at potentially approaching a performance plateau in CPU requirements until we get stronger console hardware... At the end of the last generation, we had 2.3x the single core and 3.5x the multicore performance from the start of the generation. So far, we're 5 years into the current generation. At this point in the last generation, we had a 1.8x and 2.2x increase for single and multicore, respectively. This generation, we have values of 1.4x and 1.8x.
That's a real slow-down... and points to the fact that there are a LOT of CPUs out there that will play modern games without issue and we'll see this in the predictions section, later on.
Stable Confusion...
The average recommended GPU performance requirements for games have been increasing at an almost linear rate over the last couple of years and this continues in 2025 with another 11% performance increase.
This underlines the fact that we're nowhere near maxing-out GPU performance based on the graphical features developers are targeting.
However, looking at the per console hardware generation increases, we see a similar slowdown to that observed for CPU performance. On one hand, this bodes well for consumers - your GPU will last you longer. On the other hand, what does this mean for the availability of new consumer GPU products?
What incentive do AMD, Intel or Nvidia have to put out better low-end hardware if consumers really don't require much better technology - only a higher quantity of VRAM for all of these new features to be enabled?
Now, I'd argue that most of the lower end GPUs don't perform that well and we need higher performing products at lower prices to be able to push the industry and gaming landscape forward. But what we may risk here is one or more of the three GPU manufacturers dropping out of the low-end entirely for at least one generation as the profit drops out of the market due to all these shortage shenanigans.
| Developers just aren't demanding a lot from the consumer for GPU performance, year-on-year... |
The issue here, as I've mentioned in prior posts, is that we can't just keep considering 1080p as the de facto resolution for the rest of time. Monitor technology has advanced and continues to get cheaper and cheaper. Decent 1440p, medium refresh rate monitors can be had for below $250, there's even talk of OLED monitors on the horizon for below $500 - and NONE of those are going to be 1080p.
No one looking at a GPU in 2025, let alone 2026, should be using 1080p gaming performance as the deciding factor. We need to drop this expectation as it's letting the hardware manufacturers get away with giving consumers worse products that underperform.
You only have to look at VRAM usage at 1440p to see that you can lose a significant portion of a GPU's potential once that quantity is exceeded. But you don't even need to look at VRAM or higher resolutions: you can see that the lower-end cards fall apart even in high refresh rate gaming at 1080p...
| We're once again in a period where our 60 class GPUs don't cut it in demanding games at the current mainstream resolution for new displays... |
Right now, consumers are trapped in a cycle of "performance debt". If you can afford mid-to-high-end equipment, you're going to be alright. At the low-to-mid range, you're treading water and every time things start to nudge in the consumer's favour, along come the companies to stamp that out...
I recently made a GPU tier comparison list to see the relative banding of the GPUs that you are most likely to upgrade from and to. I took it back to the RX 5000 and RTX 20 series (though I know there are older cards still in gamers' PCs, I figured these were the relevant comparisons, covering around 7 years of products). I made a cut-off for each band of ±5%: one set of bands runs from 5 to 5 and the other from 0 to 0*.
*I had a tough time figuring out how to explain this. For example, cards are roughly banded into 95 - 105 % and 90 - 100 % groups.
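If that's still hard to picture, here's a rough sketch of the banding logic - the card names and normalised performance numbers below are made up for illustration, not taken from the actual chart data:

```python
# Rough illustration of the overlapping banding described above: bands are
# 10 percentage points wide and staggered every 5 points, so most cards
# land in two bands (e.g. 97.5% falls into both 90-100% and 95-105%).
# Card names and values are purely illustrative.

cards = {
    "Card A": 93.0,
    "Card B": 97.5,
    "Card C": 101.0,
    "Card D": 108.0,
}

def bands_for(perf, width=10, step=5):
    """Return the staggered bands that a normalised performance value falls into."""
    nearest = (int(perf) // step) * step
    return [(start, start + width)
            for start in (nearest - step, nearest)
            if start <= perf <= start + width]

for name, perf in cards.items():
    print(name, perf, bands_for(perf))
```

Cards whose bands overlap heavily are the ones that end up grouped together on the chart.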
What you can see from the chart is that cards with no performance overlap (i.e. a decent performance difference to adjacent products) generally sit higher in the stack (the lines are closer together). Cards which fall within the largest bands (i.e. have a negligible or very small, not humanly noticeable performance difference in raw fps numbers) sit lower in the stack.
| GPU performance tier upgrade chart, banding similarly performing GPUs together... |
Now, this isn't a particularly unusual or unexpected result - you would expect old highly performing cards to match performance with new, lower performing cards. However, what is disappointing is the level of overlap and lack of performance differentiation on display.
The area which I dislike the most is from the RX 6750 XT to the RTX 5060 Ti 16GB. That's 13 cards (I forgot to add the 4060 Ti 16 GB to the same line as the 8 GB) and, if we ignore the RTX 2080 Ti - since that's a flagship card and anyone upgrading from that wouldn't likely be in the market for a low/mid-range product* - we're looking at three generations of product which may have been bought second-hand and have nothing to upgrade to within a reasonable price or performance range... Though at least one could upgrade and obtain more VRAM to apply the performance to!
*Unless they realised they got burnt and vowed never to again pay that much for a GPU!
But the area I despair at is the top third of the chart. There's no trickle-down performance. The RTX 3080 Ti was matched by the 4070 Super but the RTX 5070 is essentially the same card. If this chart is anything to go by, the RTX 6060 will have the performance of an RTX 5060 Ti 16 GB (or slightly above), which would be a good 25% below the RTX 5070! It wouldn't even have moved out of that band I just spoke about. An RTX 6070 might not even reach the RTX 5070 Ti, given the difference between that card and the base 5070.
I made the point above that developers aren't demanding performance uplifts (or are demanding smaller ones) from gamers but I may have had that backwards - there aren't performance gains to be had, so developers are building that into their hardware expectations. And YET people still freak out about Indiana Jones and DOOM: The Dark Ages requiring ray tracing compatible cards. I've seen a lot of incidental posts from people claiming developers need to target older hardware because of the RAM crisis but developers have been doing this since the mid-2010s and even more so since GPU hardware progress began to stall a few years ago. In addition, I've been mentioning this as a conclusion of this trending each year.
So, I don't think there's any worry of developers not doing that. However, we also can't keep expecting zero advancement. Expecting hardware RT feature support in the second half of the 2020s should not be controversial...
Of course, we come back to the main problem: Instead of offering good products at a price point, GPU manufacturers are playing a game of hardware chicken and, unfortunately, the gaming market just isn't worth enough money for them to really care.
Of course, I've already called for the divestment and separation of the gaming parts of the GPU companies - that's the only way this market can be fixed.
Console Comparisons...
Nothing has really changed over the last year and the general trend holds for this console generation - CPU single core performance is holding relatively flat, and still around or slightly below the CPU power of the consoles.
Multicore performance (as noted in the section above) is increasing, most likely as a result of increased multicore efficiency of AMD CPUs and increased numbers of physical cores in Intel CPUs. This isn't really important for gaming except in very particular situations - shader compilation, double-use cases such as streaming, etc.
The GPU performance is increasing slowly - likely as more developers take advantage of the extra power and abilities available on PC hardware compared to the consoles. This is also likely a reason why CPU requirements aren't increasing - we have plenty of CPU performance and, as more features are pushed onto the GPU, CPU performance is less relevant. Of course, this ignores the continued lack of optimisation in many games which mismanage some of these features and also overload the CPU.
It seems a lot of people are still really blind to stutters and frame drops...
I would have liked to begin including ray tracing recommended requirements but so few games are doing this that the data is too sparse to put into any meaningful discussion.
Predictions, Predictions...
Now, we're onto the real reason this series was begun - looking into the "crystal ball" of what we should be targeting in a few years' time. 2025 was the "future" I had originally predicted out to and I never extended that as, for one, it felt like moving the goalposts a little too much. Secondly, it seemed unnecessary. However, looking back, I probably should have extended it each year to provide some ongoing discussion and visibility. It's not like the data wasn't there, I just didn't feel like updating the graphs all the time as some of them have overlaid elements which have to be manually adjusted.
But enough navel-gazing... I'm pushing this thing out to 2030.
From the extrapolated curve fit, we can see that single core performance is expected to continue to slowly grow over the next five years. To account for this new performance region, I've adjusted the comparison products on the charts to provide some relevant reference points.
Unless we're about to see some extreme new, highly demanding games on the horizon (which seems unlikely given current technology price trends) a CPU with the equivalent performance of a Ryzen 7 7700X or i5-14600K should see users through until 2030 without issue.
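For anyone curious, the trend lines here are essentially just curve fits extrapolated forwards in time. As a rough illustration of how that kind of extrapolation works (the yearly index values below are placeholders, and the fit shown is not necessarily the exact one I use):

```python
# Minimal sketch of extrapolating a requirements trend line out to 2030.
# The yearly "performance index" values are hypothetical placeholders -
# swap in the real averages from the spreadsheet to reproduce the charts.

import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024, 2025])
perf_index = np.array([100, 112, 121, 128, 136, 141])  # placeholder data

coeffs = np.polyfit(years, perf_index, deg=2)  # low-order polynomial fit
trend = np.poly1d(coeffs)

for year in range(2026, 2031):
    print(year, round(float(trend(year)), 1))  # projected requirement index
```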
You might note the weird-looking Ryzen 5 9600X result but this is likely a reflection of some of the improvements AMD have made to the Zen 5 architecture which provide a decent performance boost over Zen 4 in many productivity and scientific workloads but are almost invisible in gaming workloads...
The trend for CPU multicore performance is looking to end up somewhere around that of the Ryzen 7 7700X. This metric will likely be less important (as we've historically observed) due to it being affected by the core counts of CPUs available on the market at all price points. However, if you're on a Ryzen 5 Zen 4 or above CPU, you're likely to be fine until 2030.
Contradicting this slightly, my personal predictions for the number of cores/threads are set at 8 and 16 for the next five years. We can see that, this year, we are a bit below that for the mode cores/threads but I expect this to increase either next year or in 2027 and remain there into the 2030s.
The actual average cores and thread values are closer to what I am predicting. We're already seeing an average of 8 cores being recommended and we're currently sitting at an average of 14 threads being recommended due to Intel's CPUs.
The reason I see this still increasing is because, even if consumers are not able to upgrade their systems due to expensive RAM/GPUs, developers are likely to push CPU requirements to make up for the lack of those other resources and, even going back to the AM4 platform, an 8-core/16-thread processor is basically the "best" gaming CPU you could recommend. As we move out of the era of recommending Intel CPUs that don't include hyperthreading, this will push both the average and mode cores and threads up.
Moving on to the highly contentious memory side of the equation, my prognosticating powers are severely hampered by my inability to understand the complex picture of what's currently happening and what will happen. There are too many factors and inflection points on the near horizon to be really accurate here. So, I am going to assume a return to normality within a reasonable timeframe (1-2 years) and, thus, that developer recommended requirements are unlikely to be truly affected.
I'm still expecting the most required quantity of system memory to be 16 GB going into 2026 but for that to increase by 2027 to 32 GB. I'm currently listing 24 GB as the half-measure third most required quantity and that's really only because those module configurations exist and you're going to see some games test against them.
However, after 2027, I don't expect this to change any time soon. 32 GB is such a large value that I believe games would struggle to utilise it in most scenarios (certain genres being an exception) and the next step up is so huge, I don't imagine 48 GB or 64 GB being required any time sooner than 2035 - if ever, within the next 15 years.
We'd need to experience a complete paradigm shift in how computing is performed and programmes are designed (or for web and browser designers to infiltrate gaming dev houses ;) ).
VRAM, on the other hand...
I'm mostly predicting the continued slow growth of VRAM and that's primarily because of AMD and Nvidia's stingy behaviour on the lower-end cards and consumers' absolute refusal (and sometimes inability!) to get cards with more VRAM.
Now, that sentiment is changing and, over the last year, as I noted above, we've seen a broader push-back and refusal of the 8 GB VRAM cards on offer. But they're still there, and manufacturers are going to want to keep trying to push them - especially with the current DRAM availability crisis.
3 GB GDDR7 modules are still missing in action, despite being 2-3 years late from their intended start of manufacture, and I really think the application of those onto existing memory controller widths is going to address the situation. I also think that they would help address the DRAM shortage: stop making 2 GB modules and you get "more" memory per wafer than with GDDR6... Problem solved?
Well, yes and no.
You see, the current rumours from MLID and KeplerL2 point to AMD's low-end GPU cores supporting LPDDR5 - which, to the best of my knowledge, doesn't support higher than 2 GB modules in the spec and has no roadmap to do so. This is pretty bad for AMD's side of the equation because it means the only route to providing higher memory capacities is to double the number of memory modules on the bus. That would add cost and also cause issues with the sheer number of modules required to be procured in the current environment.
The higher end GPU cores are rumoured to support GDDR7 so they can benefit from increased capacity without requiring more modules to be purchased. Thus, they're more wafer efficient.
Nvidia's lower end chips support GDDR7 (with the exception of the RTX 5050 (GB207)) and it's likely that this will continue next generation, or that GDDR7 use will be extended to the lowest GPU design as well. This would allow Nvidia to increase VRAM capacity without having to utilise more memory modules.
Of course, Nvidia could further reduce the memory bus width, still managing to increase or maintain bandwidth and slightly increasing VRAM capacity from 8 GB to 9 GB. I have a feeling that this might be their course of action in the low end...
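To show why a narrower bus doesn't automatically mean a bandwidth downgrade, here's the simple arithmetic - the per-pin speeds below are plausible example figures for GDDR6/GDDR7, not confirmed specs for any upcoming product:

```python
# Memory bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8.
# The data rates below are illustrative GDDR6/GDDR7 figures, not leaked specs.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(128, 20))  # 320 GB/s - e.g. a 128-bit GDDR6 card @ 20 Gbps
print(bandwidth_gbs(96, 30))   # 360 GB/s - e.g. a 96-bit GDDR7 card @ 30 Gbps

# And capacity-wise: 96-bit with 3 GB modules = 3 x 3 GB = 9 GB VRAM,
# versus 128-bit with 2 GB modules = 4 x 2 GB = 8 GB.
```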
These choices will all result in further performance stagnation at the lower end of the product stack, further lengthening those bands I addressed previously in this blogpost.
So, we're looking at an uncertain time ahead for the GPU market and, as a result, gaming.
| Passmark does not adequately rate the RDNA4 architecture at this point in time. I am hoping the situation will improve, otherwise I will be forced to find an alternative... Maybe you can suggest one? |
Shifting over to the compute performance of the GPU, I'm expecting the current trend to continue. Games aren't going to get lighter to run and engines and graphics APIs are all moving towards heavier features. This will result in the "recommended" experience targeting more performant GPUs.
If I look at the trend line, anyone with an RTX 4070 Super and above level of performance will be in good stead for the next five years - potentially ignoring VRAM considerations. On AMD's side, that's an RX 9070 or RX 7900 GRE but taking into account increased RT demands, more appropriately the RX 7900 XTX.
Summing Up...
I spend a decent amount of time on Reddit, helping those with PC builds and troubleshooting build problems. From what I see, a lot of gamers who have updated their PCs to new(ish) hardware within the last two years will be fine for the vast majority of games over the next five years, at 1440p. Those who didn't maximise the GPU upgrade will likely have an easier path to do so at a later point in time when, and if, they need it.
Aside from the current bumps in the road for RAM and GPU performance (and potentially soon to be pricing - again!) I don't think consumers of PC games are in that much of a terrible position going into 2030. Sure, they won't necessarily be playing games at maximum settings but then that doesn't make a bad game good, does it?
Players will adapt but more importantly, devs are likely to keep lower-end hardware in mind for their new releases.
Of course, hardware manufacturers are here to help force developers' hands - whether they like it or not!
The fact of the matter is, aside from both AMD and Nvidia giving lower-end GPUs worse performance and VRAM uplifts, there is another wave of downward pressure keeping game developers from implementing more demanding games: consoles and customised handheld hardware.
First up, we have the handhelds: the Steam Deck - 8 CUs (RDNA2), the Switch 2 - equivalent to an RTX 2050, and the ROG Ally variants - between 12 - 16 CUs (RDNA3). Then we have the Xbox Series S - which is forever holding back the platform - at 20 CUs (RDNA2) and, finally, the latest in a long line of disappointments, the Steam Machine - 28 CUs (RDNA3).
From my own estimations, the best of these (for compute) will be around the performance of the RX 6600 - a GPU from 2021 and the worst an RX 6500 (with more RAM). That's pretty dire...
As the market gets flooded with more and more low-spec, expensive hardware, it pulls the average hardware performance lower, meaning that developers need to aim lower with the expected performance envelopes of consumers to reach the broadest market for their expensive to produce games.
Sure, we can wax lyrical about the scalability of graphics in modern engines but there are limits and these products have the potential to limit the types of games that could be made. The majority of the handhelds won't have an impact on this but Switch 2 will - it will reach a large enough market that it can pull real weight. Similarly, the Steam Machine has the potential to pull down the average performance of consumer hardware depending on how many units get sold into which market.
But that's a blogpost for another time...
Coming back to the current data I figured it would be interesting to see how my polling is doing over time. Due to the way I decide which games qualify (i.e. I keep track of major releases and popular games - mostly with strong graphical qualities) it can skew the results somewhat - though I try to avoid this as much as possible. In fact, you can see that I've generally increased the number of titles polled over time (2020 was pretty dire because of all the delayed titles which were then pushed to 2021). But the overall trend is that I'm polling more games - whether that's due to more games being released which "qualify" or whether it's because I've become more vigilant since I am tracking these things throughout the year instead of just before compiling this post.
It's interesting to see how the number of CPU and GPU SKUs that are listed per year changes.
We had a lull in the GPU requirements during the PS4 Pro/Xbox One X period which began climbing again once we entered the PS5/Xbox Series generation. We saw this same thing during the end of the PS3/Xbox 360 generation and the beginning of the PS4/XBO generation, despite having fewer games polled per year. The reason for this is that the performance of the PC hardware in this period was much greater than that of the consoles and so a broader range of GPU hardware was able to run the games of the time. Additionally, we had a lot of hardware releases throughout the preceding years. Then, once we reached the PS4 Pro and One X release, game requirements started shooting up and older hardware just didn't meet the requirements.
However, I don't think that the reason for increasing GPU SKUs is the same now. Sure, we do have some of that performance overlap - as I mentioned above in the banding discussion - but what I believe is a stronger force is that games are requiring less powerful hardware, overall.
That's not necessarily a bad thing but it does address the fairly common refrains of "developers need to optimise more" (they're already doing so!) and "they're going to have to start optimising for older hardware" (again, they already are and have increasingly been doing so!).
| Remember this beautiful spreadsheet? Imagine applying this to the RTX 50 and RX 9000 series... Would an RTX 5060 even be equivalent to a 50 series? |
As noted in the CPU section, the gains in single core performance are really small and the increases in game requirements are really small, too - meaning that many more CPUs can manage to play modern titles. That's great! It also explains the increase we're seeing in CPU SKUs.
Though smaller in % increase than the GPUs, this can easily be explained through AMD's lack of single core performance compared to Intel's various generations over the late 2010s. So, when Zen started making in-roads into gaming rigs, the CPUs could only match (and in some cases fail to match) generations-old Intel architectures when playing games. That sort of ended around Ryzen 5000 and Zen 3, which is where we've started seeing increased numbers of SKUs, reaching the same historic highs as at the end of the PS3/360 generation of consoles.
The thing to note about where we are now, aside from AI causing issues with the ability to obtain parts and new systems, is that manufacturing process nodes are not only slowing down in terms of performance gains but also increasing in expense. At the end of the PS3/360 era, we had a HUGE node shrink from 45/40 nm to 28/22 nm to 16/14 nm in the space of 5 years from 2011 to 2016 across CPU and GPU.
If there's one positive to potentially come from all of this, it's that we may see a renewed focus on hardware design optimisation over a node-shrink-gain mentality. We've seen this with the evolution of Zen (especially 3 and 5) and also RDNA (most notably 4!). However, we've seen less of this from both Intel and Nvidia, with the former relying on trying to cram more into the same area and the latter focussing on software to bring increased performance.
The last time we had a focus on hardware design improvements from Nvidia was the GTX 9 and 10 series (with the RTX 30 series coming a decent third place in terms of generational performance impact). For Intel? I'm not sure, but I felt both the 12th and 14th generations were pretty big in terms of performance core design wins (even if the latter ended up being a big risk and ultimately backfired due to the degradation problems).
If we can see the manufacturers squeeze optimisation wins out of their architectures, then I think we could be on the cusp of another 2016 - 2017 period where developers are able to stretch their legs and we'll see reduced numbers of SKUs in the recommendations. If we don't see that, then I expect the numbers of SKUs to continue increasing.
Ultimately, I am pretty optimistic for PC gaming over the next five years - as long as RAM prices and GPU prices stay sane... A pretty big ask, I know!

