I'm in an angry mood - so let's start 2021 with a bang! This is going to be a post about what I think is wrong with PC gaming and how we can possibly fix it.
So, buckle up in your seats and get ready for the whiniest, most annoying post I might ever make! :D
The problem...
You already had to suffer through my problems with the UE5 PS5 demo... Oh, you didn't? Well, it's here. So, guess what? I'm back and I'm super annoyed with how things are going down in high-end game development right now.
To be fair to EPIC, the UE5 engine is only a symptom and not the cause. However, it really isn't helping matters because it's steering game development in the wrong direction! So let me define the problem before I actually address specifics:
What's the big issue with consoles? They have limited resources on a hugely limited budget and must optimise as best they can to stay relevant to as broad an audience as possible for as long as possible.

What's the big issue with PC? Hardware vendors all have to fight for themselves in a broad-ish market where standards are flexible and optional; where most vendors are actually resellers or repackagers; and where the only incentive to improve offerings comes from the competition.
However, saying that:
Consoles have a standardised architecture that can be optimised against, with software support that can be improved and, again, optimised against.
PCs have (mostly) standardised software support that can help with hardware utilisation.
And this is the problem that's the cause of what I'm about to discuss... due to the convergence of PC and console hardware, and Microsoft's push to become an "everything, everywhere" game vendor, we have a new issue, heretofore relatively unencountered in PC gaming: standardisation of back-end software support across consoles and PC.
Yeah, that all seems... wait, what? 8 GB of RAM?! Who's been recommending THAT for the last five years?!
Game architectures are all backwards... aka, the Unreal Engine 5 is stupid!
Console hardware is, as I mentioned, always an optimisation-to-cost equation: what can you afford to build, and what is the best way to organise those pieces of equipment/technology in order to achieve a certain level of system performance?
This is all fine and dandy. However, as I stated above, this doesn't usually involve the PC in all but the most tangential manner (i.e. through software ports). Unfortunately, Microsoft has decided to meld the console and PC spaces through their control over APIs that apply to both console and PC software, and through AMD's hardware support for those APIs. DirectX 12 Ultimate is the realisation of the last five-plus years of effort from Microsoft and their industry partners in optimising the software stack for the limitations of consoles.
This is actually a great achievement!
The features in DX12 Ultimate are a substantial, demonstrable improvement in data throughput and management over the prior DX APIs, and they provide support for many software and hardware implementations that enable workarounds for developing games in tandem with other, already implemented, standards.
The problem with this is that these features are backwards-facing: they are addressing issues we had on old hardware. The resizable BAR feature has been available at a hardware level to CPU and GPU manufacturers since 2007, but was never used (because it wasn't needed). At a software level, it's been available since the introduction of Windows 10 in 2015.
I've posted something very similar to this before. I am just repeating myself now but... What the hell?!
Seriously, in the midst of the latest technological advances in PC hardware, we're worrying about I/O and implementing hardware optimisations from the HDD era? When the industry recommendation from reviewers is a 6-core/12-thread CPU; when a gen 3 NVMe drive has a sequential read throughput of 2.0 - 3.6 GB/s or a random read throughput of 380 - 450 MB/s (given enough queued requests), and gen 4 will easily match or improve upon that; when SATA SSDs match the random read throughput of the majority of NVMe SSDs - none of which actually saturates a gen 3 or gen 4 PCIe x4 interface: we are living in a world where I/O is not a problem on the PC platform.
Worse still, WHY are these APIs focussing on direct transfer from system storage to the GPU? DDR3 and DDR4 have transfer speeds in excess of anything PCIe can manage, and BOTH data streams ostensibly need to go through the CPU at some level. So why the hell are we focussing on the slowest path to the GPU when 16 GB of DDR4 is around $70, 32 GB is around $110-120, and 1 TB of NVMe storage is approximately the same?
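To put rough numbers on that comparison, here's a quick back-of-envelope sketch (theoretical peak figures, not benchmarks; DDR4-3000 in dual channel and NVMe x4 links are my example assumptions):

```cpp
#include <cstdio>

// Theoretical peak bandwidths behind the comparison above (not measured):
// DDR4 moves 8 bytes per transfer per channel; PCIe 3.0 carries ~0.985 GB/s
// per lane and PCIe 4.0 ~1.969 GB/s per lane after encoding overhead.
int main() {
    double ddr4_3000_dual = 3000e6 * 8 * 2 / 1e9;  // ~48 GB/s, dual channel
    double pcie3_x4 = 0.985 * 4;                   // ~3.9 GB/s, NVMe link
    double pcie4_x4 = 1.969 * 4;                   // ~7.9 GB/s, NVMe link

    printf("DDR4-3000, dual channel: %.1f GB/s\n", ddr4_3000_dual);
    printf("PCIe 3.0 x4 (NVMe):      %.1f GB/s\n", pcie3_x4);
    printf("PCIe 4.0 x4 (NVMe):      %.1f GB/s\n", pcie4_x4);
}
```

Even the fastest gen 4 NVMe link is roughly a sixth of what an ordinary dual-channel DDR4 setup can feed.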
The answer is simple and it's what I stated above: these features all exist to enable better data management within a console environment with limited system resources and no dedicated pool of system RAM - not to enable better data management on PC. The problem we encounter is that none of the system hardware vendors on PC are designing any hardware that enables these advances. There is no specialised silicon on any CPU, APU, motherboard or GPU that enables the same sorts of decompression and data management advances on PC that we observe in the new console designs.
The APIs and future/current game engines (including the Unreal Engine 5) are targeting problems that don't exist in modern gaming systems. This is a big problem for me and it should be a problem for you and every engine developer out there...
Yes, please go on and tell me that I/O is a problem in 2020/2021...
Actual requirements...
The games I've been tracking have not requested 4 GB of system memory since Everspace in 2017, or 6 GB since Imperator: Rome in 2019 - and those are lone exceptions within their yearly cohorts. Look back at that graph up above and you'll see that most games have been requiring 8 GB of RAM since 2014.
This is a ridiculous situation.
Yes, Microsoft had a point that NVMe storage prices were dropping faster than RAM prices, so for a console it made sense to focus on that aspect of the system in order to save costs. However, for a PC with a $500+ graphics card inside, it is a nonsensical discussion! Every gamer and their mother is advised - by any system builder, tech tuber or random enthusiast on the street - to get 16 GB of RAM in 2020, as a minimum. Yet even the games that "recommend" 12-16 GB don't utilise it!
So, you "recommend" 12 GB of RAM, but you'll never use it? Or is that extra for other stuff you asssume will be happening on the system in the background? |
Developers have games with 100+ GB of storage on disk. Virtually none of that data is needed within a very limited amount of time. Yes, you can argue that it's compressed data and needs the CPU to decompress it, but if an open world game as large and detailed as Cyberpunk 2077 has an install size of only 70 GB and is only utilising 9-point-something GB of system memory in a complex, system-heavy scene at 1080p ultra with RT ultra settings - despite requiring 12 GB in its recommendations for 1080p "high" without RT - then something is wrong with all of these game engines and with the game developers.
Are they trying to require extra overhead, just in case other programmes (and the OS) block out portions of the RAM? Why the hell are they doing that? They don't do it for CPU or GPU resources! In contrast to the situation above, my VRAM is above what the game requires in either of the "recommended" scenarios I outlined.
I've mentioned time and time again that there's zero reason for developers not to require more RAM and actually utilise it. That is WAY more efficient than requiring a high-end gen 4 SSD... the worst part of which is that a) storage is limited and b) not all SSD controllers are created equal (see above). If you have 16 or 32 GB of RAM in your system, then (assuming 4 GB of system reserve) you can fit 12 - 28 GB of compressed data in RAM.
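That arithmetic, spelled out (with the 4 GB reserve being my assumption from above, not a measured figure):

```cpp
#include <cstdio>

// Minimal sketch of the cache-budget arithmetic: total RAM minus a fixed
// reserve for the OS and background programs leaves the space a game could
// pin compressed assets into.
int main() {
    const unsigned long long GiB = 1ull << 30;
    const unsigned long long osReserve = 4 * GiB;  // assumption, not measured

    for (unsigned long long totalRam : {16 * GiB, 32 * GiB}) {
        unsigned long long cacheBudget = totalRam - osReserve;
        printf("%2llu GB system RAM -> %2llu GB compressed-asset cache\n",
               totalRam / GiB, cacheBudget / GiB);
    }
}
```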
Yes, you read that correctly. Compressed. Why am I saying this? Well, it's because all of the DX12 Ultimate features appear to be targeting the streaming of compressed data from system storage to the GPU. They are also targeting reduced bandwidth through managing MIP levels within individual textures - i.e., ways of reducing the required bandwidth per second.
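For a sense of why per-MIP streaming saves bandwidth: each MIP level is a quarter the size of the one above it, so holding back just the top level of a texture until it's actually needed cuts the transfer enormously. A rough illustration with an uncompressed 4096x4096 RGBA8 texture (my example numbers, not anything from the API docs):

```cpp
#include <cstdio>

// Rough illustration of MIP-chain sizes for an uncompressed 4096x4096
// RGBA8 texture: each level is a quarter of the previous one, so streaming
// everything *except* the top MIP moves only ~25% of the bytes.
int main() {
    const unsigned bytesPerTexel = 4;  // RGBA8
    double total = 0.0, topMip = 0.0;

    for (unsigned dim = 4096; dim >= 1; dim /= 2) {
        double size = double(dim) * dim * bytesPerTexel;
        if (dim == 4096) topMip = size;
        total += size;
    }
    printf("whole chain: %.2f MB, top MIP alone: %.2f MB (%.0f%% of total)\n",
           total / 1e6, topMip / 1e6, 100.0 * topMip / total);
}
```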
How is it that doing this from a slow-ass NVMe SSD is preferable to loading a LOT of things up into system RAM and then doing the exact same thing from that amazingly fast pool of memory?
28 GB of compressed data is almost a third of an entire game install... and that's assuming complete compression of all game data. Really, in this day and age, you shouldn't be compressing everything to high hell. Desktop gaming systems can handle large install sizes.
If you're going to optimise things to be decompressed on the GPU anyway, then it's WAY faster to get it from system memory than to get it from system storage, whether that's an HDD or an SSD (of any kind).
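Here's a minimal sketch of that idea in plain C++ - hypothetical names, not any engine's or DirectX's actual API: assets stay compressed in a big system-RAM cache, and the GPU upload path gets handed a view into that memory, only falling back to storage on a miss.

```cpp
#include <cstdint>
#include <span>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: assets stay *compressed* in system RAM; the renderer
// asks the cache for a view of the bytes and feeds them to whatever GPU
// upload + decompression route the engine uses. All names are illustrative.
class RamAssetCache {
public:
    void insert(const std::string& assetId, std::vector<uint8_t> compressed) {
        cache_[assetId] = std::move(compressed);
    }

    // Returns a view of the compressed bytes if resident; an empty span
    // means a miss, i.e. fall back to a (much slower) storage read.
    std::span<const uint8_t> find(const std::string& assetId) const {
        auto it = cache_.find(assetId);
        if (it == cache_.end()) return {};
        return it->second;
    }

private:
    std::unordered_map<std::string, std::vector<uint8_t>> cache_;
};

int main() {
    RamAssetCache cache;
    cache.insert("textures/rock_albedo.ctex", {0x42, 0x43, 0x44});  // dummy bytes

    // Hit: the bytes stream to the GPU at RAM speed instead of SSD speed.
    auto view = cache.find("textures/rock_albedo.ctex");
    return view.empty() ? 1 : 0;
}
```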
Screw this, make sense...
The PC gaming experience is floundering because of this hyper-optimised console crap and it's getting really frustrating. Requiring an SSD to cover up your lack of using system memory? It's crazy! Once games have standardised on 32 GB of RAM, it's there - for EVERY game. You can't guarantee the read throughput of an SN750 Black (gen 3) or an MP600 (gen 4) or ANY SSD. In fact, the move to QLC NAND for higher densities completely destroys this concept.
Worse still, taking that degradation into account, you can't even fill up your SSDs! You need to leave them at around 70% capacity (depending on NAND utilisation, DRAM presence, controller design, etc., etc...).
In contrast, RAM is designed to function at its rated transfer rate no matter what is happening elsewhere in the system or how full it is. We're talking about a 12 - 25 GB/s transfer rate per memory channel (roughly double that in the typical dual-channel configuration) - which seems like a broad range but, compared to SSD performance die-off once drives become full, is nothing to worry about! That's enough to saturate the 16x gen 3 (16 GB/s) interface to the graphics card and, in dual-channel setups, to match or exceed gen 4 (32 GB/s).
A gen 4 SSD costs $150-250 for 1 TB, on which you can fit a relatively minimal number of games... 32 GB of 3000+ MHz RAM will set you back $100-120. It's what we call a no-brainer! The RAM will never die, it will never wear out, and it will always perform the way it's supposed to, no matter how full it is...
Just load a quarter of your game into memory and it's there for the GPU all the time! In the background, manage the slower streaming of data from your storage into system RAM - that's just as easy, if not easier, than the current "optimisations" brought about with DX12 Ultimate and UE5.
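And the background-fill side, again as a hypothetical sketch reusing the RamAssetCache from above: a worker thread reads the assets the game predicts it will need into the RAM cache long before the renderer asks for them.

```cpp
#include <atomic>
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Background-fill side of the RamAssetCache sketched earlier. (NB: a real
// version would need synchronisation around the cache, since the render
// thread reads it while this thread writes it.)
void prefetchWorker(RamAssetCache& cache,
                    const std::vector<std::string>& predictedAssets,
                    const std::atomic<bool>& keepRunning) {
    for (const auto& path : predictedAssets) {
        if (!keepRunning.load()) break;  // e.g. level change: stop early
        std::ifstream file(path, std::ios::binary);
        if (!file) continue;  // asset missing on disk: skip it
        std::vector<uint8_t> bytes((std::istreambuf_iterator<char>(file)),
                                   std::istreambuf_iterator<char>());
        cache.insert(path, std::move(bytes));  // now resident at RAM speed
    }
}

// Usage sketch:
//   std::atomic<bool> keepRunning{true};
//   std::thread streamer(prefetchWorker, std::ref(cache),
//                        std::cref(manifest), std::cref(keepRunning));
```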
It seems like such a simple solution. So why aren't developers doing this?
I'd love to know.
2 comments:
The thing that worries me the most is the NAND degradation you pointed out; we didn't have this issue with RAM before.
BTW, do you have any comment on the PS5 die shot? It seems like it's mostly RDNA 1 stuff with some customisations, but it's marketed as RDNA 2.
Images, if you haven't seen them yet:
https://twitter.com/FritzchensFritz
Sorry I didn't reply here before now - I presumed you were the same person who reached out on Twitter, and I've been really busy with work for a while now, so not much time for the blog.
I am going to be doing a post about the console dies soonish, but just to clarify: RDNA 1 = RDNA 2, pretty much. Any similarities you're seeing are because the architectures are related and basically the same for the compute units, with more differences in the rest of the architecture.
The difference between the discrete and console-integrated parts is the lack of Infinity Cache on the latter, and that's because discrete cards don't have high bandwidth to graphics memory whereas consoles do.
So, no. It's not marketing.
Yes, NAND degradation was the main reason I opted out of buying a PS5. I might have gotten an XSX/XSS (given that the SSDs are more replaceable), but Live! and Game Pass/all the services are not supported in my country (which is very annoying)... so I didn't want to splurge on that and not be able to buy games/use the services.