18 March 2021

Analyse This: The Potential Performance of Xe (DG-2)...

 

After my relatively successful look at RDNA 2-based systems (consoles and the RX 6000 series), I've been thinking of turning my analytical eye towards both Nvidia's and Intel's architectures. While I'm working on getting my GeForce ducks in a row, I figured that I'd strike whilst the iron is hot on DG-2 rumours before more information is released...


I will say upfront that this is much vaguer speculation than the work I did on RDNA/RDNA 2 performance scaling - even more so than usual, given the dearth of actual data points for this architecture at this point in time. But let's take a quick look at what we might expect from a dedicated GPU featuring the Xe architecture.

I used Brad Chacos' excellent review of the Swift 3X laptop as the starting point for this whole exercise - at this point in time, the integrated Xe-LP (and the semi-discrete DG-1, branded as Iris Xe Max) are the only parts that can give us a look at the way the Xe architecture scales. Unfortunately, because these 96 EU (Execution Unit) parts sit inside laptop chips, their performance varies with thermal and other constraints, which makes analysing any benchmarking results very difficult.

For the initial analysis, I spent a lot of time averaging results and trying to account for thermal throttling, as well as for differences in the paired CPU's clockspeed, while drawing parallels between Brad's results and those reported on Notebookcheck.net, where a lot of different laptop models have been tested. In the end, I decided to lean on Notebookcheck's benchmark numbers, as they are relatively easy to compare against the discrete GPUs and the site also tests at resolutions and quality settings that can't easily be found elsewhere.

Specifically, for this comparison I'm using 1080p at ultra quality settings because (spoiler alert) Xe isn't yet available in a GPU high-end enough to give us any data points at higher resolutions.

Extrapolated performance of various Xe configurations... (Confidence in the various game engines increases from left to right, with green being the "best" data.)


Even with this wealth of information, I struggled to find anything like a linear relationship between performance at different frequencies for the same number of EUs. Eventually, I realised that this is because the features of the Xe architecture are not evenly balanced* and compute tasks** do not appear to scale with the same linear gradient as gaming tasks as clockspeed increases (i.e. something is limiting gaming performance gains, and it's probably memory bandwidth or the number of available EUs at this lower end***). A small sketch of the kind of comparison I'm describing follows the footnotes below.
*At this point in time with whatever firmware and software driver support is available
**As shown by the measured SiSoft GPGPU performance numbers, which climb with a much steeper gradient as graphics core frequency increases than any game's performance did

***I'm comparing the performance of the 96 EU parts here 
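
To make that concrete, here's a minimal sketch of the comparison, using entirely made-up placeholder numbers for a 96 EU part (not real benchmark results): fit a straight line of performance against graphics clock for a game and for a GPGPU workload, then compare the relative gradients.

```python
import numpy as np

# Hypothetical 96 EU data points: graphics clock (GHz) vs. measured performance.
# These numbers are placeholders for illustration, not real benchmark results.
clocks = np.array([1.00, 1.15, 1.30, 1.35])       # GHz
game_fps = np.array([30.0, 32.5, 35.0, 35.8])     # e.g. 1080p ultra average fps
gpgpu_gflops = np.array([950, 1100, 1250, 1300])  # e.g. SiSoft-style GPGPU score

# Fit performance = gradient * clock + intercept for each workload.
game_grad, game_int = np.polyfit(clocks, game_fps, 1)
gpgpu_grad, gpgpu_int = np.polyfit(clocks, gpgpu_gflops, 1)

# Compare relative gradients (per-GHz gain as a fraction of the 1.0 GHz value)
# so the two workloads can be compared on the same footing.
rel_game = game_grad / (game_grad * 1.0 + game_int)
rel_gpgpu = gpgpu_grad / (gpgpu_grad * 1.0 + gpgpu_int)
print(f"Relative scaling per GHz - game: {rel_game:.2f}, GPGPU: {rel_gpgpu:.2f}")
```

With placeholder inputs like these, the GPGPU score gains noticeably more per GHz than the game does - which is the sort of mismatch I kept running into with the real data.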

In light of this supposition, I identified a number of games where the calculated linear performance scaled exactly to real-world numbers, some where that calculation was close enough, and others where it was just a little off. I picked two of each and adjusted the scaling factors of the less accurate games to reproduce the real-world numbers. Then I took those linear relationships and scaled them by EU count to get the numbers in the chart above.
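
As a rough sketch of that calibration step (again with placeholder numbers standing in for my actual dataset), the idea is simply to compare each game's linearly predicted figure against the measured one and fold the gap into a per-game correction factor:

```python
# Sketch of the per-game calibration step (placeholder numbers, not real data).
# predicted_fps comes from the fitted linear clock-scaling relationship;
# measured_fps is the real-world result at the same clock and EU count.
games = {
    #  name                predicted_fps  measured_fps
    "scales_exactly":      (40.0,         40.0),
    "close_enough":        (55.0,         53.5),
    "a_little_off":        (70.0,         63.0),
}

corrections = {}
for name, (predicted, measured) in games.items():
    # Correction factor that pulls the linear model back onto real-world numbers.
    corrections[name] = measured / predicted
    print(f"{name}: correction factor {corrections[name]:.2f}")

# The corrected relationship (prediction * correction) is what then gets
# scaled up by EU count for the chart above.
```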

This all assumes a linear relationship between performance and compute resources at a given clock speed [which also assumes there are no large memory access/data bottlenecks to contend with], extended up to higher clock speeds and ending at 1.8 GHz, as that was the latest rumoured clock speed I had seen.
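
Spelled out as a sketch (with a hypothetical 96 EU baseline figure rather than any real measurement), the extrapolation is just proportional scaling by EU count and clock speed - which, as noted above, is an optimistic assumption:

```python
# Extrapolation under the pure linear-scaling assumption
# (i.e. no memory bandwidth or power limits getting in the way).
# All baseline numbers here are hypothetical, for illustration only.
baseline_eu = 96        # EUs in the Xe-LP parts the data came from
baseline_clock = 1.3    # GHz - a plausible sustained clock for those parts
baseline_fps = 18.0     # calibrated 1080p ultra figure for one game (made up)

target_eu = 512         # rumoured full DG-2 configuration
target_clock = 1.8      # GHz - the latest rumoured clock speed

# Performance assumed proportional to (EU count) x (clock speed).
scale = (target_eu / baseline_eu) * (target_clock / baseline_clock)
projected_fps = baseline_fps * scale
print(f"Scale factor: {scale:.2f}x -> projected {projected_fps:.0f} fps at 1080p ultra")
```

Repeating that calculation for each calibrated game, and for the smaller rumoured EU configurations, is what produces the spread shown in the chart.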

As things stand, using this method of comparison we get highly variable theoretical performance for the top 512 EU part - and this isn't surprising given that most of our data comes from laptop components, where power and heat can restrict performance. However, let's look at the general trend:

Going by this data, we're looking at a 512 EU Xe DG-2 part clocked at 1.8 GHz meeting a general performance target of between an RTX 2080 Super and an RTX 3070 - right around the performance of the RX 6700 XT.

This result doesn't deviate too much from the conclusions of several other outlets that have speculated that the full DG-2 die (assumed to be 512 EU) would land around the performance of the RTX 3070. However, what I'm seeing in the current dataset is that certain workloads will be much stronger than others on this architecture, implying that the game and game engine may have a large effect on the performance of any potential graphics cards Intel releases.

At any rate, I'm really looking forward to seeing more concrete numbers from Intel on this architecture and for them to finally enter the market. I think that their entry into graphics will contribute to the GPU price crash I'm predicting for the period of 2022-2024...

I've said elsewhere that I believe Intel can be successful with DG-2 if they enter the market with a discrete GPU at RX 6700 XT performance for $250, or at RTX 3050 Ti/RX 6600 XT performance for $200. At the end of the day, these are the theoretical execution unit counts, meaning that the majority of the dies that make it into products will likely be some cut-down variant.

I think that if Intel wants to enter the market, they need to do it at a reasonable price point rather than chasing high-end performance. Their first generation needs to be a proof of concept for gamers/users to buy into the architecture and ecosystem and gain confidence in their products (and also to let developers get used to the architecture, which DG-1 and Iris graphics are supposed to be helping with). That means coming in significantly lower than both Nvidia and AMD in terms of price and aiming for the mass market, where their branding will help them gain traction. If they aimed for the high end right off the bat, I don't believe many people would have enough faith to spend more than $300 on an Intel graphics card.

At any rate, it'll be interesting to see how this all shakes out!
