28 August 2021

The Relative Value of GPUs Over The Last 10 Years...

There's been a comparison that I've been wanting to do for a long time, but I just haven't had the time or the energy to take it on. Finally, I got a bit of a break from work and recharged enough to sit down, focus on this and get it done.

Below, I'll explore the historic pricing versus the relative performance of each generation of graphics cards from AMD and Nvidia over the period of 2010 - 2020*. From that, I will draw out the generational leaps per performance tier and the relative value of each of those jumps as well as the value within each generation. This will also help us try and understand how the price/performance strategy has changed per generation for both companies.
*Yes, I'm aware that the last ten years are technically 2011 - 2021... but the current generation started in 2020, and some data I would need for 2021 doesn't exist yet!


The dataset collected for Nvidia's graphics cards...

Ground Rules...


I always find it important to lay out my methodology before getting deep into the weeds because it can head off a lot of questions before they're asked. Plus, it helps me to clarify and recap in my own mind what I was doing, perhaps over a period of days or weeks, before I get into the rest of the summary. Skip this section if you just want the data...

As with my trending data, I gathered the info presented below primarily from three sources - TechPowerUp and Passmark, with pricing data and OEM card identification (for exclusion*) via Wikipedia. Both TechPowerUp (from now on, TPU) and Passmark are very useful resources for comparing cards intra- and inter-generationally. However, it is important to note that every benchmark has its designed focus and designed limitations. It's for this reason that different reviewers will pick different benchmarks to make different points about the same hardware, and why certain companies like to downplay the importance of certain benchmarks when their hardware isn't performing so well in them.
*I'm coming to this below...
Passmark is an excellent benchmark because it's a standardised set of data for all the cards listed here (with one or two inferences when an entry is missing). However, it's somewhat of a compute-centric benchmark (at least as far as I understand it) and it can throw up results that don't mesh with actual gaming performance when compared against TPU's GPU hierarchy. This can be seen in the data for the GTX 590 and various Titan cards, where they fall behind the '80' class card in that generation's product stack.

However, saying this, TPU's hierarchy is also not perfect because the data is relative to subsequent testing in the following generations - testing which shifts based on new synthetic and gaming benchmarks. There are results on TPU that, as far as I can see from newer testing performed by smaller outlets, do not hold up if they are reassessed here in 2021, e.g. the GTX 590 (again) - a card which relies on SLI.

Taking all this information into account, I'm sticking with Passmark's data as the core for this analysis. When I looked at the discrepancies - how the performance of those cards at the time, as recorded in TPU's dataset, differs from how they would be expected to perform now - they didn't really change the overall conclusions of the analysis; i.e. the high-end cards didn't magically become much better value for money. Besides, there will be some workloads where these values hold true.

The dataset collected for AMD's graphics cards...

Originally, I had wanted to include all discrete graphics cards for each vendor and each generation over the period 2010 - 2020. However, I quickly realised that there was no reason to include OEM cards because, for one, the pricing data just doesn't exist - you could never buy them individually. Data for OEM performance was also very spotty in both databases and for these two reasons I cut out every OEM card that I was able to find (thanks, Wikipedia!). There may be a few that snuck in under my radar but since I was able to find launch MSRP pricing and performance data for all cards mentioned here, this is my dataset.

Speaking about comparisons of individual cards, I talk a lot about the "value" of the cards. This is purely a metric that I'm using to standardise against. It's a measure of performance per dollar, relative to the most performant card per generation. This does not mean that the "best" card to buy is the best "value" card, though that may sometimes have been the case. The best value metric doesn't take into account longevity (affected by VRAM capacity and memory bandwidth) or empirical performance at any particular display resolution or graphics quality setting in games. Instead, it's meant to be a way for us to evaluate the individual cards and also AMD and Nvidia's price/performance strategies.
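
To make that concrete, here's a minimal sketch of the calculation (in Python) - the card entries are placeholders to illustrate the maths, not values from my dataset, and the exact scaling is just one way to normalise it:

    # "Value" metric sketch: performance relative to the generation's top card,
    # divided by launch MSRP. The card entries here are illustrative placeholders.
    cards = {
        # name: (benchmark score, launch MSRP in USD)
        "Top card":  (20000, 1000),
        "Mid-range": (12000, 300),
        "Entry":     (6000, 170),
    }

    top_score = max(score for score, _ in cards.values())

    for name, (score, msrp) in cards.items():
        perf_pct = 100 * score / top_score  # % of the top card's performance
        value = perf_pct / msrp             # relative performance per dollar
        print(f"{name}: {perf_pct:.0f}% of top card, value = {value:.2f}")

The mid-range placeholder ends up with the highest "value" here - exactly the kind of hump shape you'll see in the charts below.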

Finally, when grouping cards into "classes", Nvidia really made things easy with relatively consistent naming, whereas AMD went through so many naming changes that comparisons based on name or number alone were impossible. What I ended up doing was grouping cards by their approximate name/number and their performance relative to the most performant card of the generation. This is also another instance where oddities like the GTX 590 make little sense if taken at their launch-era performance, because their actual relationship to the lower-tier cards is non-linear and not necessarily logical.

Following my method, the majority of cards actually line up quite nicely in each class and it draws out nice comparisons about how each company positioned its products within each generation - and as you will see, that did change over time.

Now, with that over, onto the analysis...


Aerial View of Radeon...


This chart shows how each card rated against the most performant card in that generation to give a sort of "value" rating, with cards that gave better performance/$ at higher Y-axis values...


AMD entered the 2010s with incredible stack segmentation. They really, REALLY, binned their chips to hell and back, spawning multitudes of GPUs for every price and performance point. This calmed down towards the second half of the decade, mostly because AMD didn't have products coming from their graphics division to bin, segment and/or sell - largely a result of their financial problems, which came to a head in 2016 with the release of the RX 400 series. After that, they took a rather more Nvidia-esque approach in the subsequent generations of cards because they managed to sort out their issues with yield, performance and per-unit profit.

Another thing that was apparent when looking back is that their launch cadence was out of step with Nvidia's, and you can make the argument that this wasn't really rectified until the launch of the RX 6000 series cards in 2020, when AMD finally had tier, price and launch timeframe "parity" with their competitor.

Looking at the graph above, we can observe a general "hump" shape for the majority of the generations. This would be what some call "bloody obvious", though it's nice to see some actual data in black and white confirming that assumption: mid-range cards tend to be the best value for money.

Most of AMD's generations followed a "mid-range gives the best value" trend...


This presumption holds true for six of the nine generations covered here, where, aside from some outliers such as the R9 285 and R7 360, the most perf/$ is gained towards the middle of the stack. What is a bit of a surprise to me is how good a value the RX 400 series cards look here! I mean, I knew they were great value when they came out, but not to this extent...

Coming back down to planet Earth for a moment, it's easy to realise that these high relative values are because the RX 480 was the top card, despite being a mid-range equivalent in Nvidia's stack. This highlights the dangers of using relative comparisons in a vacuum and when the dataset is relatively small*. All we need to know right here and now is that the RX 470 was the best "value" for that generation, further confirming the presumption we made above.
*Just look at medical/social science data mishaps over the years, where small sample sizes led to overstated claims - though often the wider reporting of those is the problem, not the original analysis itself...

However, there were generations which skewed their "value" towards the lower end of the product stack...

On the other hand, there were three generations where the general trend was a slope instead of a parabola... and two of those three are the latest two releases (the RX 5000 and RX 6000 series). Now, I'm not sure exactly what the deal is with the HD 7000 series. Apparently the 28nm process at TSMC was late (or AMD was utilising it early, before it reached maturity), so maybe yields were low; this was also the generation where the GCN (Graphics Core Next) microarchitecture was introduced, so perhaps those factors contributed to the higher than average prices for the high-end products. You can still see that the HD 7790 and HD 7870 XT were pretty good value for money, but the general trend favours the lower-end cards.

For the other two series, there really aren't any stand-out products: the value curve is flatter and the lower-end products give the most "value" to the consumer*. Now, I believe that, HD 7000 series aside, this indicates that AMD is changing strategy in how they segment their products and, as we'll see in a moment, it mirrors what Nvidia has been doing from the GTX 900 series onward - charging more than linearly for higher-performance parts... which does not bode well for consumers wanting a good deal on a new graphics card, going forward.
*See the ground rules for an explanation of "value".

Ground Zero for Geforce...


This chart shows how each card rated against the most performant card in that generation to give a sort of "value" rating, with cards that gave better performance/$ at higher Y-axis values...


At first glance, Nvidia's overview is quite similar to AMD's. There are a lot of humps in the above graph, but what is quite interesting is that Nvidia's product stack per generation was quite controlled for the whole of the decade and it wasn't until the 16/20 series (which some may desire to split apart) that things became a little crazy (though the GTX 700 and 10 series got pretty close). Nvidia appear to have had a better handle on what tiers of performance and price they wanted to target and were more confident about providing those to the public without overwhelming them.

In contrast, AMD have become much more focussed since the RX 400 series and, in my opinion, provide a clearer picture for any consumer choosing their products. You could say that AMD have mastered what Nvidia were practicing, whereas Nvidia are moving towards trying to claw as much revenue as possible out of each die through further segmentation.

It's an interesting inversion of product stack approach and it seems that it perhaps did not serve AMD so well in the past, so it's strange to see Nvidia modelling themselves after their counterpart. Speculating heavily, it might be inferred that an ultra-segmented product stack could be indicative of issues with yield, performance (or both) and a lack of internal confidence in the ability to sell those products. If this is the case, AMD has become more confident, with better-yielding designs and fabrication processes that have higher margins, whereas the rumour is always that Nvidia's designs are expensive to produce, resulting in them needing to carefully segment their products to get the most revenue possible out of them...


For the first half of the decade, Nvidia's generations followed a "mid-range gives the best value" trend...


... and in that hypothetical scenario, it's doubly interesting that for the first half of the decade, Nvidia's most value-orientated cards were firmly in the middle of each generation's product stack. The "60 class" cards were historically deemed to be the best ones to buy, but looking into the second half of the decade we see that, starting with the GTX 900 series, Nvidia's strategy switched to providing better value in the lower-end cards and charging much more than the performance increases at the upper end of the stack would justify - resulting in the much derided RTX 2080 Ti, RTX 3090 and the various Titan cards.

Unfortunately for us consumers, AMD appear to have been desiring Nvidia's margins at the high-end and are copying their approach almost verbatim; an approach that is pushing average GPU prices higher and higher (ignoring temporary market conditions - such as the one we're currently in).

What is also disappointing is that the actual "best value" cards in the most recent generations are of comparatively less value (relative to the most performant card of each generation) than in prior generations. Where values of 0.3 - 0.4 were commonly observed before, the best value cards in the current generation are sub 0.2, meaning that the consumer is getting a lot less (relatively) for their money. Good examples of this are the RX 6600 XT and RTX 3060, which are adequate 60 fps gaming cards at 1080p resolution now... but whose longevity is not really looking like it will be fantastic, potentially necessitating a replacement purchase sooner rather than later in order to maintain a 60 fps experience on newer, yet-to-be-released titles at high graphical settings.

For the second half, their generational pricing switched to a "lower-end gives the most value" model... or alternatively, depending on your point of view, the high-end is price gouged...



Class is Convened...


Taking a look at the historical data above, I can understand where people's learned knowledge that the various classes of card offer a good value/performance ratio comes from. However, it really seems like particular cards/generations precipitated these truisms in the collective consciousness. Take, for example, the GTX 660, which had 71% of the performance of the GTX 690 - the highest relative performance of any 60 class card in Nvidia's product lineups over the 2010 - 2021 period.

Before we get onto that topic, I feel like I need to clarify that "classes" of card really are an Nvidia thing. Looking at the data, AMD have just never been consistent in the relative performance of each card. An "80" class card for AMD in the 400 series is matched to a "60" class card from Nvidia's 10 series. So, for this analysis, I'm using the performance of Nvidia's stack each generation to pick out which card was approximately equivalent on AMD's side and counting that card as part of the class, ignoring the naming scheme that AMD applied (because otherwise it makes no sense to compare the price of a GTS 450 to an HD 5550, or the relative performance each generation when sometimes AMD's cards are 70% of the top card and sometimes 7%).
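
Roughly speaking, the matching boils down to something like this minimal sketch - the scores below are placeholders, not entries from my dataset:

    # Sketch of slotting AMD cards into Nvidia-defined "classes": for each Nvidia
    # card that defines a tier in a generation, pick the AMD card from the same
    # period with the closest benchmark score. All scores are placeholders.
    nvidia_tiers = {
        # class name: benchmark score of the Nvidia card in that tier
        "60 class": 12000,
        "70 class": 16000,
        "80 class": 19000,
    }

    amd_cards = {
        # AMD card name: benchmark score (placeholder values)
        "AMD card X": 11500,
        "AMD card Y": 15800,
        "AMD card Z": 18500,
    }

    for tier, nv_score in nvidia_tiers.items():
        # closest AMD card by absolute difference in benchmark score
        name, score = min(amd_cards.items(), key=lambda kv: abs(kv[1] - nv_score))
        print(f"{tier}: {name} ({score} vs Nvidia's {nv_score})")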

Ultimately, though there are some surprises, Nvidia have been mostly consistent in their tiering of cards and AMD has finally aligned with that for the RX 5000 and 6000 series cards.


You'll need to open this in a new tab/window to fully appreciate it...

Moving back to our journey of discovery, Nvidia have consistently placed each class of cards at around the following performance levels relative to their highest tier card:
  • 50 class: 35 ± 5%
  • 60 class: 58 ± 8%
  • 70 class: 78 ± 10%
  • 80 class: 91 ± 8%
  • 90 class: 89 ± 10%
You may think it's weird that the 80/90 class cards are not necessarily the highest performing products in this analysis, but that is because the 80 Ti class tends to be the most performant (when it's present - at least in the Passmark benchmark environment).

There are some standout cards for Nvidia in these tiers:
  • GTX 660 was 71% (overperforming for the tier)
  • GTX 960 was 43% (quite underperforming)
  • GTX 670 was 95% (very much overperforming!)
  • GTX 770 was 64% (slightly underperforming)
  • GTX 980 was 81% (slightly underperforming)
  • GTX 590 was 75% if taken in today's landscape, but 137% at the time with SLI performance in gaming (from TPU)... so a very powerful card which doesn't necessarily scale well in today's games/applications.
  • The Nvidia Titan X* is another strange card with a 76% score here but a 97% score over on TechPowerUp**. 

So, I believe that the majority of our collective wisdom is based on the experience of consumers getting the 600, 700, 900 and 10 series cards and their competition. 
*Technically, the Titan Xp is the topmost tier card but it was released later in the 10 series generation... and basically had no availability. So, I decided not to count it, treating it instead like the special "CEO" or "Lisa Su" variants of the top-end cards.

**I discussed this discrepancy in the Ground Rules section... Again, this difference doesn't really change the conclusion - the gradient on the graph I have in my spreadsheet changes from 1.1786 to 1.9286.

 
Again, best viewed from a separate window...

Matching up AMD's cards' performance, we can see many gaps where they just weren't competing at all. So, I'm mostly putting this table here for the following graphs, so you can see which cards are "competing" with the equivalent Nvidia cards per gen - there's no sense in asking "what percentage of the performance of the top-tier card are these?" when there's no top-tier card equivalent to Nvidia's...

I will note once again, however, that the comparison is not exact because some AMD cards perform slightly worse and some slightly better than their Nvidia equivalents. This is taken into account in the graphs below, so it's relatively a non-issue as long as you take both sets of data into account when thinking about it - i.e. in some generations you get better price/performance ratios than in others.

Here we look at the price/performance per tier. As a bonus, it provides a visualisation of the step in performance for each leap between released generations...

I like putting data into visual aids and this is a great example of the benefit of doing so (though there can be exceptions!). Ideally, in these graphs we'd see flat gradients with equally-spaced dots - indicating a static price in each tier and a "guaranteed" performance uplift. You might argue we'd prefer to see a decreasing gradient and ever-increasing spaces between the dots, indicating increasing performance for decreasing cost... but that's just not a realistic expectation for any company that wants to operate and stay in business.

What we do observe - once we remove the outliers on AMD's side, the final two HD series, which were relatively expensive for the performance - is that prices are on a general upward trend, something I've already written about. Nvidia's classes are pretty much all in line, with the exception of the 90 class, which has a much larger gradient due to the kick in the face that was the 2080 Ti.

Also, along with the notable performance of the GTX 660 and 670 for those respective generations, we can see that the GTX 900 series was generally a huge leap in absolute numbers and a great bargain for that leap. The 960, 970, 980 and GTX Titan X were some of the cheapest cards in each tier with the largest step in performance at their release.

Of course, the industry and gamers don't tend to look at absolute numbers when comparing, instead often referring to ratios. In that regard, as mentioned above, the standout cards for Nvidia were the 570, 950, 660, 1060 6 GB, 970, 780, 980, 690, GTX Titan, and GTX Titan X and those huge increases are purely down to the competition from AMD in those performance tiers and around those price points.

There are only three times that Nvidia breached a 1.3-1.4x increase gen-on-gen where AMD didn't match it in the above dataset: the GTX 1060 6 GB, GTX 980, and the GTX Titan X (though if you look at TPU's data you could also argue for the Nvidia Titan X as well). In the latter two cases, those cards were released following strong performance from the R9 290/290X and the HD 7990 - which AMD didn't really follow up. For the GTX 1060 6 GB, I do wonder about the databases of both TechPowerUp and Passmark. Passmark is surely underestimating the comparative performance of the RX 480 by a few hundred points, and TPU is also estimating its performance a few percent below the 1060 6 GB... which I also believe is incorrect - especially when taking into account modern titles. If we take the RX 480 at its real value of approximately equal to the GTX 1060 6 GB, then we get an improvement of 1.58x gen-on-gen, better explaining Nvidia's jump as well. Though, looking at that summary from KitGuru, the power draw of the RX 480 shows how hard AMD has been pushing their architectures since the 400 series in terms of clock speed in order to gain enough performance to compete - they have been releasing really efficient architectures which lose that efficiency as they're pushed up the perf/freq/W curve.


Slightly worryingly, the relative performance of each 70 and 80 class card is slowly decreasing over time, meaning you're getting less performance in the stack from each vendor and this is against a backdrop of increasing prices per gen (ignoring AMD's HD 7000 series which was very highly priced)...

Speaking of AMD, the R9 200 series really was a highlight for them: in every class except the 50 class, they performed as well as or better than Nvidia at a significantly cheaper price point. Unfortunately, they lost whatever competitive advantage they had after that, losing on both price and performance. Aside from that whole generation, their standout cards are the RX 470, which blew the 1050 Ti out of the water; the RX 5700 XT/5600 XT, which outperformed Nvidia's equivalents for less money; and the RX 6800, which (probably due to the way Infinity Cache use scales with the number of cores) underperforms in Passmark's benchmarking tool compared to the RTX 3070, putting it around 15% slower than where it should be (i.e. 10% faster than the 3070).


Are we actually advancing? Resolution versus fps in demanding titles...


Firm Resolutions...


The last thing to speak about in this "value" or "worth" conversation is how the cards are targeted. If we're honest, graphics cards and their performance are really relics of their time: without games to push them and without the popular resolutions of the day to run them at, their performance and price ratios are meaningless. In this context, I think it may be useful to look at what we are buying, as consumers.

In 2002, the Geforce4 Ti 4400 was the "60" series card and it was managing around 90 fps in demanding game titles. Subsequent years saw that vary between 30 - 80 fps for cards between the FX 5600 and the venerable 8600 GT. Consumers were using the 1024x768 resolution for longer than the ten-year period leading up to 2011, but monitor resolutions didn't really change over that time, larger monitors were not common and pretty much all performance requirements were due to advancements in the graphics capabilities of new games and their engines. However, it is important to note that 1080p was slithering its way into the consumer market from 2006 onward but really didn't have much of a share worldwide until approximately 2009/2010 (if my memory serves).

The switch to benchmarking at 1920x1080 as a mainstream consumer resolution in the early 2010s meant a regression in the performance in each class of card and you can argue that we're only just getting back to the 60+ fps figures we were achieving in the most demanding games in the late 00s.

Going forward, I do not believe that 1080p will be the mainstream resolution for much longer. 4K is breaking into the market in strides at this point and I think that we should be targeting the 60 class cards achieving 4K 30 fps by 2024. You may argue that the RTX 3060* and RX 6600 XT already achieve this in a large proportion of games with slightly reduced settings (e.g. Cyberpunk), but there are titles - even two-year-old ones (like Control) - that struggle to break an even 30 fps. Of course, this is also before we take into account any upscaling methods like DLSS, FSR or XeSS... and demanding advanced graphical features such as ray tracing.
*In this scenario, the 12 GB of VRAM on the RTX 3060 is a genius move and very forward-looking. Of course, we know this was not the intention for this choice but I'm sure Nvidia will paint it that way at the beginning of next generation...

The 3070 and 3080 have the greatest generational linear leap of all cards in their class...


I think that the value we're getting from our GPUs right now is as good as it gets and that review sites could start dropping 1080p as the mainstream resolution, focussing on 4K instead. This is quite a nice surprise and it implies that we're really in good standing when it comes to value for our money. Unfortunately, this view doesn't pass the 60 fps "sniff test"... and this newly desired focus (as well as even higher fps targets like 120 and 144 Hz) means that consumers are expecting more, and sooner, from their hardware than ever before.

Speaking of more, there was a lot made about Nvidia's claim that Ampere* was the greatest generational leap ever... and a lot made about them "lying" about it or at least, stretching the truth. However, as Gordon over at PC World is fond of saying (and I'm paraphrasing here) - lying creates lawsuits, these companies don't lie - everything they say has a truth that can be pointed to. Here in this analysis, I can confirm that the RTX 3080 and 3070 (the two cards that were revealed on the date of those claims) both have the greatest linear increase in performance. Cards in the other classes that hold this title are the GTX 950, RTX 2060 and Titan RTX.
*Note, not the Geforce 30 series. Though in the blogpost, they mention only Geforce...
It's clear to me that, overall, this current generation is a good time to upgrade but when should we, as consumers, be upgrading and "what from" or "what to"?


Each tier of performance has a price range (MSRP, of course)...


The Upgrader's Dilemma...


The common wisdom regarding chronic upgrades is to do it every other generational release. Have a GTX 1060? Upgrade to an RTX 3060. An RX 480? Then it's an RX 5600 XT for you! However, looking at the graphs above, I'm seeing that this accepted knowledge may not actually be the correct way to go about things, depending on how you think:
Are you looking at price as the deciding factor? Spending in the same range every upgrade? Or are you looking at performance? What's the acceptable performance increase for the price you're willing to spend? 1.5 times? 2 times?

Penny-pinchers...

If we're going by price, then each tier has historically had its own bracket and, as seen above, AMD has always been more expensive than Nvidia for the same performance - with the exception of the 90 class of cards. While I was aware that AMD's cards were becoming more expensive, I thought it was more at the low end. I'm sure a lot of people reading this will think that this isn't the case, as AMD has been, until recently, the discount/value brand, but I think that this is because AMD ended up discounting their products post-release in order to compete. It's difficult to verify in 2021, but the last products I know that AMD cut the price of (before launch, even) were the RX 5700 XT and RX 5700. So, I'm presuming that prior AMD products saw similar cuts post-launch, because the reality cannot be that MSRP was adhered to by AMD and their partners for this result to actually make sense...

Looking at the average prices when moving to the second or third generation after your card, we can see that the second-generation upgrade is generally the cheaper of the two (with the 70 class cards being the exception). From the data, I see averages of 1.08-1.10x the price for an upgrade every second generation versus 1.08-1.20x every third generation for Nvidia. For AMD, I see 0.64-0.97x and 0.86-1.03x respectively. Though, I am careful to point out that there is a lot of variation in that data (as you can see below).

So, from a purely price perspective, it makes more sense to upgrade every second generation, if you're staying within the same class of card. However, that may be a false sense of economy...

While there are certainly exceptions, the vast majority of cards in each Nvidia class only achieve at least 2x the performance of the card being replaced when leaving a gap of three cards... (taken from the earliest card in each class in this analysis)

Performance hogs...

If you're going by performance, then you're generally not achieving double the performance until the third card from the generation you purchased. The big exception to this is the jump from the GTX 700 to GTX 10 series.

Keeping in the same performance bracket and upgrading every second generation doesn't net you any savings over waiting one more generation - you're usually paying 0.9x to 1.2x the price of your original card for around 1.7x to 1.9x the performance when shopping for Nvidia cards. That's pretty damn good. AMD is even more impressive, with certain generations offering 2.5x to 3.7x the performance for much less cost.

However, every third Nvidia card will grant you around 2.3x to 3.0x the performance for 1.0x to 1.3x the price of your original card.

Summarising that more eloquently: those are medians of 1.76x the performance for 1.12x the price when upgrading every second Nvidia generation, or 2.48x the performance for 1.16x the price when upgrading every third generation. Those numbers give ratios of roughly 1.57 and 2.14 respectively, showing that upgrading every third generation nets you the best performance increase per $ (when inflation is taken into account) for Nvidia.
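
As a quick sanity check on those figures, using the rounded medians quoted above:

    # Performance gained per unit of price paid, relative to the original card,
    # using the rounded median figures quoted in the paragraph above.
    def perf_per_price(perf_mult, price_mult):
        return perf_mult / price_mult

    every_second_gen = perf_per_price(1.76, 1.12)  # ~1.57
    every_third_gen = perf_per_price(2.48, 1.16)   # ~2.14

    print(f"Every second generation: {every_second_gen:.2f}x performance per unit price")
    print(f"Every third generation:  {every_third_gen:.2f}x performance per unit price")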

Due to AMD's spotty history of releasing higher-end cards, they don't have the same sort of dataset that Nvidia do. However, it's clear from the 50 and 60 class cards that if you want at least double performance, wait for the third generation after your current card...

You could argue that the cost savings will increase if you wait even longer but I think every third generation is around the point where the original card will no longer be able to play the latest games at the resolution the average gamer wants to.

For AMD, things aren't so clear cut but I blame that on the relative paucity of data because of the lack of cards released. In the 50 and 60 classes, you are generally better upgrading every third generation, as with Nvidia, though AMD's prices have been increasing a bit more in recent generations...

Moving on Up...

Looking within each tier of card and within generations is all well and good but what about people who wanted to upgrade to the next tier of performance? Given that we're seeing reductions in relative performance each gen in the stack from the top performing card, does it make sense to upgrade to the same class of card gen on gen or even two generations up? Not only that but given the lack of lower tier cards this generation and the late release of cards in prior generations, what was worth the upgrade?

Each series starts with the card listed and then moves up by a generation and class of card. 

The data summary of the above graphs...


The answer appears to be a resounding, "It used to be!"

Seriously, look at those performance increases nearer to the start of the decade. Consumers were getting almost double (or double) the performance by moving up a class every generation. Now they're lucky if they get 30-40% increases at each point in the stack, with the only two exceptions being the upgrade from the GTX 1650 to the RTX 3060 and the equivalent RX 5500 XT to RX 6600 XT.

So, in reality, unless you're hurting for a premium experience, it does not make sense to upgrade through the classes - it mostly makes sense to find the price and performance tier you're comfortable at and then upgrade every three generations to get the most from your money. Choosing higher-tier cards will make that wait easier, but only if you're not gaming at 4K or widescreen resolutions.


Caveats... 


One aspect of this analysis that many will probably latch onto right away is that I'm comparing launch MSRP for each card. I'm well aware that market prices are not reflected here. However, setting aside inflated prices due to crypto bubbles and stock shortages, the AIBs (i.e. the third-party board partners that produce most of the items we buy) are increasingly using the cooler designs from the higher-tier cards on the lower-tier cards and then charging premium prices. Looking at the pricing situation like that, I'm left feeling that the mark-ups would be pretty equal on both sides of the aisle ($70-120 on average, with higher mark-ups of up to $150 for specific models and vendors). Unfortunately, while those over-engineered solutions save the AIBs money in design and manufacturing costs and provide extravagant cooling for lower-end cards that will never, ever need it... the fact is that those choices ensure that cards can never be sold at MSRP, because AIBs feel justified in adding the expense.

I'm not happy with this situation.

On the other hand, one aspect of the market that I DO want to talk about and is related to long term price trends is... the chronic under-supply of the GPU industry. That will be in my next post...


Conclusions...


Wrapping up then, in summary: it seems that AMD has migrated to a business plan and product stack strategy very similar to the one Nvidia began to practice with the introduction of the GTX 900 series, with the best value-for-performance cards only existing at the lower end of the stack... along with reduced relative values within the stack itself. In contrast, Nvidia have moved to an ultra-segmented product stack - mimicking those released by AMD in the first half of the decade. It raises the question of whether Nvidia's products are as easy to profit from as AMD's, and whether they are as confident in their stack as AMD are.

Looking at the evolution of graphics card "classes", as defined by the relative performance to the highest performing card within a generation, we see that Nvidia has a pretty narrow range for each class in terms of performance, while I've reconfirmed that prices are increasing in a generally equal fashion (ignoring outliers) across both of AMD's and Nvidia's stacks.

At the same time, I noted that the relative performance of the 70 and 80 classes to the top card in each generation is slowly diminishing, granting worse relative value. Some of this can be attributed to Nvidia's product stack ultra-segmentation where space for "base", "Ti", and "Super" variants of each class is left - Nvidia is holding back performance in order to counter their competitors, if necessary. If that is not necessary, then no enhanced version is released, leaving performance on the table.

I also looked at the target "popular" consumer display resolutions for the 60 class of cards over the period 2010-2020 to see if we have value in that arena as well, discovering that the 60 class cards are very performant even at native 4K, before taking into account DLSS and FSR. Comparing this to the performance of historical 60 class cards shows the RTX 3060 and RX 6600 XT in a favourable light, as cards going back to 2002 did not match this level of flexibility across resolutions, nor did they during the transition to 1080p.

In passing, I confirmed that the RTX 3070 and 3080 really are the greatest linear generational leap (at least since 2010).

Next I looked at the upgrade paths available to consumers, finding that upgrading every third generation (or card within a class) tends to grant the best value/cost ratio. Upgrading by moving up a class each generation can be a useful tool to obtain extra performance but on average the performance gained by doing so is actually falling, so it is not the most cost effective thing to do (even if it was eight years ago).




Are high-end GPUs replacing SLI setups? No, they're not.

There is one last thing I'd like to address and that is an idea that was brought up in a question on Hardware Unboxed's monthly Q&A video. Namely, this person posits that high-end GPUs are filling a need in the market that was previously satisfied by SLI configurations.

Despite Tim and Steve mulling the idea over and sort of tacitly acknowledging it, this could not possibly be true. For one thing, there just was never a big enough market for SLI/Crossfire to make an impact on anything. The estimated number of SLI/Crossfire setups in 2015 was just 300,000... worldwide. Against the roughly 44M discrete GPUs shipped that year (according to Jon Peddie's numbers, above), those setups' ~600,000 cards amount to just 1.4% of the total d-GPUs shipped. And we have no data on when those configurations were actually put together - those 600,000 cards were likely bought over a number of years, not all in 2015... so you'd really need to divide by the total number of cards shipped over multiple years, making that percentage even lower.
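
For anyone who wants to poke at the envelope maths (assuming two cards per setup, as above):

    # Back-of-the-envelope share of the 2015 d-GPU market taken by SLI/Crossfire,
    # using the figures quoted in the paragraph above.
    sli_setups_2015 = 300_000              # estimated multi-GPU setups worldwide
    cards_in_setups = sli_setups_2015 * 2  # assuming two cards per setup
    dgpus_shipped_2015 = 44_000_000        # discrete GPUs shipped in 2015 (Jon Peddie)

    share = cards_in_setups / dgpus_shipped_2015
    print(f"At most {share:.1%} of 2015 shipments")  # ~1.4%, and those cards were
    # almost certainly bought over several years, so the true share is lower still.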

The argument also makes no sense from the perspective of a performance "void" in the product stack, given the relatively low performance gains in SLI-enabled games. We're talking ~30-60% if the user was lucky, and that can be achieved by using a "90 class" card instead of a combination of two "70 class" cards. From my recollection of the various forums I used to visit, most people did SLI and Crossfire with mid-range/mid-high-range cards and not the top-end card, because they did it to save money, not for the best performance. Sure, those people definitely existed, but I think there were more people pairing up HD 7850s than HD 7990s...

The "void" in performance was always available to be taken by the top card in the stack - the main point with SLI (traditionally) was to get the top card's performance for less. If you could get two HD 7850s outperforming an HD 7990 (or even thereabouts), you'd be saving $300 in the process. If saving money was the purpose of using SLI then those price-conscious consumers are not going to be the ones in the market for the high end GPU back then or now. If price is not a factor now when buying an RX 6900 XT then it would likely not have been for buying a Fury X back in 2015 as well.

So, for me, this is nonsensical.
