Update: In the course of writing this I came across a Eurogamer article that basically says what I say, only much better and in more depth, with actual numbers to back it up.
Microsoft and select "friends" have, since their console reveal, come out beating the drum about the advantages of cloud gaming. So far, reading into all the hyperbole, their message is as confusing and nebulous as the Xbox One's DRM and shared game situation was. The cloud, apparently, enables games to make use of dedicated servers and better AI, physics and lighting through distributed processing power. On the surface, these all sound like great things we should be excited about, as advances like these should lead to gaming experiences that are more realistic, deeper and more appropriately challenging to play.
As per usual, I'm not convinced.
Leaving aside the possible and probable hot air that usually accompanies these sorts of announcements (from every company trying to sell you their product), I'm just not seeing the benefits at any large scale.
Dedicated servers have been around forever and they're not difficult to implement. They can be expensive to maintain over a long period because specific hardware is permanently assigned to the task, but they're easy to integrate into a game's net code (as easy as player hosting, if not slightly easier, since you skip the whole host migration issue). That expense stops being the company's problem if the players are paying for access - which historically doesn't happen all that often for official servers (yes, players and clans often rent their own servers, but companies usually launch a game with "official" ones). Shifting dedicated servers to the cloud is beneficial here because they can scale dynamically with the number required at any moment, which makes them cheaper to operate. Then again, with the Xbox One and its Live fees they're being paid for by the players anyway... even though Microsoft is also charging developers/publishers a fee to run them. (Seems a bit like double-dipping to me - but what do I know?!)
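To make that scaling point concrete, here's a trivial sketch - entirely my own illustration with made-up numbers, nothing to do with how Microsoft's actual cloud provisioning works:

```python
# A very simple sketch of the "scale with demand" idea above. The player counts
# and slot size are invented for illustration only.

PLAYERS_PER_SERVER = 16  # assumed slots per server instance

def servers_needed(concurrent_players: int) -> int:
    # Round up so nobody is left without a slot.
    return -(-concurrent_players // PLAYERS_PER_SERVER)

# Launch week versus six months later: the fleet shrinks with the player base,
# which is exactly where the saving over permanently assigned hardware comes from.
for players in (200_000, 20_000, 1_500):
    print(f"{players:>7} players -> {servers_needed(players):>6} server instances")
```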
A parallel downside to dropping the player-host model is that your distance and ping to the dedicated server now matter - just as your distance to the player host (zero, if you were the host) did before. You lose the player-host advantage but gain the "fastest/nearest connection" advantage that PC players have lived with since before 2000. Fair enough, you say: ping isn't that important (even in twitch games) as long as your latency is constant and not jumping all over the place while you're playing, because a good player can compensate for a steady delay - and I agree. But let's not kid ourselves that you couldn't already do that in the player-host situation.
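As a toy illustration of that trade-off - with made-up ping figures, not measurements from any real game - compare the spread of connections under the two models:

```python
# Player hosting: the host sits at 0 ms and everyone else pings them.
# Dedicated server: nobody gets a free ride, but the gap between best and worst
# connection (the source of the unfair advantage) tends to shrink.

PINGS_TO_PLAYER_HOST = {"host": 0, "player_b": 60, "player_c": 120, "player_d": 180}
PINGS_TO_DEDICATED = {"player_a": 40, "player_b": 55, "player_c": 70, "player_d": 85}

def spread(pings):
    return max(pings.values()) - min(pings.values())

print("player host - worst:", max(PINGS_TO_PLAYER_HOST.values()), "ms, spread:", spread(PINGS_TO_PLAYER_HOST), "ms")
print("dedicated   - worst:", max(PINGS_TO_DEDICATED.values()), "ms, spread:", spread(PINGS_TO_DEDICATED), "ms")
```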
Moving on to AI calculations: there's a really good reason why we don't tend to stick the AI for single-player games on a server - especially when the game is time-sensitive - and why we don't tend to have complex, human-like bots in MP games... well, let me correct that: two reasons.
The first is latency (again). AI needs to update multiple times a second based on the entire game state it's interacting with (which will differ depending on how the programmer decided to deal with the omniscience problem) and behave according to whatever behavioural weighting and handicap measures it's laden with. That's a lot of state to keep track of - worse still with multiple AI opponents - and if you then send all of it over a network connection to a server with a potentially 100+ ms ping, you'll have a constant 200+ ms delay (higher once you add the processing on the server end) on things happening in your game. That's going to be really noticeable, and it's not going to work for games and mechanics that require immediate results.
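Here's the back-of-the-envelope version of that argument, using assumed figures for tick rate, one-way latency and server processing time - my own illustration, not anyone's real netcode:

```python
# Compare an AI that re-evaluates locally on the console with one whose
# decisions have to make a round trip to a cloud server.

LOCAL_TICK_MS = 100          # assumed: AI re-evaluates 10x per second
ONE_WAY_LATENCY_MS = 100     # assumed one-way trip to the cloud server
SERVER_PROCESSING_MS = 30    # assumed time to crunch the game state remotely

def reaction_delay_local():
    # Worst case: the event lands just after a tick, so you wait one full tick.
    return LOCAL_TICK_MS

def reaction_delay_cloud():
    # State goes up, gets processed, the decision comes back, then waits for a tick.
    return ONE_WAY_LATENCY_MS + SERVER_PROCESSING_MS + ONE_WAY_LATENCY_MS + LOCAL_TICK_MS

print(f"local AI worst-case reaction: {reaction_delay_local()} ms")   # 100 ms
print(f"cloud AI worst-case reaction: {reaction_delay_cloud()} ms")   # 330 ms
```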
I'm pretty sure you'd be able to run up behind an enemy and watch it sit there doing nothing before it starts to respond to you - by which time you've already killed it. So, given all the power in the new consoles, there's not much reason to shift this calculation into the cloud: you already get pretty good AI from mid-range PC hardware without trying, and very few people in their right minds would want to add another layer of latency to the player experience.
The second reason is that really good AI is hard and computationally expensive. There's a reason why many games still don't have good AI but do have almost photorealistic graphics: there are very real diminishing returns on the time and effort (and thus cost) of implementing a really smart AI. So instead, engineers/programmers make use of behavioural shorthands and cognitive cheats. These cost relatively little in processing power and also help standardise the actions of every class of enemy in a game, which results in a more fun and rewarding experience for the players... but that's a different topic. Many games let the AI know everything about its environment (including player health and position) but make it selectively amnesic when choosing what action to take, rather than giving it a cognitive function that analyses pretend audio/visual stimuli, for instance. There are times when this sort of partitioning breaks down, such as the infamous Far Cry sniper bots, the goldfish memory enemies display in many games or, more recently, The Last of Us - where the AI completely ignores any friendly NPC except the one you're controlling when you're being stealthy.
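To show the kind of cheat I mean, here's a toy sketch of an "omniscient but selectively amnesic" guard - my own illustration of the pattern, not code from any shipped game:

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    is_sneaking: bool = False

class GuardAI:
    HEARING_RANGE = 5.0   # assumed tuning values for the example
    SIGHT_RANGE = 20.0

    def __init__(self, guard: Entity):
        self.guard = guard
        self.last_known_player_pos = None  # the "memory" we deliberately limit

    def tick(self, player: Entity, player_in_line_of_sight: bool):
        # The engine hands us the player's exact position every frame (omniscience)...
        dist = math.hypot(player.x - self.guard.x, player.y - self.guard.y)

        # ...but the guard only gets to "know" it if a cheap perception check passes
        # (the selective amnesia that keeps stealth play fun).
        heard = dist < self.HEARING_RANGE and not player.is_sneaking
        seen = dist < self.SIGHT_RANGE and player_in_line_of_sight

        if heard or seen:
            self.last_known_player_pos = (player.x, player.y)

        if self.last_known_player_pos:
            return f"investigate {self.last_known_player_pos}"
        return "patrol"

guard = GuardAI(Entity(0, 0))
print(guard.tick(Entity(3, 0, is_sneaking=True), player_in_line_of_sight=False))   # patrol
print(guard.tick(Entity(3, 0, is_sneaking=False), player_in_line_of_sight=False))  # investigate (3, 0)
```

None of this needs a server farm - it's a handful of comparisons per enemy per frame.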
The point here is that you gain nothing, or very little, in real terms from shifting AI calculations into the cloud. The smoke-and-mirrors act developers use to make games fun and interesting doesn't require all that much processing power anyway and, as I pointed out, developers probably aren't going to start spending years building some really complex AI when the games just don't require it. It's a waste of energy.
Physics and lighting calculations suffer from the same problems as shifting AI into the cloud - no one wants to watch objects in their game hang in mid-air because of a bit of lag while the server decides how they're going to move and gets round to updating their velocities and accelerations. Similarly, lighting techniques already use plenty of shortcuts that cut down on the processing power and storage they require. Sending those calculations to the cloud won't help, and there's really no benefit to raytracing a scene on the cloud in "real time" and piping it over the internet rather than performing that work during production and storing the results in the game files on the disc or HDD.
That's not even mentioning that graphics card architecture is already highly parallelised to handle very fast geometry, lighting and physics calculations - so it may as well do the job it was designed to do, eh?
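To put a crude number on the "just bake it" point: here's a back-of-the-envelope comparison with figures I've picked purely for illustration, treating the streamed lighting as full uncompressed frames and assuming an arbitrary lightmap budget per level:

```python
# Streaming cloud-computed lighting every frame versus shipping it baked on disc.

WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3            # 24-bit colour, ignoring compression for simplicity
FPS = 60

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
mbits_per_second = bytes_per_frame * FPS * 8 / 1_000_000
print(f"uncompressed 1080p60 stream: ~{mbits_per_second:.0f} Mbit/s")  # roughly 3 Gbit/s

# Versus a set of baked lightmaps for a level, read once from the disc or HDD:
LIGHTMAP_COUNT = 200           # assumed number of 1024x1024 lightmaps for a level
LIGHTMAP_BYTES = 1024 * 1024 * 3
print(f"baked lightmaps for a level: ~{LIGHTMAP_COUNT * LIGHTMAP_BYTES / 1_000_000:.0f} MB, paid once")
```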
One potentially positive aspect of the cloud was highlighted by the developers of Forza 5: Drivatars. The premise is to take your driving data and analyse it to determine your habits, then apply those habits to a "ghost" in your acquaintances' games for them to race against. The problem with this has nothing to do with speed or latency, since it can be an asynchronous process, but with difficulty. I'm very skeptical of this feature because I think it will need a real person to analyse the data so that it isn't just some big horrible mess of a car throwing itself off the track all the time - or, if selection is done by an algorithm (genetic, maybe?), unbeatable because it's picked all the best parts of a player's race on a particular track.
Imagine one of your friends is awesome at the racing game - his Drivatar will absolutely trounce you every time you play against it and, unless you can turn the feature off (I predict you'll be able to, or Drivatars will live in their own little mode), that would frustrate the player and hinder their advancement in the singleplayer portion of the game. Because of that frustration, and the waste of time developing a very complex feature, I predict Drivatars are going to be a trumped-up version of "ghost" cars: saved by the player at the end of a race, uploaded, and then raced against by their friends. That's actually relatively easy to implement, as you'd apply the characteristics of the recorded lap to a normal AI (which is how current racing AIs are constructed) so it would know what to do when it's shoved out of position or whatever.
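Something like this is what I have in mind - entirely hypothetical structures and numbers, not Forza's actual Drivatar system - where a recorded lap is boiled down to a handful of tuning knobs for the stock racing AI:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class LapSample:
    corner_entry_speed: float   # km/h going into a corner
    braking_point: float        # metres before the apex that braking started
    line_error: float           # metres off the ideal racing line

@dataclass
class DriverProfile:
    aggression: float           # scales corner entry speed for the stock AI
    braking_distance: float     # how early the AI brakes
    sloppiness: float           # how far off the ideal line it wanders

def build_profile(lap: list[LapSample]) -> DriverProfile:
    # Reduce a recorded lap to parameters for the existing AI, rather than
    # replaying the lap verbatim (which falls apart the moment the "ghost"
    # gets shoved out of position).
    return DriverProfile(
        aggression=mean(s.corner_entry_speed for s in lap) / 100.0,
        braking_distance=mean(s.braking_point for s in lap),
        sloppiness=mean(s.line_error for s in lap),
    )

# The profile is tiny, so it can be uploaded and shared asynchronously - which
# is why latency never enters into it.
recorded_lap = [LapSample(118, 85, 0.6), LapSample(132, 70, 1.1), LapSample(104, 95, 0.3)]
print(build_profile(recorded_lap))
```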
So why do I think, skeptical as I am, that this is potentially good? Well, for one, the developers are getting feedback datasets on real people's driving, and this can feed into updates to the game's (or future games') AI to produce better and more realistic opponents. Imagine the thrill of similarly skilled people racing against each other in an MP game, but applied to an SP game. This is, IMO, very likely, and it's more exciting for the future development of the medium than the present and near-future "cloud" applications.