The humans symbolise the functions, the jellyfish symbolise quantum entanglement... per Scott Webb
There has been a lot of philosophical talk in the last few years about our place in the universe and whether what we experience is real – you can even drag determinism and consciousness into the debate to round everything off! Of course, these discussions have been going on for centuries as the idea fades into and out of popularity; the farther back you go, though, the more limited the language for describing these ideas becomes. René Descartes didn’t understand anything about computers when he was thinking about the nature of reality and consciousness… but the issues, to my mind, are all intertwined.
One version of the simulation idea is that there are layers of simulation to account for: the top level (if you can even describe or envision such an absurdity – if you subscribe to the idea of multiverses, the quanta in our complete universe, assumed to extend beyond the observable universe, have no connection to anything outside it) hosts multitudes of simulations, each of which itself hosts multitudes of sub-simulations, and so on ad infinitum.
Sean Carroll postulates that at each lower level the complexity of the next sub-level must decrease, because it is impossible to create more than you are able to utilise: you can’t create a five-dimensional space inside a four-dimensional space, and you can’t use the entire energy of the universe to make a universe that requires more energy to operate.
You are fundamentally limited in your operation of the simulation.
It then follows that there must, at some point, be a being/civilisation that cannot harness the complexity required to create another simulation with enough complexity to create any further simulations. That makes complete sense to me. Carroll's logical conclusion is that, if the argument held, we would most probably be in this lowest sub-simulation; but we can make simulations (maybe not a universe up to this point in time, but we're getting closer year on year), so the premise must be false and we are probably not in a simulation. I tend towards a more attacking stance on the logic because this argument isn't as airtight as I would like...
So, according to this line of thinking, there is an upper limit on the number of simulation tiers in the hierarchy...
However, this then makes the opposite true as well: no civilisation can create a simulation within which a further sub-simulation can be created, because the complexity of the sub-simulation must also be simulated by the super-simulation, leading to a cascade of complexity requirements that reaches infinity. Since, in any simulation, time and thus causation is completely reversible, all states of the simulation from beginning to end (if it has such a thing) can exist at the same time; thus all simulations and their complexity requirements exist in the same instant, and any simulation would immediately cease to exist as it would effectively ‘crash’ through overloading its available complexity/resources.
In a simple computer analogy you could think of it like this: you take a computer (say a Windows 10 PC) and run a virtual machine on it of a Windows 7 PC (using tech from the launch period of the OS), and within that you run a VM of a Windows XP PC, and within that you run a VM of a Windows 95 PC, and inside that you run a VM of Win 3.1 with DOSSHELL…
Now, aside from that being absurd (and you probably can’t go any further back – I don’t think VMs existed that far back!), your modern Win 10 PC has either died a horrible death or is close to it. Trying to run a normal programme whilst all these nested sub-environments are running on the system will be an exercise in futility and/or frustration. The system will inevitably grind to a halt at some point (i.e. at the nth level, a system will exceed the requirements of running n levels – what number ‘n’ will be is determined by the computational power, complexity and available resources of the system in question).
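To make that ‘nth level’ claim a bit more concrete, here is a toy back-of-the-envelope sketch (all the numbers are made up, and ‘resources’ is just an arbitrary unit) of a stack of nested simulations where each level keeps a fixed overhead for itself and hands a fraction of what is left to the level below:

```python
# Toy model of nested simulations, not a claim about real VMs: the host has a
# fixed budget of "resources" (arbitrary units), each level reserves a fixed
# overhead just to exist, and passes only a fraction of the remainder down to
# the level below. The depth at which the budget drops below the overhead is
# the 'n' where the stack grinds to a halt. All numbers are illustrative.

HOST_BUDGET = 1000.0       # resources available to the top-level machine
OVERHEAD = 50.0            # resources each level needs just to run itself
PASS_DOWN_FRACTION = 0.5   # fraction of the remainder handed to the next level

def max_nesting_depth(budget: float, overhead: float, fraction: float) -> int:
    """Count how many levels can run before the budget falls below the overhead."""
    depth = 0
    while budget >= overhead:
        depth += 1
        budget = (budget - overhead) * fraction  # what's left for the sub-simulation
    return depth

if __name__ == "__main__":
    n = max_nesting_depth(HOST_BUDGET, OVERHEAD, PASS_DOWN_FRACTION)
    print(f"With these made-up numbers, the stack fails at level {n + 1} "
          f"(only {n} nested levels can actually run).")
```

However you tune those made-up numbers, as long as each level needs something for itself the depth always bottoms out at some finite n.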
This can also be expressed in terms of time as a resource of simulation – each successive simulation has less complexity and thus fewer resources to run a further sub-simulation, resulting in a slower simulation of that sub-simulation in comparison to the host in question. To put that a bit more clearly: we can simulate a two-body system very accurately, but over a relatively long period of time, when in actual fact reality is able to produce the same ‘result’ in an instant.
Maybe I’m missing some fundamental idea here but I think this conclusively puts to rest the idea that we exist as or inside a simulation… It also limits the environments we can exist within to two options:
- We are the civilisation that can create simulations which are unable to create sub-simulations
- We are a sub-civilisation living in a simulation that is unable to create simulations
Since we are able to create simulations in seemingly *unlimited quantity, we must, logically, inhabit a non-simulated universe.
*Unlimited in the sense that literally every computer on Earth is creating a simulation as soon as it’s turned on, and we can create further and more complex simulations by running more complex and demanding programmes.
It's all in your head... per Scott Webb
However, up there in the title I also addressed another point that I think is related. While we may not be living in a hierarchy of simulations and while we may not be simulated in a classical sense, could the universe itself be a simulator?
I’ve previously addressed why I think the holographic storage of our information is an incorrect assumption, but the effects of relativity might suggest that space-time and entropic advancement behave like a calculation. Where space-time is densely curved, an observer within that region will experience time flowing just as it would for an observer in a less densely curved region. However, relativistically speaking, the two processes are not equal: to an observer outside of the system [read: universe], time progresses more slowly in densely curved regions of space-time. This applies even when an object is accelerated towards the speed of light, because the object itself causes increased deformation of space-time through this acceleration as it gains energy/mass (thus we have time dilation).
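For a sense of scale, these are just the standard textbook dilation factors – nothing new of mine, and the specific masses, radii and speeds plugged in below are purely illustrative:

```python
import math

# Constants (SI units)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def gravitational_factor(mass_kg: float, radius_m: float) -> float:
    """Schwarzschild factor sqrt(1 - 2GM/(r c^2)): local seconds per far-away second."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * C**2))

def velocity_factor(speed_m_s: float) -> float:
    """Special-relativistic factor sqrt(1 - v^2/c^2): moving clock ticks this much slower."""
    return math.sqrt(1.0 - (speed_m_s / C) ** 2)

if __name__ == "__main__":
    # A clock at the surface of a neutron-star-like mass (illustrative values only)
    print(f"Near a dense mass:  {gravitational_factor(2.8e30, 1.2e4):.3f} s of local time per distant second")
    # A clock on Earth's surface, for comparison
    print(f"At Earth's surface: {gravitational_factor(5.97e24, 6.371e6):.12f} s per distant second")
    # A clock moving at 90% of light speed
    print(f"At 0.9c:            {velocity_factor(0.9 * C):.3f} s of ship time per stationary second")
```

The point, for this analogy, is simply that the ‘rate of calculation’ as seen from outside depends on how much mass/energy is concentrated in a region.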
So back to the computer/simulation analogy above: when we simulate a simple two-body environment from first principles our computers do okay. However, as you increase the number of bodies in the simulation, the complexity of the operations and calculations required to advance time becomes onerous, and thus the simulation runs at a slower pace. This is the view of the computer scientist standing at their terminal.
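A minimal sketch of that scaling, assuming nothing more than a naive pairwise force loop (the ‘physics’ here is deliberately crude and the body counts arbitrary – only the cost trend matters):

```python
import random
import time

def step_cost(n_bodies: int, steps: int = 50) -> float:
    """Wall-clock seconds spent advancing a naive O(n^2) pairwise 'simulation' by `steps` ticks."""
    # Random positions and masses; forces are computed but positions aren't even
    # updated, because only the cost of each tick matters for this illustration.
    pos = [(random.random(), random.random(), random.random()) for _ in range(n_bodies)]
    mass = [random.random() + 0.1 for _ in range(n_bodies)]
    start = time.perf_counter()
    for _ in range(steps):
        for i in range(n_bodies):
            ax = ay = az = 0.0
            for j in range(n_bodies):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                dz = pos[j][2] - pos[i][2]
                r2 = dx * dx + dy * dy + dz * dz + 1e-9  # softening avoids division by zero
                inv_r3 = mass[j] / (r2 ** 1.5)
                ax += dx * inv_r3
                ay += dy * inv_r3
                az += dz * inv_r3
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (2, 20, 200):
        print(f"{n:>3} bodies: {step_cost(n):.4f} s of our time for the same simulated interval")
```

The simulated interval is identical in each case; only the amount of our time needed to advance it grows, roughly with the square of the number of bodies.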
Back with those observers in different frames of reference: the observer in the two-body system will experience the flow of time at the same rate as the observer in the many-body system, even though, if both systems were running side by side, each would observe the other running at a different speed relative to their own environment.
In this way, space-time cannot be differentiated from simultaneously running simulations in direct contact with one another, able to transfer information between each other. The only variable that governs the simulation’s complexity is the amount of mass/energy in a region, and the only variable that informs the advancement of the simulation is the change in entropy.
You could then view singularities as areas where someone ‘divided by zero’ and got infinity. Which, in effect, you could argue is true, as space-time curves to infinity. Any term within the system that gets sent to the calculation/operator that ‘divides by zero’ is immediately removed from interaction with the surrounding calculations because it effectively ceases to exist – the structure that allows the transfer of information is severed at the event horizon of a black hole.**
What governs the transfer of information? What is the ledger or memory register that keeps track of this? Quantum entanglement would be a good candidate and the fact that we observe this phenomenon is, in my opinion, a good indication that this analogy works well.
Maybe this is the reason why mathematics works so well at describing the universe and its physical properties. Maybe that’s why maths exists at all! The energy of the universe is just information, and space-time is a complex equation that operates on that information based on its interactions. Entropy is the end result of these calculations, and the process of getting there results in the flow of time.
So while we may not be a simulation of a higher civilisation, we are a simulation in that there is no such thing as reality – the universe came from nothing and can return to nothing. We are a self-contained bubble that is self-consistent and of constant total energy. To an outside observer (if it would even be possible to observe something that you could not interact with – because that would require you to be able to put energy into or take it out of the system) the universe would be timeless and homogeneous but within the system we exist differently.
Mathematicians must be very pleased with themselves because, in a sense, they are closest to ‘god’.
Just don’t divide by zero…
**I'm just going to ignore conservation of information for the time being because I haven't quite worked out how that may work in this regime... other than quantum entanglement. As long as some matter exists outside of singularities and has interacted with the matter that has fallen in, the total information of the system is retained.