12 May 2015

Sci-fi tropes: To singularity or not to singularity...

So I've read a lot about the probable, possible and, at its most egregious, inevitable AI singularity. However, I just don't buy it.

I have come to the conclusion that the AI singularity can never come to pass. Note that I state, specifically, an AI singularity... Whether an intelligence singularity can and will occur is another matter entirely.


As for the AI singularity, let's perform a small logic test:

Programming: at its most basic, is the mathematical abstraction of human ideas. A calculation answers a question about a condition: if this, then that; if not this, then these, etc. etc. ad nauseam. This is our current, commonplace binary silicon logic. Newer forms of computing are similarly structured - even fuzzy logic via quantum computing, and neural networks built from memristors, require that initial calculated input and output... they're great at optimising a process, but beyond that they aren't revolutionary in the ideas behind their existence.
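
To make that concrete, here's a minimal sketch (Python, with a made-up example of my own) of what I mean by calculation-as-condition:

    # A minimal sketch of the "if this, then that" idea: every outcome is a
    # consequence the programmer anticipated and encoded in advance.
    def state_of_water(celsius):
        if celsius <= 0:
            return "solid"
        elif celsius < 100:
            return "liquid"
        else:
            return "gas"

    print(state_of_water(42))  # "liquid" - the machine discovered nothing here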

The AI singularity: is often portrayed as a computing unit that can self-improve to the point of self-awareness and then propagate those improvements, either to itself or to its successors, at an exponential rate.
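
As typically depicted, the mechanism is a loop like the toy caricature below (the numbers are invented; the point is the shape of the claim):

    # The singularity as depicted: each generation builds a better successor.
    capability = 1.0
    for generation in range(1, 11):
        capability *= 1.5  # the "self-improvement" step, taken entirely on faith
        print(f"generation {generation}: capability {capability:.2f}")
    # The loop is trivial to write; the improvement step is the part
    # nobody knows how to programme.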

This causes a problem.

The problem is this: how can we, humankind, create a programme or collection of programmes that are able to self-analyse and to know themselves, or everything else around them, entirely?

This would require that a programme, or collection of programmes, be able to understand new, unfamiliar concepts without any sort of logical input. An analogous situation would be a baby trying to understand the world. However, the big difference between a silicon chip (or an infinite group of them) and a brain is that the brain will programme itself, whereas any sort of computer requires that WE provide it with the programming, the context.

This is the conundrum of the AI singularity: in order to become a singularity event, the AI must already exhibit the traits of a singularity-level AI. In order to self-improve, it must fully understand itself, and that self in context with, well, whatever it is improving itself against or within. This is some "magic" level of bullshit reasoning that people who espouse the AI singularity conveniently subscribe to, whilst happily ignoring all the hard realities.

Thus, a singularity-level AI is an oxymoron, a paradox, and cannot exist in the real world.

In the same way that we cannot go into the past and kill any of our ancestors (let alone travel to the past at all!) and thus cause a paradox, this paradox stretches forever into the future - we would have to create a singularity-level AI in order to create the singularity. Human beings aren't even able to fully understand our own thought processes, or even our own world. How could we, in our current imperfect state and knowledge level, ever do that? It's impossible.

Can we mimic very intelligent AI? Sure! Watson is a very good example of an intelligent situational search engine. It's nothing more than that though. It's not truly intelligent. It could not intuit the reason why an apple falls if given the mathematical equations that determine the acceleration of the apple to the earth beneath it. Not even if it was given a camera and allowed to observe and review the footage an infinite number of times.
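
To illustrate what I mean by a situational search engine (a toy sketch of my own, not Watson's actual architecture - the corpus and scoring here are invented for the example):

    # A toy "situational search engine": it matches question words against
    # stored text and returns the best hit. It retrieves; it does not understand.
    corpus = [
        "objects accelerate toward the earth at roughly 9.8 m/s^2",
        "an apple falls from the tree when its stem weakens",
    ]

    def answer(question):
        words = set(question.lower().split())
        # Score each entry by crude word overlap - no model of the world involved.
        return max(corpus, key=lambda text: len(words & set(text.split())))

    print(answer("why does an apple fall to the earth"))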

Programming rules.

I'm sure that was a joke at some point during my growing-up years. Anyway, it's true. Programming requires rules. Rules require mathematical underpinnings. We have not discovered a way to make a mathematical equation that covers everything; it does not exist in our knowledge. Even if we had solved the 'theory of everything' that united quantum physics and relativity, we still wouldn't have an equation that covered every known concept... and why would it? Maths is a human construct, designed to make sense of given situations. Human minds may have given birth to this construct, but human minds do not run on it. In fact, we don't really know how our minds work, except in the most general terms and ideas.

Neurons, groups of neurons and reinforcing mechanisms! Oh, and hormones.

These things inspire us, but ultimately we cannot recreate them in the logically locked paradise/hell that is computing and programming. The simple end result is that a programme is constrained by what we tell it, and we cannot tell it of the concept of "infinity"; thus it cannot comprehend the infinite. But we can.
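
To be fair, a programme can hold a symbol for infinity - IEEE 754 floats define one - but all it can do is shuffle that symbol according to rules we wrote, which is rather my point:

    # A programme can store a token called "infinity" (IEEE 754 defines one)...
    inf = float("inf")
    print(inf > 10 ** 100)  # True - the comparison follows a hard-coded rule
    print(inf - inf)        # nan - step outside the rules and the token breaks down
    # ...but it only manipulates the token as instructed. There is no
    # comprehension of the infinite behind it.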

This is the great tragedy of the stereotypical sci-fi concept of the singularity: AI, and artificial constructs are not the singularity - we are. Humankind.
I'm sure the idea already exists somewhere in fiction - I don't imagine I'm so much more insightful than everyone else on the planet that the concept hasn't cropped up in some minor league of sci-fi shorts from 50 or 60+ years ago...

It's a pretty simple observation. We self-improve. We are capable of understanding ourselves and our environment... and each iteration, each improvement exponentially improves on the last. It's writ large in our history. I saw some sort of TED talk about progress and how a person from today sent back a hundred years would cause incredulity with our gadgets and whatnot. A person from a hundred years ago would need to be sent back 200 years. A person from 200 years ago would need to be sent back 500 years etc. etc.

Progress has been an exponential curve. We've taken our time getting to this point but man, are we taking off!

What's interesting about us, though, is that we aren't self-improving in the way the singularity has always been depicted. We have two intertwining mechanisms in play: the first is intelligence, the second is genetics/evolution (that's actually backwards in terms of the ultimate mechanism that got us to this place, but never mind ;) ).

First, our intelligence allows us to improve our environment and our chances to succeed. Our society is built around our intelligence, and society is the tool we used to self-improve. Many people have stated that we've 'stagnated' in an evolutionary sense since the mass migrations of however long ago - we no longer face such specific evolutionary or isolating pressures that allow us to diverge (unless we create them); we're (not really, but kinda) pretty homogeneous as a species. But that's fine, because we've created technology to improve ourselves beyond just our physicality.

Secondly, we have reached this point through evolution, our genetics. We do not control our genetics, and it's this factor that is commonly missing in singularity-type stories. Our genes self-improve (or, more accurately, go through selection pressures), but this is a separate process from our intelligence-based improvement in knowledge.

Let's take a break and read through these two articles, at the risk of furthering the writer's probably already-bloated sci-fi sensationalism:

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

In part two, the author argues that a semi-intelligent AI named Turry is able to bring about the simultaneous extinction of the whole of humankind and all organic life on Earth. The problem with this example is that it is SO overly simplistic that it makes itself impossible at the same time. Any programme must know what it can do. Anything outside of the programming will return an error and bring the programme, if we're lucky, back to its normal state... if we're not lucky, we're talking a hard crash.
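
In code terms (a deliberately crude sketch of my own, nothing from the article), the difference looks like this:

    # A programme only has the behaviours we defined for it.
    known_tasks = {"write_note": lambda: "note written"}

    def run_task(task):
        try:
            return known_tasks[task]()       # defined behaviour
        except KeyError:
            return "error: unknown task"     # if we're lucky: back to a normal state

    print(run_task("write_note"))             # works as programmed
    print(run_task("build_nanobot_factory"))  # undefined: a handled error
    # Without the except clause, the same call is an unhandled exception -
    # a hard crash. Either way, no new capability magically appears.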

This is the paradoxical oxymoron I was speaking about before. There is no description of biological manufacturing facilities in Turry's programming. There is no ability for Turry to learn anything outside of its programming, except that the author assumes, as a sci-fi starting point, that it is a self-improving AI.

What do I think? Intelligence can be self-creating, but it cannot be artificially created.

What makes humans intelligent? We don't know.

Worse still, the author makes the common sci-fi trope mistake regarding nanobots... but let's leave that for another post. If you're game...
