The A.I. Bogeyman argument...

In the past year many commentators have raised the spectre of A.I. (artificial intelligence) becoming so advanced that it no longer requires human oversight. That raises the possibility of an exponential increase in A.I. understanding, far beyond human capacity. In some versions of this story, humans are then subjugated, and perhaps enslaved, by this superior computerised intelligence.

To accomplish this, the robots or A.I.-bearing agents must become automatons, able to improve themselves through design, engineering, building and construction, and, one would presume, to replicate and advance as well.

So what is wrong with this story?

To my mind, if this were possible, then somewhere in the universe the A.I. threshold must already have been crossed. In that case, freed from the lifetime limitations of biological intelligence, these automaton forms would surely have begun spreading further and further outward in the quest for advancement, development, and the acquisition of more materials and energy. And yet we see no evidence of any such expansion.

Visions of the Matrix, or the Borg collective, come to mind. Maybe the universe is too vast to colonise in a reasonable time from a single locus, and that is why we haven't encountered signs of rogue A.I. yet. But what's to say the event hasn't occurred more than once, and that there isn't a source close to us?

I personally don't believe it can happen in the way envisioned. I can certainly see A.I. developing beyond human capability; I stopped playing and following chess around the time IBM's Deep Blue beat Kasparov. But despite their advancement, computers have yet to become self-aware. No chess computer, after all, actually knows it is playing chess.

I guess, like most things, time will tell.