
A Brief Blip

A while back I listened to a podcast interview with Swedish philosopher Nick Bostrom. I have commented before on his intriguing idea that we might actually be living in a computer simulation rather than a real universe (in the post “Not Crazy Enough”, 2012-11-06), and while his latest musings aren’t quite as far-reaching, they are still really interesting.

In this interview he talked about the probable outcome of future advances in artificial intelligence and how those advances are likely to lead to disaster for humans. The idea that creating a super-intelligence (one significantly beyond human abilities) would be the last invention ever required is an old and well-known one in science fiction. Once an intelligence capable of inventing further, more advanced intelligences is created, the situation rapidly escalates out of control, as machine intelligence evolves far faster than biology ever could.

But when will artificial intelligence reach this point, if ever? According to Bostrom, a survey of experts gave a median answer of 2045, but it should be noted that there was a large spread in the answers, so this is far from certain. One major question affecting the answer is: does existing technology scale, or do we need something fundamentally different? Many current efforts in AI involve simulating a brain in software on a digital computer. This may not be the right approach, and a new type of (analog) thinking machine might be required instead. If that is true, then the 2045 timeframe is probably too optimistic – or should that be pessimistic?

But surely the point is not if but when this will happen. At some point, by whatever means (maybe something totally unheard of at this time), a super-human intelligence will be created. So should AI researchers be considering the consequences of their research even now? Should safeguards be put in place to protect the creators from their creations?

This idea has been examined in science fiction for years, the best-known example being Asimov’s laws of robotics. The first (and most relevant) law states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This seems fine, but what does it really mean? Would a super-intelligence interpret this as meaning that all humans need to be (metaphorically, I hope) wrapped in cotton-wool and prevented from engaging in any possibly dangerous activity? Would an even better solution be to not allow new humans to be born, thereby removing any possible chance of harm? Who knows how a super-intelligence would think?

One objection to these doomsday predictions is that super-intelligences might not be given access to the real world: they might be computers rather than robots. But would this only delay the inevitable? How long would it take a super-intelligent computer to figure out a way to influence the real world?

Presumably it would have some interaction with the real world, through its human operators or through a network like the internet. And the “next big thing” on the internet will be “the internet of things”, where everything will be connected, making influencing the real world even easier. And if that doesn’t work, there are always spamming, hacking, denial of service attacks, and blackmail as possible methods of influence. So it surely wouldn’t be that difficult for something so smart to find a way to “take over the world”.

And maybe that’s why we don’t see signs of intelligence elsewhere in the universe through studies such as SETI. We might represent a tiny transitional period in the evolution of life and intelligence. Maybe a typical timeline is 3 billion years of primitive pre-life and unicellular life, then half a billion years of increasingly complex multicellular life, then a hundred thousand years of intelligence but without any real technology, then a few hundred years of more advanced technology, then synthetic life takes over for the rest of time.

Maybe a technological civilisation like ours is just a brief blip on the Universe’s vast timeline, and the chance of seeing that tiny period of evolution between non-technological life and synthetic life is very low. Maybe the next stage happens so quickly after the technology stage begins, and is so strange and unlike what we know, that we wouldn’t even know what to look for.

Or this all could be idle speculation and there might not be anything to worry about. Or maybe the whole universe is just a simulation anyway!
