Artificial Intelligence: should we or shouldn’t we?

Thought I'd put this in here.
 
With comments recently made by Stephen Hawking that creating an artificial intelligence could be the biggest and last mistake the human race makes, I'm curious - what does blitz think? :)
 

http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
 

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".
 
One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
 
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

I should probably start this thread by spelling intelligence correctly. How do I change the thread title?

 

eh, mods please spare me the embarrassment.

1 Like

I vote yes for sexbots!

Edit the first post then use the full editor.

Should we - no.

Nope

Meh, my mobile phone is already smarter than me.

This just makes me want to play through the Mass Effect series again.

1 Like

Surely self-awareness comes as a result of glands?  Nerve-endings, the sensation of pain and then avoidance of pain.

Call me a big fat idiot if you like, but surely you could write code from here 'til the end of time without it ever becoming self-aware?

And another thing, why did the machines use humans as batteries in the Matrix?  Wouldn't cows have been more efficient?  You would only need a VR field of grass and not have to worry about that whole pesky human resistance thing.

1 Like

And another thing, why did the machines use humans as batteries in the Matrix?  Wouldn't cows have been more efficient?  You would only need a VR field of grass and not have to worry about that whole pesky resistance thing.

 

Check out the mini-episodes.

There was a war.  We lost.

Cows don't enter into it.

Edit the first post then use the full editor.

cheers

Clearly no one has read any books or watched any TV on what might happen when robots become self-aware.

 

Nuclear war!

 

If that happens, I hope I find John Connor first.

Yes we should

 

 

 

* this should be a poll

Yes we should

 

 

 

* this should be a poll

done

Also, surely the moment of self-awareness, if it ever happens at all, will come when the system, whatever it is, is still pretty feeble from a tactical point of view.

When something programmed to choose between apple and orange suddenly out of the blue says, 'hang on, why not banana?' then it won't be too hard to shut it down.

 

Look, computers aren't my thing, so some clever people might have some good reasons why that won't be possible.

I just find it hard to believe that from the moment of self-awareness, as unlikely as that seems to me in the first place, some system will become an invincible genius.  More like a dribbling three-year-old going 'Mine.  Want sweetie.'

If robots start taking hallucinogens, then we’re in trouble.

Surely if electrical goods became sentient they would instantly realise the best way to end the human race would be to shut down.

They’ll use poisonous gases.

And they’ll poison our ■■■■■.

1 Like

Wim: the ability to 'feel pain' is not the definition of intelligence used for AI. The Turing Test, which is widely accepted as a way of demonstrating true artificial intelligence, says that if you were to hold a conversation with it, you would not be aware that it is a robot. After all, pretty much everything we do can be broken down into a decision tree or matrix; it's just that our wetware operates about 40 times faster, and about 100 times more in parallel, than the fastest parallel computers.
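To illustrate the decision-tree point, here's a rough toy sketch in Python (purely hypothetical, nothing like a real AI) of a "conversation" reduced to a tree of canned yes/no rules - the kind of thing a very dumb chatbot might use:

```python
# Toy sketch only: a conversation handled by a fixed decision tree of rules.
# The phrases and replies are made up for the example.

def reply(message: str) -> str:
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How are you today?"
    elif "weather" in text:
        return "I hear it's lovely outside."
    elif "?" in text:
        return "Good question. What do you think?"
    else:
        return "Tell me more."

if __name__ == "__main__":
    for msg in ["Hi there", "What's the weather like?", "Robots are scary"]:
        print(msg, "->", reply(msg))
```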

 

Intelligent Agents can already 'learn' - update their behaviour and rules based on inputs and outputs - the same as humans can. So once we have the processing power and speed, it isn't inconceivable that a computer running the right software could learn like a human, as fast as a human, and make decisions based on those learnings, and then you would have something approximating intelligence.
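As a rough illustration of what 'learning from inputs and outputs' can mean in the simplest case (a toy sketch only, not how any real system is built): an agent that keeps a score for each action and nudges those scores from feedback, so its choices change with experience. The fruit names and reward are made up for the example - and yes, it ends up preferring banana.

```python
import random

# Toy "learning agent" sketch (illustrative only): it keeps a running
# estimate of how well each action has worked and updates those estimates
# from feedback, so its behaviour changes with experience.

class LearningAgent:
    def __init__(self, actions, exploration=0.1, learning_rate=0.2):
        self.values = {a: 0.0 for a in actions}   # learned estimate per action
        self.exploration = exploration            # chance of trying something new
        self.learning_rate = learning_rate        # how fast estimates update

    def choose(self):
        # Mostly pick the action with the best estimate, occasionally explore.
        if random.random() < self.exploration:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the stored estimate toward the observed outcome.
        self.values[action] += self.learning_rate * (reward - self.values[action])

if __name__ == "__main__":
    agent = LearningAgent(["apple", "orange", "banana"])
    for _ in range(200):
        fruit = agent.choose()
        reward = 1.0 if fruit == "banana" else 0.0   # pretend banana is best
        agent.learn(fruit, reward)
    print(agent.values)  # the banana estimate should end up highest
```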

 

The next thing will be to see if AI 'jumps' once it's smart and powerful enough: whether it starts to want to learn of its own free will, and wants to create. Or dream.

 

Also: if 'pain' is the interpretation of electrical signals, then for an AI, 'pain' would be felt as a 'lack' of signals - when a processor is disconnected - or as too much signal.