Artificial Intelligence: Should we or shouldn't we?


#21

I would like humanity to learn to use its own intelligence before creating another.


#22

We may not have a choice. It may create itself, unbidden.


#23

There is a distinct lack of intelligence around here on match days, artificial or otherwise. 

Memo to head: make sure brain is in gear before engaging mouth. 


#24

Wim: the ability to 'feel pain' is not the definition of intelligence used for AI. The Turing Test, which is widely accepted as a way of demonstrating true artificial intelligence, says that if you were to hold a conversation with it, you would not be able to tell that it is a robot. After all, pretty much everything we do can be broken down into a decision tree or matrix; it's just that our wetware operates about 40 times faster and 100 times more in parallel than the fastest parallel computers. 
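The "decision tree" idea above can be made concrete with a toy sketch. The umbrella rules below are invented purely for illustration, not taken from anything in the thread:

```python
# Toy decision tree: an inner node is a (question, yes_branch, no_branch)
# tuple; a leaf is just the answer string. The rules are made up.
def decide(tree, facts):
    """Walk the tree, answering each yes/no question from `facts`."""
    while isinstance(tree, tuple):
        question, yes_branch, no_branch = tree
        tree = yes_branch if facts.get(question) else no_branch
    return tree

umbrella_tree = (
    "raining",
    ("driving", "no umbrella", "take umbrella"),       # raining: depends on driving
    ("forecast_rain", "take umbrella", "no umbrella"), # dry now: check forecast
)

print(decide(umbrella_tree, {"raining": True, "driving": False}))
# take umbrella
```

Any chain of yes/no decisions can be encoded this way; the question the thread is circling is whether that's all there is to it.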

 

Intelligent Agents can already 'learn' - updating their behaviour and rules based on the outcomes their outputs produce - much as humans do. So once we have the processing power and speed, it isn't inconceivable that a computer running the right software could learn like a human, as fast as a human, and make decisions based on those learnings, and then you will have something approximating intelligence. 
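That "update behaviour from outcomes" loop can be sketched in a few lines. This is a deliberately minimal, made-up example of an agent keeping running averages of how well each action has paid off, not a real agent architecture:

```python
# A minimal learning agent: try actions, observe rewards, and keep a running
# average of each action's payoff; later choices follow those averages.
actions = ["left", "right"]
value = {a: 0.0 for a in actions}   # learned estimate of each action's payoff
counts = {a: 0 for a in actions}

def reward(action):
    # Hidden payoff rule the agent has to discover by trial and error.
    return 1.0 if action == "right" else 0.2

for step in range(20):
    # Try both actions a couple of times first, then exploit the best estimate.
    a = actions[step % 2] if step < 4 else max(actions, key=value.get)
    r = reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]   # incremental running average

print(max(actions, key=value.get))  # right
```

Nothing told the agent that "right" was better; its rule for choosing simply drifted toward whatever the feedback favoured.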

 

The next thing will be to see whether AI 'jumps' once it's smart and powerful enough: whether it starts to want to learn of its own free will, and wants to create. Or dream.  

 

Also: if 'pain' is the interpretation of electrical signals, then for an AI 'pain' would be felt by the 'lack' of signals - when a processor is disconnected - or too much signal. 

 

Then the Turing test is silly.

Give a code enough options and it will look like individual thought.

 

The first test of AI is if it comes up with something it's not told to.

The second is if it acts on that sort of intuition in a meaningful way.


#25

 

Then the Turing test is silly.

Give a code enough options and it will look like individual thought.

 

The first test of AI is if it comes up with something it's not told to.

The second is if it acts on that sort of intuition in a meaningful way.

 

 

All human thought is just a chain of options and decisions, including intuition and invention.


#26

 

 

Then the Turing test is silly.

Give a code enough options and it will look like individual thought.

 

The first test of AI is if it comes up with something it's not told to.

The second is if it acts on that sort of intuition in a meaningful way.

 

 

All human thought is just a chain of options and decisions, including intuition and invention.

 

 

Yes?  And?


#27

"Give a code enough options and it will look like individual thought."

 

Yes, because that's all human thought is, when it's broken down. Options.

Once the hardware can match the speed of our brains, then we will see Computer Intelligence.

 

The question is: do we contain it, put safeties in place so it doesn't kill its maker?


#28

Did anyone else read that Stephen Hawking quote in the computer-simulated voice he uses?

 

I couldn't help myself.


#29

Personally, I don't think it can ever be true intelligence if it isn't genetically and biologically driven. Computer "wakes up", becomes "self-aware", whatever; a man ten kilometres away shuts down a transformer, and the computer "dies". How does it gain an awareness of its own survival requirements, and then also instantly know how to meet those requirements? I do not see how it will ever be anything, no matter how clever we make it, other than reflective of the abilities mankind chooses to grant it.

 

I'm not religious, but I suspect there is far more to "intelligence" than myriad electrical pathways.

 

Edit: not that I'm up to speed on any of this stuff (I can barely drive a PC, lol), so I'm fully prepared to be wrong.


#30

Quantum computers aren't that far away. Once we figure those out, processing power won't be an issue.


#31

How can it teach itself how to do things via trial and error?

 

You can provide a computer with a million options, but if it needs option 1,000,001, it won't work.

 

A human can eventually teach itself that extra option.
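The "option 1,000,001" objection can be shown literally: a lookup table fails outside what it was given, while a general rule does not. A contrived doubling example, purely for illustration:

```python
# A fixed table of pre-programmed options only covers the cases it was given.
doubles_table = {n: 2 * n for n in range(1_000_000)}   # a million "options"

def double_by_lookup(n):
    return doubles_table[n]        # raises KeyError for any unseen input

def double_by_rule(n):
    return 2 * n                   # a general rule covers option 1,000,001 too

print(double_by_rule(1_000_001))   # 2000002
try:
    double_by_lookup(1_000_001)
except KeyError:
    print("option 1,000,001 is not in the table")
```

Whether machine learning counts as acquiring the rule, or just a bigger table, is exactly the disagreement running through this thread.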


#32

I'm going to go with the others here: my knowledge of computers is limited to trial and error, and I have no idea why things work, other than that they do.

 

So my question is: if you don't teach a computer how to learn, or how to think on its own, could or would it learn to do so nonetheless?

 

That might seem to contradict the idea of making artificial intelligence, but still: if you don't program it to do something, can it learn by itself when, again, it's not taught how to?


#33

Computer says no


#34

Wim: the ability to 'feel pain' is not the definition of intelligence used for AI. The Turing Test, which is widely accepted as a way of demonstrating true artificial intelligence, says that if you were to hold a conversation with it, you would not be able to tell that it is a robot. After all, pretty much everything we do can be broken down into a decision tree or matrix; it's just that our wetware operates about 40 times faster and 100 times more in parallel than the fastest parallel computers. 
 
Intelligent Agents can already 'learn' - updating their behaviour and rules based on the outcomes their outputs produce - much as humans do. So once we have the processing power and speed, it isn't inconceivable that a computer running the right software could learn like a human, as fast as a human, and make decisions based on those learnings, and then you will have something approximating intelligence. 
 
The next thing will be to see whether AI 'jumps' once it's smart and powerful enough: whether it starts to want to learn of its own free will, and wants to create. Or dream.  
 
Also: if 'pain' is the interpretation of electrical signals, then for an AI 'pain' would be felt by the 'lack' of signals - when a processor is disconnected - or too much signal.

 
Then the Turing test is silly.
Give a code enough options and it will look like individual thought.
 
The first test of AI is if it comes up with something it's not told to.
The second is if it acts on that sort of intuition in a meaningful way.

Didn't we already get to that point in the 80s with Max Headroom?

#35

Doesn't matter whether we should or we shouldn't.  We will.

 

There has never been, and never will be, a technology that humanity has invented but decided not to build, no matter how bad the potential side-effects might be.


#36

Agreed.  Some of the ■■■■■ that Demtel used to sell was an abomination to humanity.  The Amazing Mouli was nearly the end of us all.


#37

 

 

Wim: the ability to 'feel pain' is not the definition of intelligence used for AI. The Turing Test, which is widely accepted as a way of demonstrating true artificial intelligence, says that if you were to hold a conversation with it, you would not be able to tell that it is a robot. After all, pretty much everything we do can be broken down into a decision tree or matrix; it's just that our wetware operates about 40 times faster and 100 times more in parallel than the fastest parallel computers. 
 
Intelligent Agents can already 'learn' - updating their behaviour and rules based on the outcomes their outputs produce - much as humans do. So once we have the processing power and speed, it isn't inconceivable that a computer running the right software could learn like a human, as fast as a human, and make decisions based on those learnings, and then you will have something approximating intelligence. 
 
The next thing will be to see whether AI 'jumps' once it's smart and powerful enough: whether it starts to want to learn of its own free will, and wants to create. Or dream.  
 
Also: if 'pain' is the interpretation of electrical signals, then for an AI 'pain' would be felt by the 'lack' of signals - when a processor is disconnected - or too much signal.

 
Then the Turing test is silly.
Give a code enough options and it will look like individual thought.
 
The first test of AI is if it comes up with something it's not told to.
The second is if it acts on that sort of intuition in a meaningful way.

Didn't we already get to that point in the 80s with Max Headroom?

 

 

IIRC Max Headroom was the computer consciousness of a human whose last sight was a clearance barrier of (something like) 1.8 metres.

The computer read this as viewer figures of 1.8 million and thought he was a star, hence his ridiculous confidence.


#38

The point is not that you create the decision tree; you create the infrastructure and algorithms for "it" to create its own. We can already do this to varying degrees of complexity, but it's still a WIP.

 

But we shouldn't confuse "intelligence" and "self awareness". Storage, processing power, and the right algorithms can (I believe) create "intelligence" - the ability to process inputs and determine outputs. And a sufficiently advanced algorithm WILL be able to create outputs that are not part of a pre-determined (through programming OR training) decision tree.

 

Self awareness is another kettle of fish, and I think Wimerra1 was on the right path. Intelligence can be created where one form of input (text, audio, visual - whatever) is used to determine outputs. Self awareness will need a whole lot more in the way of inputs, not just a single input type such as that used for the Turing Test.

 

We can already create systems that do not provide pre-determined output. Artificial neural networks can be "trained". True AI is not just about building a big/fast enough neural network. It's about knowing what to feed into that network, how to segment those inputs, interface between the segments, feedback loops, etc. Self-awareness is our brains providing their own input, and I haven't seen anything to indicate we are close to knowing how to do that yet (in a meaningful way).
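"Trained rather than programmed" can be seen in miniature with a single perceptron learning AND from examples. Nothing below encodes the AND rule directly, only an update applied whenever the output is wrong; it's a textbook toy, not a claim about real systems:

```python
# A minimal trainable "neural network": one neuron learning the AND function
# from examples, via the classic perceptron update rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, adjusted by training rather than set by hand
bias = 0.0
rate = 0.1       # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for epoch in range(20):                 # repeated passes over the training data
    for x, target in examples:
        error = target - predict(x)     # weights only move when the output
        w[0] += rate * error * x[0]     # is wrong; no rule is ever written in
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

The post's point stands, though: stacking more of these makes the network bigger and faster, but it doesn't by itself tell you what to feed in, how to segment inputs, or how to close the feedback loops.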

 

The basics are "simple", and we understand them. But I'm not convinced we are anywhere near creating self-aware intelligence that does not need external direction. The "singularity" guys (Ray Kurzweil, etc.) are, I think, a bit loopy. Intelligent, but not "smart", if you get me...

 

I should say here that I was a keen tracker of the Singularity movement a few years ago. It was interesting, even if I didn't buy into it. I'm a few years out of date though, and I was just an interested observer with some IT knowledge, not someone formally studying or researching. There is a lot of interesting stuff out there though.


#39

"Give a code enough options and it will look like individual thought."

 

Yes, because that's all human thought is, when it's broken down. Options.

Once the hardware can match the speed of our brains, then we will see Computer Intelligence.

 

The question is: do we contain it, put safeties in place so it doesn't kill its maker?

 

Yeah, but nah.

You need to tell a computer that a banana is a fruit in the first place before it can offer it as an alternative to apples and oranges.

Then it has to 'decide' whether a banana is the better option. How does it do that?

Never mind that while it may well know the technical definition of a fruit, it will never, can never, understand why a pumpkin and a watermelon are not in any way practically similar.

 

Can you even make a computer understand that a banana as we know it - a seedless, sterile plant - is edible at all, without telling it?

 

Humans can, because they have extelligence.

Computers don't, and never will.

And even if they do, we'll friggin' notice, really, really early.


#40

 

"Give a code enough options and it will look like individual thought."

 

Yes, because that's all human thought is, when it's broken down. Options.

Once the hardware can match the speed of our brains, then we will see Computer Intelligence.

 

The question is: do we contain it, put safeties in place so it doesn't kill its maker?

 

Yeah, but nah.

You need to tell a computer that a banana is a fruit in the first place before it can offer it as an alternative to apples and oranges.

Then it has to 'decide' whether a banana is the better option. How does it do that?

Never mind that while it may well know the technical definition of a fruit, it will never, can never, understand why a pumpkin and a watermelon are not in any way practically similar.

 

Can you even make a computer understand that a banana as we know it - a seedless, sterile plant - is edible at all, without telling it?

 

Humans can, because they have extelligence.

Computers don't, and never will.

And even if they do, we'll friggin' notice, really, really early.

 

At the moment a computer would suggest an alternative without understanding.

 

i.e. "Similar users to you also bought: a banana." 
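That "similar users also bought" suggestion really is just overlap counting, with no understanding of bananas anywhere. A minimal sketch with invented purchase data:

```python
# "Similar users also bought": recommend by overlap in purchase histories.
# The recommender never knows what any of these items actually are.
purchases = {
    "alice": {"apple", "orange", "banana"},
    "bob":   {"apple", "orange"},            # the user we recommend for
    "carol": {"apple", "banana"},
    "dave":  {"pumpkin", "watermelon"},
}

def recommend(user):
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)         # similarity = items in common
        for item in theirs - mine:           # candidate items the user lacks
            scores[item] = scores.get(item, 0) + overlap
    return max(scores, key=scores.get)

print(recommend("bob"))  # banana: suggested by overlap, not understanding
```

Dave's pumpkin and watermelon score zero here only because his history doesn't overlap with bob's, not because the system knows they aren't practical substitutes for fruit.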

 

I would have thought evolutionary principles might apply, and I'm not sure there is an advantage that an intelligent computer would have over a regular computer. Perhaps in warfare?