Artificial Intelligence: should we or shouldn’t we?

Just get me the super hot robot women, no intelligence required.

I'm more concerned about self-replicating nanobots.

As soon as they can self-replicate, evolution can take hold...

 

We have IVF; evolution and natural selection have already been taken out of the equation if you live in a wealthy country.

I am not convinced that artificial intelligence is possible to create. But another form of life could be created. Some humans are dangerous and powerful enough to be a kind of HAL figure - human history and literature are littered with them. We could genetically engineer a new species that is more intelligent and powerful than humans.

I think it is possible, and likely our destiny. I think it is in our DNA to keep pushing the boundaries, to keep discovering new things.

 

What is our consciousness? Can anyone answer that? No; it seems like our brains just run on a bunch of data and electrical signals. Maybe they'll figure out how to download the data from your brain, then input that data into a robot?

 

So fascinating to think about where we are headed and what our race will evolve into.

Y’all know a lot of these algorithms actually exist already?

Learning algorithms (the kind that figures out “this thing is a banana”), neural networks, problem solvers & path finders, image processing, software that does prediction/planning, polymorphic code, etc. - all of it is some level of AI (there's a toy sketch of the banana case just below).

All over the shop in various niches, but most of us would see things daily that have some AI behind them.

The law (internationally but mainly US) is a fair way behind technology in a lot of areas - this is one of the biggies.

Legal systems aren't anywhere near the pulse: code gets written and updated daily, courts talk in years, and legislation probably waits for test cases.

Miles behind the 8 ball.
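
To make the banana case concrete, here's a minimal sketch of about the simplest learning algorithm there is, a nearest-neighbour classifier. The two features (length, yellowness) and the handful of labelled examples are made up purely for illustration; real systems learn from raw pixels rather than hand-picked features.

```python
# Toy "is this a banana?" classifier: 1-nearest-neighbour on two
# made-up features (length in cm, yellowness 0-1). Purely illustrative.
import math

# Hand-labelled training examples: (length_cm, yellowness) -> label
training = [
    ((18.0, 0.9), "banana"),
    ((20.0, 0.8), "banana"),
    ((7.0, 0.1), "apple"),
    ((8.0, 0.2), "apple"),
]

def classify(sample):
    # Return the label of the closest training example (Euclidean distance).
    _, label = min(training, key=lambda pair: math.dist(pair[0], sample))
    return label

print(classify((19.0, 0.85)))  # -> banana
print(classify((7.5, 0.15)))   # -> apple
```

Note it only "knows" whatever labels you gave it up front, which is exactly the limitation the unsupervised stuff further down gets around.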

AI 2: Benevolent Dictator


Do you have to tell it what a banana is first?

And does the 'knowledge' extend beyond 'this is answer xtb97541a'?

Are you... Are you a pleasure model?



Do you have to tell it what a banana is first?
And does the 'knowledge' extend beyond 'this is answer xtb97541a'?
Recognising shared characteristics to figure out what sort of thing something is?
Absolutely.

When computer scientists at Google's mysterious X lab built a neural network of 16,000 computer processors with one billion connections and let it browse YouTube, it did what many web users might do — it began to look for cats.
The “brain” simulation was exposed to 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 different items, it began to recognize pictures of cats using a “deep learning” algorithm. This was despite being fed no information on distinguishing features that might help identify one.
Picking up on the most commonly occurring images featured on YouTube, the system achieved 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats.
“Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not,” the team says in its paper, Building high-level features using large scale unsupervised learning, which it will present at the International Conference on Machine Learning in Edinburgh, 26 June-1 July.
“The network is sensitive to high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained it to obtain 15.8 percent accuracy in recognizing 20,000 object categories, a leap of 70 percent relative improvement over the previous state-of-the-art [networks].”
The findings — which could be useful in the development of speech and image recognition software, including translation services — are remarkably similar to the “grandmother cell” theory that says certain human neurons are programmed to identify objects considered significant. The “grandmother” neuron is a hypothetical neuron that activates every time it experiences a significant sound or sight. The concept would explain how we learn to discriminate between and identify objects and words. It is the process of learning through repetition.

“We never told it during the training, 'This is a cat,'” Jeff Dean, the Google fellow who led the study, told the New York Times. “It basically invented the concept of a cat.”

“The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” added Andrew Ng, a computer scientist at Stanford University involved in the project. Ng has been developing algorithms for learning audio and visual data for several years at Stanford.
Since coming out to the public in 2011, the secretive Google X lab — thought to be located in the California Bay Area — has released research on the Internet of Things, a space elevator and autonomous driving.
Its latest venture, though not nearing the number of neurons in the human brain (thought to be over 80 billion), is one of the world's most advanced brain simulators. In 2009, IBM developed a brain simulator that replicated one billion human brain neurons connected by ten trillion synapses.
However, Google's latest offering appears to be the first to identify objects without hints and additional information. The network continued to correctly identify these objects even when they were distorted or placed on backgrounds designed to disorientate.
“So far, most [previous] algorithms have only succeeded in learning low-level features such as 'edge' or 'blob' detectors,” says the paper.
Ng remains skeptical, however, and says he does not believe they have hit on the perfect algorithm yet.
Nevertheless, Google considers it such an advance that the research has made the giant leap from the X lab to its main labs.

http://www.wired.com/2012/06/google-x-neural-network/
So it's probably lucky there aren't hundreds of millions of people carrying devices around run by that company, feeding it a hell of a lot of personal information about who they know, what they do, where they go etc. Extremely lucky. And I'm glad they're not developing self-steering cars because there's a whole range of ways that could go wrong.
Because there aren't a lot of laws about what can & can't be done by companies like that and *nobody* ever reads the terms and conditions.
Now, that's a fair way from full sci-fi just yet - it takes huge computing power and huge amounts of data. But that sort of stuff tends to get better over time.
Interesting titbit - the last computer designed wholly by people was the 386.
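
For a rough feel of what "let the data speak" means in practice (nothing like the scale of the Google system, which the paper describes as a very large sparse autoencoder), here's a toy autoencoder on synthetic data: it learns a compressed set of features from unlabeled inputs purely by trying to reconstruct them - no labels anywhere.

```python
# Minimal sketch of unsupervised feature learning: a tiny autoencoder
# trained to reconstruct its own (synthetic, unlabeled) inputs.
# Nothing like Google's system in scale - purely to show the idea.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 64))  # 200 fake "images" of 64 pixel intensities

n_in, n_hidden = 64, 16
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(500):
    H = sigmoid(X @ W1 + b1)       # hidden "features" - learned without labels
    X_hat = sigmoid(H @ W2 + b2)   # attempted reconstruction of the input
    err = X_hat - X

    # Backpropagate the reconstruction error (gradients up to a constant factor).
    dZ2 = err * X_hat * (1 - X_hat)
    dW2 = H.T @ dZ2 / len(X); db2 = dZ2.mean(axis=0)
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    dW1 = X.T @ dZ1 / len(X); db1 = dZ1.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("reconstruction MSE:", float((err ** 2).mean()))
```

The hidden layer ends up encoding whatever regularities are in the data; stack enough of these on enough data and you get features like "cat face" without ever saying the word cat - that's roughly the claim in the paper.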

Are you... Are you a pleasure model?

[Image: Kryten from Red Dwarf]

 

 


I don't mean to sound overly critical, but surely that 'invented the concept of a cat' quote is just silly?

The computer hasn't invented the concept of a cat, it's recognised something that comes up a lot, cat(ha!)egorises it and can tell you when it hits something else that fits that category.

It's a very good sorter.

I had a set of plastic coin trays at the bank that did a similar job.

That's nowhere near thought.

Or am I missing something?

I'd be interested to know if the computer would call this a cat.

 

[image]

 

Or this.

 

[image]

hey, I never said it was good, just that it was happening.

And governments need to think pretty carefully about what things are being done, before they’re completely sidelined by tech & data companies.
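
On the 'very good sorter' point - that's honestly not far off what's going on. A lot of it is clustering: group similar things together without being told what they are, then say which group a new thing falls into. A toy k-means sketch (made-up 2-D points, nothing like the real system) looks like this:

```python
# Toy k-means clustering: sort unlabeled points into k groups, then
# report which group a new point belongs to. Data is made up.
import numpy as np

rng = np.random.default_rng(1)

# 100 unlabeled points scattered around two hidden centres.
points = np.vstack([
    rng.normal([0.0, 0.0], 0.5, (50, 2)),
    rng.normal([5.0, 5.0], 0.5, (50, 2)),
])

k = 2
centres = points[rng.choice(len(points), size=k, replace=False)]

for _ in range(20):
    # Assign every point to its nearest centre...
    dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # ...then move each centre to the mean of its assigned points.
    centres = np.array([points[labels == i].mean(axis=0) for i in range(k)])

new_point = np.array([4.8, 5.2])
print("belongs to cluster", int(np.linalg.norm(centres - new_point, axis=1).argmin()))
```

It never 'understands' cats or anything else, but "it sorts stuff into groups it found by itself" is still a step up from the coin tray.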

 

Are you... Are you a pleasure model?

[Image: Kryten from Red Dwarf]

 

 

I believe you are after this image:

 

[Image: a GELF from Red Dwarf]

Do I get my own T800?

Yes, for Olivia Wilde in the Tron sequel.

I watched the original Robocop a month or two ago. Jesus Christ, it was awful.

My iPad has more intelligence than some who post on Blitz.

You know who I mean!
