AI researchers think "Rascals" can pass Turing test

Page 1 of 1 [ 8 posts ] 

singularitymadam
Sea Gull

Joined: 24 Aug 2007
Age: 37
Gender: Female
Posts: 213
Location: I live in a Mad Max movie. It's not as fun as it sounds.

13 Mar 2008, 5:29 pm

http://www.eetimes.com/showArticle.jhtm ... =206903246

R. Colin Johnson wrote:
PORTLAND, Ore. -- Passing the Turing test--the holy grail of artificial intelligence (AI), whereby a human conversing with a computer can't tell it's not human--may now be possible in a limited way with the world's fastest supercomputer (IBM's Blue Gene), according to AI experts at Rensselaer Polytechnic Institute. RPI is aiming to pass AI's final exam this fall, by pairing the most powerful university-based supercomputing system in the world with a new multimedia group designing a holodeck, a la Star Trek.

"We are building a knowledge base that corresponds to all of the relevant background for our synthetic character--where he went to school, what his family is like, and so on," said Selmer Bringsjord, head of Rensselaer's Cognitive Science Department and leader of the research project. "We want to engineer, from the start, a full-blown intelligent character and converse with him in an interactive environment like the holodeck from Star Trek."

Currently, Bringsjord is stocking his synthetic character with all sorts of facts, figures, family trivia and personal beliefs gleaned from what he calls his "full-time guinea pig," a graduate student who has agreed to bare all for his synthetic doppelganger. The synthetic character will be able to converse with other human-controlled avatars about his educational and family history, his personal pastimes, and even his feelings and beliefs.


While I think this is totally awesome, I just don't see how it can succeed, based upon its structure.

Quote:
Rascals is based on a core theorem proving engine that deduces results (proves theorems) about the world after pattern-matching its current situation against its knowledge base.


Does this mean it is stuck in the famous frame problem? I thought most cognitive scientists understood by now that the encyclopedic-knowledge-base theory of intelligence is bound to fail. I don't think having the fastest supercomputer in the world run it will make that much of a difference, but perhaps I'm biased because I prefer neural networks.
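For anyone curious what that architecture looks like in miniature, here's a toy sketch of a pattern-matching, forward-chaining knowledge base. This is my own illustration with made-up facts and rules, not RPI's actual Rascals code, and a real theorem prover is far more sophisticated, but the shape is the same: match patterns against stored facts, deduce new facts, repeat.

```python
# Toy knowledge-base engine: facts are tuples, rules fire when their
# premise patterns match, and deduction repeats until nothing changes.
# All facts and rules here are invented for illustration.

facts = {
    ("attended", "avatar", "RPI"),
    ("father_of", "joe", "avatar"),
}

# Each rule: (list of premise patterns, conclusion pattern).
# Variables are strings starting with "?".
rules = [
    ([("father_of", "?x", "?y")], ("parent_of", "?x", "?y")),
    ([("parent_of", "?x", "?y")], ("family_of", "?x", "?y")),
]

def match(pattern, fact, bindings):
    """Try to unify one pattern with one fact under current bindings."""
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bindings.get(p, f) != f:
                return None        # variable already bound to something else
            bindings[p] = f
        elif p != f:
            return None            # constant mismatch
    return bindings

def all_matches(premises, facts, bindings):
    """Yield every variable binding that satisfies all premise patterns."""
    if not premises:
        yield bindings
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        b = match(first, fact, bindings)
        if b is not None:
            yield from all_matches(rest, facts, b)

def forward_chain(facts, rules):
    """Apply all rules repeatedly until no new facts can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            new_facts = set()
            for binding in all_matches(premises, facts, {}):
                new = tuple(binding.get(term, term) for term in conclusion)
                if new not in facts:
                    new_facts.add(new)
            if new_facts:
                facts |= new_facts
                changed = True
    return facts

deduced = forward_chain(facts, rules)
print(("family_of", "joe", "avatar") in deduced)  # True
```

The frame problem bites exactly here: as soon as the world changes, every fact the rules didn't explicitly touch has to be assumed unchanged, and a knowledge base big enough to cover a whole life has an enormous number of such facts.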



Orwell
Veteran

Joined: 8 Aug 2007
Age: 34
Gender: Male
Posts: 12,518
Location: Room 101

13 Mar 2008, 9:28 pm

R. Colin Johnson wrote:
Bringsjord's research group recently passed a milestone by programming a synthetic character to understand a "false belief." For instance, to create a false belief you could hide a stuffed bear in a cabinet in front of a child and an adult, and then when the adult leaves the room, move the bear to a closet while the child is still watching. Here, the child should know that the adult now has a false belief--that the bear is still in the cabinet.

So the computer can pass the Sally/Anne Theory of Mind Test?
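For what it's worth, the mechanics of the test the article describes can be sketched in a few lines: track each observer's beliefs separately and update a belief only when that observer witnesses the event. This is my own toy illustration (the agent and location names are made up), not whatever Bringsjord's group actually implemented.

```python
# Toy model of the false-belief (Sally-Anne style) scenario: each
# observer's belief updates only on events they are present for, so the
# adult who leaves the room is left holding a stale belief.

def run_scenario():
    beliefs = {"child": None, "adult": None}  # where each agent thinks the bear is
    present = {"child", "adult"}              # who is currently in the room

    def event(location, witnesses):
        """Move the bear; only witnesses update their belief."""
        for agent in witnesses:
            beliefs[agent] = location

    event("cabinet", set(present))            # bear hidden; both see it
    present.discard("adult")                  # adult leaves the room
    event("closet", set(present))             # bear moved; only the child sees it
    return beliefs

beliefs = run_scenario()
print(beliefs["child"])  # closet  (true belief)
print(beliefs["adult"])  # cabinet (false belief)
```

A system "passes" if, asked where the adult will look, it answers from the adult's (false) belief rather than from the actual state of the world.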


_________________
WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH


singularitymadam
Sea Gull

Joined: 24 Aug 2007
Age: 37
Gender: Female
Posts: 213
Location: I live in a Mad Max movie. It's not as fun as it sounds.

14 Mar 2008, 1:10 am

Orwell wrote:
So the computer can pass the Sally/Anne Theory of Mind Test?


Apparently, yes, although not much information is given about its performance, so it's hard to say. It's also hard to admit that a machine can be smarter than me.



Orwell
Veteran

Joined: 8 Aug 2007
Age: 34
Gender: Male
Posts: 12,518
Location: Room 101

14 Mar 2008, 2:56 am

singularitymadam wrote:
Orwell wrote:
So the computer can pass the Sally/Anne Theory of Mind Test?


Apparently, yes, although not much information is given about its performance, so it's hard to say. It's also hard to admit that a machine can be smarter than me.

Yeah. If it's able to replicate normal human interaction, it'll have me beat. :oops:


_________________
WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH


Brainsforbreakfast
Pileated woodpecker

Joined: 4 Mar 2006
Age: 39
Gender: Male
Posts: 179

14 Mar 2008, 8:56 am

My main gripe with the Turing test is that it's about the presentation of an AI, not so much the process.

Some clever programming might make a program sound like a self-aware human being, but unless that program can learn by itself and build its own identity without having one directly programmed in, it isn't self-aware.

So an AI probably has to pass the Turing test to be self-aware, but passing it doesn't mean it is.

I might take an Artificial Intelligence course at university in 2009... It's one of my obsessions :p



jrknothead
Veteran

Joined: 3 Aug 2007
Gender: Male
Posts: 1,423

14 Mar 2008, 12:06 pm

A good way to foil programs designed to pass the Turing test is to use sarcasm... for example, the AI posts a fact, the human disagrees with the fact, and asks sarcastically, "Where did you go to school?"

The AI will invariably respond with factual information, i.e. the name of the school it's supposed to have attended, instead of recognising that the question was sarcastic in nature...
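That failure mode is easy to reproduce with a toy lookup-table bot (my own sketch; the knowledge base and the school name are hypothetical): it retrieves the canned fact no matter what the conversational context was.

```python
# Hypothetical sketch of the failure mode described above: a bot that
# pattern-matches questions against a canned knowledge base answers the
# sarcastic "Where did you go to school?" with a literal fact.

KNOWLEDGE_BASE = {
    "where did you go to school": "I attended Rensselaer Polytechnic Institute.",
}

def reply(utterance, previous_fact_was_disputed=False):
    key = utterance.lower().strip("?! .")
    if key in KNOWLEDGE_BASE:
        # A literal matcher ignores context entirely: the flag noting that
        # its last fact was just disputed never influences the lookup, so
        # the sarcastic question gets the same canned answer as a sincere one.
        return KNOWLEDGE_BASE[key]
    return "I don't know."

print(reply("Where did you go to school?", previous_fact_was_disputed=True))
# I attended Rensselaer Polytechnic Institute.
```

A human would read the disputed-fact context and hear the question as an insult; the lookup bot can't, because the context never reaches the matching step.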



ClosetAspy
Deinonychus

Joined: 16 Jan 2008
Age: 67
Gender: Female
Posts: 361

16 Mar 2008, 11:53 am

If a machine passes the Turing Test, does it mean that the machine is "intelligent" or does it mean that humans are easily fooled?

Personally, I do not think it is doing machines any favor to make them intelligent and sentient. I am serious. I think it would be a very cruel thing to do to them. After all, they are our slaves. Right now we can use them any way we want to precisely because they are unsentient and unfeeling. Even animals are given more consideration under the law. And if machines became sentient, and aware of their rights, wouldn't that create new issues? Maybe those working with AI ought to reread the first few chapters of the book of Genesis. If I remember correctly, God's creating an intelligent being after His own image didn't work out so well, it turned out those beings (us) had minds of their own and didn't go according to plan. And if God couldn't get it right, what makes us think we can? I think we should leave well enough alone in that area.



singularitymadam
Sea Gull

Joined: 24 Aug 2007
Age: 37
Gender: Female
Posts: 213
Location: I live in a Mad Max movie. It's not as fun as it sounds.

16 Mar 2008, 2:31 pm

Brainsforbreakfast wrote:
My main gripe with the Turing test is that it's about the presentation of an AI, not so much the process.

Some clever programming might make a program sound like a self-aware human being, but unless that program can learn by itself and build its own identity without having one directly programmed in, it isn't self-aware.

So an AI probably has to pass the Turing test to be self-aware, but passing it doesn't mean it is.


John Searle's Chinese Room argument asserts this same perspective. You may be interested in reading it.

ClosetAspy wrote:
Personally, I do not think it is doing machines any favor to make them intelligent and sentient. I am serious. I think it would be a very cruel thing to do to them.


In all honesty, AI researchers are so far from making a sentient machine that this moral dilemma does not even factor into their considerations. Currently, most researchers are replicating small processes of parts of the mammalian brain, or replicating brains of non-sentient creatures, like cockroaches. These efforts are, for some, a way of understanding our own brains through a process of reverse-engineering. Others would say these efforts are doomed to failure.

ClosetAspy wrote:
And if machines became sentient, and aware of their rights, wouldn't that create new issues?


What rights would a machine have? I am not mocking your question, but it really is almost impossible to answer at this point. Your question is based on the assumption that a sentient machine would also have free will; I do not think this is plausible. Westerners have this absurd horror of a robot uprising, of some kind of revolt against humanity from the machines we built. Why would anyone build a machine capable of such violence? These fears are based in science fiction, and this particular branch is too fantastic to become true. More of a "colonies on Venus" idea than artificial satellites. I don't mean to be rude, but it seems the people who are afraid of a robot revolution are seriously underestimating the intelligence of humans who can build artificial minds.

ClosetAspy wrote:
If a machine passes the Turing Test, does it mean that the machine is "intelligent" or does it mean that humans are easily fooled?


Any answer to this would be completely dependent upon your definition of intelligence.