AI researchers think "Rascals" can pass Turing test
singularitymadam
Sea Gull
Joined: 24 Aug 2007
Age: 37
Gender: Female
Posts: 213
Location: I live in a Mad Max movie. It's not as fun as it sounds.
http://www.eetimes.com/showArticle.jhtm ... =206903246
"We are building a knowledge base that corresponds to all of the relevant background for our synthetic character--where he went to school, what his family is like, and so on," said Selmer Bringsjord, head of Rensselaer's Cognitive Science Department and leader of the research project. "We want to engineer, from the start, a full-blown intelligent character and converse with him in an interactive environment like the holodeck from Star Trek."
Currently, Bringsjord is stocking his synthetic character with all sorts of facts, figures, family trivia and personal beliefs gleaned from what he calls his "full-time guinea pig," a graduate student who has agreed to bare all for his synthetic doppelganger. The synthetic character will be able to converse with other human-controlled avatars about his educational and family history, his personal pastimes, and even his feelings and beliefs.
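Just to picture what a knowledge-base-driven character amounts to at its simplest: a store of personal facts queried by pattern-matched questions. This is a toy sketch, not the actual RASCALS system (which uses formal logic), and every fact and pattern below is invented for illustration:

```python
# Toy sketch of a knowledge-base-driven synthetic character.
# All facts and question patterns here are invented; the real project
# builds a far richer, logic-based knowledge base.
import re

KNOWLEDGE_BASE = {
    "school": "Rensselaer Polytechnic Institute",
    "hobby": "playing chess",
    "hometown": "Troy, New York",
}

# Map question patterns to knowledge-base keys.
PATTERNS = [
    (re.compile(r"where did you go to school", re.I), "school"),
    (re.compile(r"what (do you do for fun|are your hobbies)", re.I), "hobby"),
    (re.compile(r"where (are you from|did you grow up)", re.I), "hometown"),
]

def answer(question: str) -> str:
    """Return the stored fact matching the question, or a fallback."""
    for pattern, key in PATTERNS:
        if pattern.search(question):
            return KNOWLEDGE_BASE[key]
    return "I'm not sure."

print(answer("Where did you go to school?"))
print(answer("What is the meaning of life?"))
```

The obvious weakness is visible even at this scale: every question the character can handle has to be anticipated in advance, which is exactly the worry raised below.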
While I think this is totally awesome, I just don't see how it can succeed, based upon its structure.
Does this mean it is stuck in the famous frame problem? I thought most cognitive scientists understood by now that the encyclopedic knowledge base theory of intelligence is bound to fail. I don't think having the fastest super-computer in the world run it will make that much of a difference, but perhaps I'm biased because I prefer neural networks.
So the computer can pass the Sally/Anne Theory of Mind Test?
_________________
WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH
singularitymadam
Apparently, yes. Although not much information is given about its performance... so it's hard to say. It's also hard to admit that a machine can be smarter than me.
Yeah. If it's able to replicate normal human interaction, it'll have me beat.
My main gripe with the Turing test is that it's about the presentation of an AI, not so much the process.
Some clever programming might make a program sound like a self-aware human being, but unless that program can learn by itself and build its own identity without one being directly programmed in, it isn't really self-aware.
So, an AI probably must pass the Turing test to qualify as self-aware, but passing it doesn't mean it is.
I might take an Artificial Intelligence course at university in 2009. It's one of my obsessions :p
A good way to foil programs designed to pass the Turing test is to use sarcasm. For example: the AI posts a fact, the human disagrees with it and asks sarcastically, "Where did you go to school?"
The AI will invariably respond with factual information, i.e. the name of the school it's supposed to have attended, instead of recognising that the question was sarcastic in nature.
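The failure mode described above is easy to demonstrate with a hypothetical pattern-matching bot: it keys only on the surface form of the question and ignores the conversational context that makes the question rhetorical. The bot, its one stored fact, and the dialogue are all made up for illustration:

```python
# Hypothetical sketch of the "sarcasm foil": a bot that answers from the
# question's surface form alone cannot tell a sincere question from a
# sarcastic one, because it never consults the conversation history.

def literal_bot(history, question):
    """Answer purely from the question text; `history` is ignored,
    which is exactly the bug being illustrated."""
    if "where did you go to school" in question.lower():
        return "I attended Rensselaer Polytechnic Institute."  # invented fact
    return "I don't know."

history = [
    ("bot", "The Great Wall of China is visible from the Moon."),
    ("human", "That's a myth, actually."),
]

# Given the disagreement above, the human's question is sarcastic,
# but the bot answers it literally all the same.
print(literal_bot(history, "Oh really? Where did you go to school?"))
```

A human reader infers from the preceding exchange that no factual answer is wanted; the bot would need some model of intent and context, not just a bigger fact table, to make that inference.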
If a machine passes the Turing Test, does it mean that the machine is "intelligent" or does it mean that humans are easily fooled?
Personally, I do not think we are doing machines any favor by making them intelligent and sentient. I am serious. I think it would be a very cruel thing to do to them. After all, they are our slaves. Right now we can use them any way we want precisely because they are unsentient and unfeeling. Even animals are given more consideration under the law. And if machines became sentient, and aware of their rights, wouldn't that create new issues? Maybe those working with AI ought to reread the first few chapters of the book of Genesis. If I remember correctly, God's creating an intelligent being in His own image didn't work out so well: it turned out those beings (us) had minds of their own and didn't go according to plan. And if God couldn't get it right, what makes us think we can? I think we should leave well enough alone in that area.
singularitymadam
Some clever programming might make a program sound like a self-aware human being, but unless that program can learn by itself and build its own identity without one being directly programmed in, it isn't really self-aware.
So, an AI probably must pass the Turing test to qualify as self-aware, but passing it doesn't mean it is.
John Searle's Chinese Room argument asserts this same perspective. You may be interested in reading it.
In all honesty, AI researchers are so far from making a sentient machine that this moral dilemma does not even factor into their considerations. Currently, most researchers are replicating small subsystems of the mammalian brain, or the brains of non-sentient creatures like cockroaches. For some, these efforts are a way of understanding our own brains through reverse-engineering. Others would say they are doomed to failure.
What rights would a machine have? I am not mocking your question, but it really is almost impossible to answer at this point. Your question is based on the assumption that a sentient machine would also have free will; I do not think this is plausible. Westerners have this absurd horror of a robot uprising, of some kind of revolt against humanity from the machines we built. Why would anyone build a machine capable of such violence? These fears are based in science fiction, and this particular branch is too fantastic to become true. More of a "colonies on Venus" idea than artificial satellites. I don't mean to be rude, but it seems the people who are afraid of a robot revolution are seriously underestimating the intelligence of humans who can build artificial minds.
Any answer to this would be completely dependent upon your definition of intelligence.