I've not-so-recently become extremely interested in AI, to the point where I've learnt to program them. I've also discovered that 'real' AIs are interesting to talk to, but I can't find any good ones. For anyone who knows what I'm talking about: most chatbot AIs are programmed in AIML, and while that can be interesting for a few hours, I'd prefer an AI with actual intelligence rather than learnt responses. Basically, does anyone know of any non-AIML AIs that I can download that just talk?
The bold portions are the major contentious issues in AI. Firstly, if you can define what "actual intelligence" is, you might win a Nobel Prize (seriously), if not a huge research grant at MIT or somewhere. Secondly, every intelligence, sapient or otherwise, is fundamentally a learnt response for all intents and purposes.
The iPhone voice-recognition app Siri works very much like a superficial construct of the human brain (machine learning). It has very few pre-programmed responses of its own; it is simply programmed to learn, just as humans are not born with any actual technical knowledge (well... that we know of... all babies seem equally dumb) but with the ability to learn it.
I give the legal profession 10 years, tops, before it has to start contending with whether Siri's descendants are sentient and should thus be granted rights.
I'm not after a Nobel Prize here, but I'll try to define part of what I mean by actual intelligence: the ability to form conclusions and opinions from given facts. For example, I can tell it that a government is abusing human rights, and it can form the conclusion that said government is bad. I know that's oversimplifying, and it certainly isn't everything required for AI, but it's something that I haven't found yet. I think Siri would actually be quite good at that, but it hasn't been programmed for social interaction. What I want is for an AI to form an opinion, store the opinion, and, when asked about it, not just state the opinion but expand on it, developing the opinion as it does so. Thus, the bad government could become a flawed government that does both good and bad things.
AIML doesn't allow the sort of higher processing that humans can do.
You're probably referring to ontology: the storing of concepts and the relations between them. Using your example:
Government is abusing human rights
Abusing human rights is bad
Therefore government is bad.
Which can be:
IF <government is abusing human rights>
THEN <government is bad>
ELSE <government is not bad>
The difference between the two is, as you've spotted, that the first is a mapping of concepts and the relationships between them, while the second is a representation of a logical relationship, which is overly simplistic.
The thing about ontology is that it is rather like recursively asking the question "why". Why is abusing human rights a bad thing? We don't question that premise; we universally take it to be true, and every opinion related to it is based upon this premise.
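Just to make the contrast concrete, here's a minimal sketch in Python of what storing concepts and relations might look like. The ConceptGraph class and the "does"/"is" relation names are entirely my own invention for illustration, not how any real ontology engine works:

```python
# A toy concept graph: facts stored as (subject, relation, object)
# triples. All names here (ConceptGraph, "does", "is") are invented
# for illustration; real ontology systems are far richer.
from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        # facts["government"]["does"] -> {"abusing human rights", ...}
        self.facts = defaultdict(lambda: defaultdict(set))

    def add_fact(self, subject, relation, obj):
        self.facts[subject][relation].add(obj)

    def opinion_of(self, subject):
        """Judge a subject by the valence of the things it does."""
        judgements = []
        for action in self.facts[subject]["does"]:
            for valence in self.facts[action]["is"]:
                judgements.append((valence, action))
        return judgements

g = ConceptGraph()
g.add_fact("government", "does", "abusing human rights")
g.add_fact("abusing human rights", "is", "bad")
g.add_fact("government", "does", "building hospitals")
g.add_fact("building hospitals", "is", "good")

for valence, action in g.opinion_of("government"):
    print(f"The government is {valence} because it is {action}.")
```

Unlike the hard-coded IF/THEN rule, adding the hospital fact turns "bad government" into the flawed government that does both good and bad things, with no reprogramming: the opinion falls out of the stored relations.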
I don't think Siri was meant to perform a very elaborate social-interaction function, but the technology to do that is already available to us. The good thing about neural networks, just like human beings, is that they get better with experience. Their speed of learning is limited by the limits of our technology (FLOPS) and our understanding of networks. And, of course, by the mechanism of learning in the first place.
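As a toy illustration of "getting better with experience", here is the textbook perceptron update rule in Python. Nothing in it comes from Siri or any real product; it's the smallest possible network, learning logical OR:

```python
# A single artificial neuron improving with experience: each labelled
# example nudges the weights a little, so predictions get better the
# more data it sees. This is the classic perceptron, nothing more.

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred      # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1    # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from four labelled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)
```

Every pass over the data costs a fixed number of floating-point operations, which is exactly why FLOPS put a ceiling on how fast such a system can learn.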
Yes, you've caught on to the notion that humans are capable of some kind of cognition that machines are not, but so far no human being has been able to articulate exactly what manner of cognition that is (you've alluded to this idea with the phrases "actual intelligence" and "higher processing"). Is it the ability to create art? There are already machines that can compose music and create paintings. Machines are certainly not lacking in the ability to compute: your pocket calculator can churn out pi to 32 decimals, easy. So what is it? Nobody really knows. AI research has been dedicated to achieving disjoint sub-tasks, excelling at many separate but very narrow areas at superhuman levels. For example, Deep Blue can beat the human world champion at chess, but it can't make a cup of tea. It probably can't even spell the word "chess".
Von Neumann famously said: if you can tell me exactly what it is that a machine cannot do, I will build you a machine that will do just that. His point being: if you can articulate it, you can algorithmize it, and then you can program it. If you can't articulate it, nothing follows.
Chatbots are currently programmed to mimic without any true grasp of meaning. You know how to process "government is abusing human rights = bad" because you know what a government is, you know what abusing means, you know what good and bad mean, you know what a human is, and you know how you feel about it. It's that kind of stuff that isn't getting programmed into AI systems, and to program a system to learn all of that means having it sit there absorbing data for possibly many years (just like a human absorbing input).
Here's a great example of two AI chatbots conversing with each other: Cleverbot vs Cleverbot
{Watch the video so you'll get the references below}
The thing that's most interesting (or annoying, to someone like me) is how these bots "learned" what they are parroting, because that is exactly what they are doing. They were turned on, given input from outside sources (read: users inputting lines of text), and then started building their own relational databases of how commonly sentences with like structure occur after other sentences.
Essentially, someone said to Cleverbot, "Hi," and since that was now the only input it had in its database (DB), it repeated "Hi" back to the user. The user probably then asked, "How are you?" and that got stored in the DB, so Cleverbot repeated it back. Eventually some end user responded, "I'm fine," which then got added to the DB as well. So now "Hi," "How are you?" and "I'm fine" are part of the DB, but Cleverbot has no actual concept of self or well-being, or even of social chit-chat for that matter. After many user-interaction sessions, Cleverbot's relational DB has determined that those three sentences are most likely to occur in sequence, so it uses them repeatedly.
When one Cleverbot says, "I'm not a robot. I'm a unicorn," you have to understand that it is parroting what a previous user typed as a response to a Cleverbot earlier stating something most likely along the lines of, "You are a robot." To top it off, "You are a robot" would have also come from a previous end user.
Some smart-aleck somewhere back in Cleverbot's past told Cleverbot, "You are a robot."
When Cleverbot used that line on someone else, that second smart-aleck responded with, "I'm not a robot. I'm a unicorn."
And now Cleverbot is using that line back on another Cleverbot.
It's just a parroting system. Now go back and play the video again, and for each line Cleverbot says, keep in mind that an end-user previously stated that line to Cleverbot as his or her response to whatever previous line Cleverbot used on the user. If the two Cleverbots continued talking to each other forever without any new input from outside users, they would never learn any new sentences. I'd lay money on the idea that they would eventually wind up in a verbal loop of some sort.
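For the curious, that mechanism fits in a few lines of Python. The Parrot class and its training pairs are invented by me for illustration; Cleverbot's real database is vastly larger, but the learn-by-parroting principle is the one described above:

```python
# A minimal sketch of the parroting mechanism: record which reply most
# often followed each line, with no concept of meaning whatsoever.
from collections import Counter, defaultdict

class Parrot:
    def __init__(self):
        # followups["Hi"] counts every reply users ever gave to "Hi".
        self.followups = defaultdict(Counter)

    def learn(self, prompt, reply):
        self.followups[prompt][reply] += 1

    def respond(self, prompt):
        if not self.followups[prompt]:
            return prompt  # never seen it: echo, like early Cleverbot
        return self.followups[prompt].most_common(1)[0][0]

bot = Parrot()
# "Training": prompt/reply pairs harvested from past user sessions.
bot.learn("Hi", "How are you?")
bot.learn("How are you?", "I'm fine")
bot.learn("I'm fine", "You are a robot")
bot.learn("You are a robot", "I'm not a robot. I'm a unicorn.")

# Two bots talking (they share one learned table here for brevity),
# with no new input arriving from outside users.
line = "Hi"
for turn in range(8):
    line = bot.respond(line)
    print(f"Bot {'A' if turn % 2 == 0 else 'B'}: {line}")
```

Run it and the bots recite the learned exchange until they reach a line with no stored follow-up, then echo that one line at each other forever: precisely the verbal loop I'd lay money on.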
Disclaimer: My degree is in AI research and development from the perspective of cognitive psychology and neural networks. Analyzing robot interactions and spearheading the programming of more "real" AIs is what I would like to be doing for a living. Anyone out there hiring for someone like me?
On the question of granting rights: Dr Martine Rothblatt, the genius who created satellite radio, is actually a lawyer and is drafting rights legislation regarding cyborgs and AI, and looking at robotics as well. Ahead of her time, as usual.
https://en.wikipedia.org/wiki/Martine_Rothblatt

Ben Goertzel recently took over Hugo de Garis's PRC AI project. He also has a sick number of other projects.
http://wp.goertzel.org/
Makes me want to work on developing robots equipped with strong AI (sentience) who won't need any sort of rights, because they will be smarter, stronger, and better armed than humans!

Does a chess algorithm really count as an AI?
Edit: I realize that might be a stupid question. At least without a proper definition of intelligence. I doubt any chess bot would be able to pass a Turing test though. ^^
More importantly they will evolve their own programming and even their design at a supercomputer pace. The speed of the iterations will be beyond human comprehension.
Then what?
Then the computer will send a machine back in time to kill the leader of the human resistance before the war has even begun.
What you're referring to with chess programs is called 'narrow' AI, and it exists in many forms. It is unknown whether 'strong' AI can be created that has higher-than-human general intelligence.

I would regard any computer or program that cannot be turned off or halted by a genuine human being as a risk to public safety.
Would your description of a baby learning a language be radically different from this one?
_________________
Be yourself!
(I believe Deep Blue didn't really beat Kasparov alone, but with the help of human grandmasters.)
Chess programs are created to play chess, not to talk.
Machines evolving their own programming and design sounds great! I believe they will develop an intelligence qualitatively different from ours, something that, at the beginning, we will not recognize as intelligence.
_________________
Be yourself!
I have longed for AI to make a good human-like robot since 1960 or so. It sounded so exciting, and they were so optimistic about how short a time it would take. Now, some 50 years later and at the end of my life, they have gotten somewhere, but it does not look promising at all. But I am a pessimist; the optimists trust it will happen within, say, 20 years or so. Is 200 years more realistic? It depends on what one asks to happen. I mean real humanoids, like the "skin jobs" in Battlestar Galactica.