
Lynners
Raven

Joined: 30 Jan 2012
Age: 40
Gender: Female
Posts: 117

26 Mar 2012, 3:46 pm

Google is my AI :)



NicoleG
Veteran

Joined: 25 Dec 2011
Gender: Female
Posts: 667
Location: Texas

06 Apr 2012, 11:43 pm

Neuromancer wrote:
NicoleG wrote:
The thing that's most interesting (or annoying to someone like me) is how these bots "learned" what they are parroting, because that is exactly what they are doing. They were turned on, given input from outside sources (read: users inputting lines of text), and then started building their own relational databases of how commonly sentences with a like structure occur after other sentences.

Would your description of a baby learning a language be radically different from this one?


Not radically, but definitely consequentially.

The brain is nothing more than a pattern recognition system. For example, we learn what emotions are because we recognize the patterns that occur when certain sympathetic and parasympathetic nervous reactions take place and are detected in combination with other simultaneous inputs, such as key sounds (which form words like "anger" or "sad") and specific events. Babies aren't born knowing laughter and happiness; they learn them over time.

In this respect, every input reaching the brain is constantly being monitored for recognizable patterns, and it's important to note that ALL the inputs are coming in ALL the time. We don't get one line of chat text and then a pause long enough to process it before issuing a response and receiving the next line of input. I'm pretty sure there are quite a lot of people on this forum who wish that were the case. Instead, we get ten lines of text, plus sounds, facial expressions, pats on the back, bright lights, and heightened emotions that don't make sense, all while standing in the grocery store checkout lane.

So, from a VERY general way of looking at things, Cleverbot IS recognizing patterns, accepting new inputs, and regurgitating outputs. But when you look at the specifics, Cleverbot is doing absolutely nothing like what the human mind does on a regular basis. Is Cleverbot learning? Yes. Is Cleverbot learning meaning and understanding? No. Will I ever call Cleverbot intelligent? No, not with its current programming.
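
To make the "parroting" concrete, here is a toy sketch in Python. Everything in it is invented for illustration (Cleverbot's real code is surely far more elaborate): the bot only tallies which replies it has seen follow which lines of text, then spits back the most common one.

from collections import defaultdict, Counter

class ParrotBot:
    def __init__(self):
        # one tally per line of text: which replies followed it, and how often
        self.follow_ups = defaultdict(Counter)

    def observe(self, line, reply):
        # "learning" is just counting co-occurrences harvested from users
        self.follow_ups[line.lower()][reply] += 1

    def respond(self, line):
        # regurgitate the most common follow-up; no meaning involved anywhere
        counts = self.follow_ups.get(line.lower())
        if not counts:
            return "Tell me more."  # invented fallback for unseen input
        return counts.most_common(1)[0][0]

bot = ParrotBot()
bot.observe("How are you?", "I'm fine, thanks.")
bot.observe("How are you?", "I'm fine, thanks.")
bot.observe("How are you?", "Terrible.")
print(bot.respond("How are you?"))  # prints: I'm fine, thanks.

That is "learning" in the statistical sense and nothing more, which is exactly the distinction I'm drawing.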



NicoleG
Veteran

Joined: 25 Dec 2011
Gender: Female
Posts: 667
Location: Texas

06 Apr 2012, 11:56 pm

Also, I'll add that Cleverbot doesn't appear to be attempting to elicit any sort of predictable response from the user. It's simply programmed to "continue chatting." Humans, on the other hand, rely on predictions within the environment, and therefore our interactions with the environment continually adjust to allow for some combination of better predictions and better results, no matter how difficult making those adjustments may be. Even my most "useless" OCD action became an action performed by me because it somehow provided a more stable predictive environment for me. Note that I call it stable, rather than functional.
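
To put that contrast in code, here is another invented-for-illustration sketch in Python. Neither function reflects any real bot's internals; one just keeps the chat going, while the other picks whichever action it predicts will have the most stable outcome.

import random

def continue_chatting(candidate_replies):
    # Cleverbot-style, as described above: any reply will do,
    # so long as the conversation keeps going
    return random.choice(candidate_replies)

def most_predictable(actions):
    # human-style, per the post: pick whichever action makes the
    # environment most predictable (stable, not necessarily functional)
    return max(actions, key=actions.get)

replies = ["Why do you say that?", "I like cheese.", "Do you dream?"]
print(continue_chatting(replies))  # any of the three; there is no goal

# invented predictability estimates for two available actions
actions = {"perform the ritual": 0.95, "try something new": 0.40}
print(most_predictable(actions))   # prints: perform the ritual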