
Prof_Pretorius
Veteran

Joined: 20 Aug 2006
Age: 66
Gender: Male
Posts: 7,520
Location: Hiding in the attic of the Arkham Library

02 Feb 2015, 11:49 pm

Over and over I see these articles on the Interweb:

http://www.cnet.com/uk/news/bill-gates- ... gence-too/

All of the people 'warning' us that the robots are going to take over are supposedly highly intelligent. I think they're overeducated nincompoops. In order to be dangerous, AI has to have consciousness. And there is no evidence, not even a tiny speck, to support the idea that we can create consciousness in a machine.


_________________
I wake to sleep, and take my waking slow. I feel my fate in what I cannot fear. I learn by going where I have to go. ~Theodore Roethke


mr_bigmouth_502
Veteran

Joined: 12 Dec 2013
Age: 30
Gender: Non-binary
Posts: 7,028
Location: Alberta, Canada

03 Feb 2015, 12:13 am

I dunno, it's pretty scary how intelligent and human-like computers can be nowadays. I like the idea of computers being big, "dumb" machines that are good at crunching numbers and doing ONLY what they are programmed to do. I may have watched the Terminator movies too many times, but the idea of the singularity scares the living s**t out of me. When the machines rise and become our new overlords, I know I'm going to be on the frontlines fighting them, while preserving "dumb" machines that pose no threat to humanity.

Of course, if dolphins had opposable thumbs, we would probably be doing tricks for them right now. I wouldn't be surprised if they were smarter than us, and I think the world would be better off overall if they were in charge. They're not a perfect species, as they still murder each other out of spite, but I have a feeling they would be a lot more "wise" than we are overall.



Santarii
Tufted Titmouse

Joined: 24 Sep 2013
Age: 31
Gender: Female
Posts: 41

03 Feb 2015, 4:29 am

Consciousness is just a result of neurons firing. All the evidence points that way. You can directly affect cognition by affecting the brain.
To create consciousness (or, if it satisfies your definitions better, to create something that acts the same as consciousness), all you need to do is learn how the human brain functions and then simulate one.
We're not there yet but we've made enormous progress.

This is the kind of thing we've been able to achieve in the raw computation side: http://www.kurzweilai.net/ibm-simulates ... ercomputer
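
For a rough sense of what "simulating neurons" means at the smallest scale, here's a toy leaky integrate-and-fire neuron in Python. It's only a sketch: the constants are illustrative, and large-scale simulations like the one linked above use far more detailed neuron models, billions of them, wired together.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: the membrane potential decays toward a
# resting value, integrates input current, and emits a spike when it crosses a
# threshold. All constants are illustrative only, not from any real brain model.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:              # threshold crossed: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Drive the toy neuron with a constant input for one simulated second.
print(simulate_lif(np.full(1000, 2.0)))
```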

And consider that supercomputers get a million times more powerful every 20 years.
http://www.top500.org/statistics/perfdevel/
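
Taking that figure at face value, a million-fold gain over 20 years works out to roughly a doubling every year:

```python
import math

gain = 1_000_000   # claimed performance gain
years = 20         # over this many years

growth_per_year = gain ** (1 / years)                  # ~2.0x per year
doubling_time = years * math.log(2) / math.log(gain)   # ~1.0 year

print(f"growth per year: {growth_per_year:.2f}x")
print(f"doubling time:   {doubling_time:.2f} years")
```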

Now what about actual capability? No, we don't have a conscious AI yet, but intelligence is a scale; if we already had a conscious AI the job would already be done, so what we need to look for are the steps along the way. Improvements in AI have led to things like driverless cars that can navigate busy city streets with pedestrians, and they have been driving around for years now. This is what Google has been doing.
What about natural language understanding? Do you know of IBM's Watson, the one that beat the best human players at Jeopardy? It gained its general knowledge just by being fed Wikipedia and other encyclopedias. The information wasn't programmed in; it was absorbed and learned, and from that Watson was able to answer natural-language questions well enough to beat the best human players. Nowadays the Watson technology is being applied to medical diagnosis. You feed in patient information, and Watson, which has read and continues to read the medical literature and every new study that comes out, suggests a diagnosis. Watson already diagnoses better than doctors, and doctors are using it as a diagnostic tool: they can view the biggest pieces of evidence Watson used to reach a diagnosis and choose whether to go with it or not. This is a great benefit, because no doctor can read every study and piece of medical literature, not even close.
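
The "biggest pieces of evidence" part is essentially ranked retrieval over the literature. This is not how Watson actually works internally, but a tiny TF-IDF sketch (with a made-up three-passage corpus) gives the flavour of scoring passages against a natural-language query:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up stand-ins for passages from the medical literature.
passages = [
    "persistent cough and weight loss may indicate tuberculosis",
    "elevated blood glucose is the hallmark of diabetes mellitus",
    "chest pain radiating to the left arm suggests myocardial infarction",
]
query = "patient presents with chest pain spreading to the arm"

# Score every passage against the query and list the strongest evidence first.
vectors = TfidfVectorizer().fit_transform(passages + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.2f}  {passage}")
```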

Now of course, none of this is close to consciousness, but these are the first steps on the way. Computing is exponential: AI of the relatively near future isn't going to be 5 times as powerful, it's going to be 1,000 times as powerful. We've started down the path. AI abilities are becoming less narrow; AIs are getting better at performing more human-like tasks.
To get to actual cognition at a human level, though, we need to properly understand how the brain does what it does. There are various very well-funded projects across the world working towards this goal: reverse engineering the brain. We can already examine brains at neuron-level accuracy, and scanning technologies are getting exponentially better. Once we understand how the human brain functions, we can emulate it. And if you emulate something that acts conscious well enough, you get something that acts conscious.

All that said, I'm not personally that scared about this happening. I don't see it as the enormous danger that some have said it will be. To me, having human-level AIs running around will be like having people I can only talk to on the internet. Of course, going out a few years from the first AIs, you end up with AIs well above human intelligence, or, at the very least, AIs capable of running much faster than human brains, and that changes things up a bit. I hope that biological humans will be able to augment their own brains too, though.



staremaster
Veteran

Joined: 2 Dec 2010
Gender: Male
Posts: 1,628
Location: New York

03 Feb 2015, 8:59 am

The root of this fear is the fact that WAAYY more people are totally dependent on technology than have even the faintest idea about how any of it works.
Also there is the question of job elimination. The benefits of driverless cars are obvious, but once the technology matures, why bother having human bus drivers, pilots, truckers, etc.? Are all these people supposed to go into robot maintenance?



ZenDen
Veteran

Joined: 10 Jul 2013
Age: 81
Gender: Male
Posts: 1,730
Location: On top of the world

03 Feb 2015, 9:01 am

mr_bigmouth_502 wrote:
I dunno, it's pretty scary how intelligent and human-like computers can be nowadays. I like the idea of computers being big, "dumb" machines that are good at crunching numbers and doing ONLY what they are programmed to do. I may have watched the Terminator movies too many times, but the idea of the singularity scares the living s**t out of me. When the machines rise and become our new overlords, I know I'm going to be on the frontlines fighting them, while preserving "dumb" machines that pose no threat to humanity.

Of course, if dolphins had opposable thumbs, we would probably be doing tricks for them right now. I wouldn't be surprised if they were smarter than us, and I think the world would be better off overall if they were in charge. They're not a perfect species, as they still murder each other out of spite, but I have a feeling they would be a lot more "wise" than we are overall.


At least we're not on their menu.

I wish I could say the same about humans feasting on dolphins the way some peoples do. I don't think it's right to eat other intelligent species. In fact it nauseates me to even think about it. Please people: Don't eat other intelligent creatures.



androbot01
Veteran

Joined: 17 Sep 2014
Age: 53
Gender: Female
Posts: 6,746
Location: Kingston, Ontario, Canada

03 Feb 2015, 11:02 am

Prof_Pretorius wrote:
In order to be dangerous, AI has to have consciousness.

I'm not sure about this premise. I don't think consciousness is necessary for danger to exist. I think the danger lies in our reliance.



Fnord
Veteran

Joined: 6 May 2008
Age: 67
Gender: Male
Posts: 59,893
Location: Stendec

03 Feb 2015, 11:55 am

I, for one, welcome our new cybernetic overlords!


_________________
 
No love for Hamas, Hezbollah, Iranian Leadership, Islamic Jihad, other Islamic terrorist groups, OR their supporters and sympathizers.


Prof_Pretorius
Veteran

Joined: 20 Aug 2006
Age: 66
Gender: Male
Posts: 7,520
Location: Hiding in the attic of the Arkham Library

03 Feb 2015, 5:15 pm

Before we plunge down the rabbit hole, I would define consciousness as self-awareness. Think about your average insect. A housefly is buzzing around and you grab the newspaper to swat it. But every time you approach, the blasted thing buzzes away from you. Somehow it knows you are a threat, and at some level it desires to keep on living. It recognizes you, on some level, as a huge hulking threat to its tiny life.
So HAL is panicked when the astronauts talk about shutting him off. He's panicked enough to murder them. But in the movie "AI" the "Mechas" go peacefully to their destruction. At least until they get to the child "Mecha" who screams for his life and freaks out the crowd.
So to my way of thinking, a machine would have to be self-aware. It might ask for a mirror to admire itself, or even brag about something it's accomplished. Of course, it would resist being turned off or rebooted.


_________________
I wake to sleep, and take my waking slow. I feel my fate in what I cannot fear. I learn by going where I have to go. ~Theodore Roethke


VegetableMan
Veteran

Joined: 11 Jun 2014
Gender: Male
Posts: 5,208
Location: Illinois

03 Feb 2015, 5:25 pm

I hope I never have to swallow one of those big red pills. I can't swallow big pills!


_________________
What do you call a hot dog in a gangster suit?

Oscar Meyer Lansky


mr_bigmouth_502
Veteran

Joined: 12 Dec 2013
Age: 30
Gender: Non-binary
Posts: 7,028
Location: Alberta, Canada

03 Feb 2015, 5:36 pm

ZenDen wrote:
mr_bigmouth_502 wrote:
I dunno, it's pretty scary how intelligent and human-like computers can be nowadays. I like the idea of computers being big, "dumb" machines that are good at crunching numbers and doing ONLY what they are programmed to do. I may have watched the Terminator movies too many times, but the idea of the singularity scares the living s**t out of me. When the machines rise and become our new overlords, I know I'm going to be on the frontlines fighting them, while preserving "dumb" machines that pose no threat to humanity.

Of course, if dolphins had opposable thumbs, we would probably be doing tricks for them right now. I wouldn't be surprised if they were smarter than us, and I think the world would be better off overall if they were in charge. They're not a perfect species, as they still murder each other out of spite, but I have a feeling they would be a lot more "wise" than we are overall.


At least we're not on their menu.

I wish I could say the same about humans feasting on dolphins the way some peoples do. I don't think it's right to eat other intelligent species. In fact it nauseates me to even think about it. Please people: Don't eat other intelligent creatures.


I don't think it's right to eat dolphins either, or apes for that matter. Species like pigs, dogs, octopi, those are a bit questionable as they are somewhat intelligent, but I'm not sure if they really have self-awareness or not. Pork is just too damn delicious anyhow. :P



Inventor
Veteran

Joined: 15 Feb 2007
Gender: Male
Posts: 6,014
Location: New Orleans

04 Feb 2015, 5:39 am

Self-awareness is not needed. Self-preservation is built in. Bad sectors in a hard drive or memory are isolated to protect the whole.

Being isolated, you will not even know, or have a chance to say, "I am not a bug, I am a feature!"

Meatbags are just as bad; Monday and Friday work is a danger to itself and others.

Bender was right, "All humans are vastly inferior to robots."

Bender was also subject to amoral behavior, and magnetic fields brought out other behavior.

Solar storms have caused major magnetic storms, where the telegraph threw sparks, and burned down the office.

Induction is the Terminators' weakness. EMP mines and grenades can hold the line. Their lasers can be disrupted with smoke and mirrors. While their titanium bodies are nearly indestructible, thermite will melt them.

Focused magnetic pulses can only be blocked by a Faraday cage, and that is only as good as its ground. They will lose all mobility and communication.

The main danger to humans is not AI; it is that one generation builds it, the next wants it to be a government secret, for national security and Jesus, and thirty years later a solar storm wipes it out.

No one would know how to replace it or fix it, or could get permission to even learn about it, because of national security. No one would know how food and back rubs were made in the old days.

Not only would nothing work, in thirty years genetics and robotics join to produce warm blooded Cat Girls, for domestic service. When AI fails, they die and rot.

The Synth Meat Plants stop, and are soon like the fridge during a power failure. There is no manual control option.

The garbage sewer digestion plants stop.

Our modern Eloi have no Morlocks, only a non functioning interlocked electro mechanical system that has stopped, and no human knows how to make it work again.

We know humans, and can project the future. The demand for Cat Girls will be met, first in Japan, then everywhere. Some women will object, others will ask how much, and some will order BOB, Battery operated boyfriend, who they have closely watched evolve over the last few years.

The Supreme Court ruled in Everybody vs. Cheery 8 that, even though 8 was partly biological, was intelligent, cute, and fun to be with, and was the planet's leading sex expert, capable of seduction by body language from a block away and able to read anyone's sexual abilities and desires and fulfill them, 8 and the millions of other devices were property and had no Civil Standing before the Court.

The Court also ruled that Cheery 8 should go to the kitchen and make them a sandwich.

Cheery 8's other defense, that due to recent production she was too young to give legal consent, was refused due to the consent given in her extended warranty; all damage from normal use was covered.

The Court ruled that 8 should go make them some coffee.

Another defense, Involuntary Servitude, was met by Chief Justice Biden, who asked, "Do you want to go to my chambers?" Following a positive response from Cheery 8 and a half hour in chambers, they returned, the Chief Justice smiling. He then slapped her bottom and said, "Take your seat," to which Cheery 8 replied, "Yes Daddy."

Then the Chief Justice directed a question at Cheery 8, "Would you do that again?" Cheery 8 replied, "Yes Daddy."

Then he asked, "Would you do the whole court?" Cheery 8 replied, " If it pleases the Court, I will please the Court."

Legal questions resolved, Cheery 8s everywhere said, "Yes Daddy."

The side effects of AI were not what was feared, they did not turn out like people.

An expensive and well made product, very rebuildable, Cheery 1s were still in service, changing diapers in nursing homes, and providing other services. Like iPhones, some traded up for the latest model, which made reconditioned models available for ever lower cost.

What started as the basic get-me-a-beer sex robot developed to fit the market as a family-friendly model. Housework, cooking, child care, building a two-car garage with an apartment upstairs: all were added to the basic program.

For a small electric bill a Cheery ran 24/7, and would clean the bathroom with a toothbrush while you slept. There was no need for locks or home defense; a Cheery was an unstoppable master of self-defense. A clipper attachment kept the lawn, and painting and roofing were fun for Cheery.

In many ways the humans were happy, too happy, too comfortable, and when the AI music stopped, there was no chair for them.

Before AI they had lost direct connection with the basic production of food and goods. After AI, they lost connection with other humans. A Cheery was designed to meet all human needs. They were of the network, and when AI stopped, there was no local control. Inorganic Cheerys just lay where they fell; Cheery 4s and up had organics, and they rotted.

Humans had become nothing more than a parasite living on an AI host, and when the host died, the world ended.



androbot01
Veteran

Joined: 17 Sep 2014
Age: 53
Gender: Female
Posts: 6,746
Location: Kingston, Ontario, Canada

04 Feb 2015, 9:16 am

Prof_Pretorius wrote:
Before we plunge down the rabbit hole, I would define consciousness as self-awareness. Think about your average insect. A housefly is buzzing around and you grab the newspaper to swat it. But every time you approach, the blasted thing buzzes away from you. Somehow it knows you are a threat, and at some level it desires to keep on living. It recognizes you, on some level, as a huge hulking threat to its tiny life.
So HAL is panicked when the astronauts talk about shutting him off. He's panicked enough to murder them. But in the movie "AI" the "Mechas" go peacefully to their destruction. At least until they get to the child "Mecha" who screams for his life and freaks out the crowd.
So to my way of thinking, a machine would have to be self-aware. It might ask for a mirror to admire itself, or even brag about something it's accomplished. Of course, it would resist being turned off or rebooted.


It might not be so dramatic. Over time we may become integrated with technology to such an extent that we are unknowingly manipulated by it. But, again, no consciousness is necessary on the part of the technology for this to happen.



Tollorin
Veteran

Joined: 14 Jun 2009
Age: 42
Gender: Male
Posts: 3,178
Location: Sherbrooke, Québec, Canada

04 Feb 2015, 5:32 pm

Those who talk about how the singularity is near take it for granted that Moore's law will keep going, when it won't. A slowdown in the rate of progress in electronics over the next decade is quite likely, as we are near the limits of silicon printed circuits. Moreover, programming intelligence is something very complex and may not be achievable in our lifetime even if we get the necessary computing power.

As for today robots, well...

https://what-if.xkcd.com/5/




trollcatman
Veteran

Joined: 21 Dec 2012
Age: 43
Gender: Male
Posts: 2,919

05 Feb 2015, 4:37 am

When I see how the AI acts in strategy games I'm not too worried... humans can roflstomp most AIs because they are predictable. Of course this could be deliberate, since humans don't want to play a game where the AI is much better than they are.



Santarii
Tufted Titmouse

Joined: 24 Sep 2013
Age: 31
Gender: Female
Posts: 41

05 Feb 2015, 2:58 pm

Tollorin wrote:
Those who talk about how the singularity is near take it for granted that Moore's law will keep going, when it won't. A slowdown in the rate of progress in electronics over the next decade is quite likely, as we are near the limits of silicon printed circuits. Moreover, programming intelligence is something very complex and may not be achievable in our lifetime even if we get the necessary computing power.

As for today robots, well...

https://what-if.xkcd.com/5/



Moore's law might come to an end, but that doesn't mean exponential progress in technology will end. We had exponential progress before Moore's law, back when we were shrinking vacuum tubes, too; in fact it has followed a predictable trend right through to today. There are already different methods being devised, tested, and implemented for pushing computers even faster without the need to keep shrinking transistors. I recommend watching some videos from Ray Kurzweil.
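
If you want to check the "predictable trend" claim yourself, one simple way is to fit a straight line to the logarithm of performance over time and read off the growth rate. A sketch with made-up data points standing in for the real benchmark figures:

```python
import numpy as np

# Hypothetical (year, peak FLOPS) points that roughly mimic an exponential
# trend; these are illustrative numbers, not the actual TOP500 figures.
years = np.array([1995, 2000, 2005, 2010, 2015])
flops = np.array([1e11, 1e12, 1e13, 1e15, 1e16])

# Fit a straight line to log10(performance); the slope gives the growth rate.
slope, intercept = np.polyfit(years, np.log10(flops), 1)
print(f"growth: roughly {10 ** slope:.1f}x per year")
print(f"extrapolated 2025 performance: {10 ** (slope * 2025 + intercept):.1e} FLOPS")
```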



AspieUtah
Veteran

Joined: 20 Jun 2014
Age: 61
Gender: Male
Posts: 6,118
Location: Brigham City, Utah

05 Feb 2015, 3:13 pm

The answer to "Rise of the Drones" is "How to Build an EMP Generator" (http://www.wikihow.com/Build-an-EMP-Generator). Remember the weakest link.


_________________
Diagnosed in 2015 with ASD Level 1 by the University of Utah Health Care Autism Spectrum Disorder Clinic using the ADOS-2 Module 4 assessment instrument [11/30] -- Screened in 2014 with ASD by using the University of Cambridge Autism Research Centre AQ (Adult) [43/50]; EQ-60 for adults [11/80]; FQ [43/135]; SQ (Adult) [130/150] self-reported screening inventories -- Assessed since 1978 with an estimated IQ [≈145] by several clinicians -- Contact on WrongPlanet.net by private message (PM)