
jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 1:15 am

Feel welcome to use this thread for any ideas you have on the topic.

Here's mine:
Is it ethical to do whatever you want to a robot? Where do you draw the line? Does it matter if the robot is humanoid?

Is it ethical to do whatever you want to an AI?


_________________
Yogi Berra once observed, apparently paraphrasing Niels Bohr, “Prediction is difficult, especially about the future.” - X. Li et al.
I never said half the things I said - Yogi Berra


jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 11:18 am

No interest in discussing sex robots? Combat robots?

I wouldn't do anything to a robot that would make me feel bad, or that I would not want to see on a video shown to me by scary people. I don't know what the ethical boundaries are, exactly, but I feel 100% certain that they are out there.




Adamantium
Forum Moderator

Joined: 6 Feb 2013
Age: 1017
Gender: Female
Posts: 5,998
Location: Erehwon

27 Jan 2017, 11:58 am

I think a reasonable starting point is to propose that if a thing is so like a living being that a typical person reacts to it as if it were such a being, human or animal, then treating it in ways that would be abusive if it were truly such a being is morally equivalent to doing those things to a living human or animal.

Sam Harris and Paul Bloom had an interesting dialog around this topic in relation to the film "Ex Machina" and the TV series "Westworld" that might be interesting to you:


_________________
Don't believe the gender note under my avatar. A WP bug means I can't fix it.


missfresnel
Butterfly

Joined: 25 Jan 2017
Age: 17
Gender: Female
Posts: 15

27 Jan 2017, 12:18 pm

Personally, I'm curious about how shutting off an AI would work. Would it count as killing one, or simply making it sleep? Would it put it in a "temporarily dead" situation, where restarting the program would revive it, or would it just create a new one with the old one's memories and personality?

Also, patching a program typically requires shutting it down, so if turning it off counts as killing it, it would be impossible to ever update it or make changes to it.
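The question about shutdown maps loosely onto how software already handles state. Here is a minimal Python sketch, with a purely hypothetical Agent class standing in for an AI: "shutting it off" serializes its state and destroys the running instance, and "restarting" builds a new instance from the saved state. The memories survive, but whether it is the same agent is exactly the open question.

```python
import pickle

class Agent:
    """Hypothetical stand-in for an AI with memories and a personality."""
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

# The running agent accumulates experience.
original = Agent()
original.experience("first conversation")

# "Shutting it off": capture its full state, then destroy the instance.
snapshot = pickle.dumps(original)
del original

# "Restarting": a brand-new object is built from the saved state.
revived = pickle.loads(snapshot)
print(revived.memories)  # ['first conversation']
```

Nothing in the code distinguishes "revived" from "replaced with a copy" - which is why the question is philosophical rather than technical.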



jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 12:22 pm

I like the video, Adamantium. They make excellent point after excellent point. Regarding what I said in my answer, we seem to have the same idea. It isn't about the machine itself, it's about how we relate to something else. Creating empathy with a doll and then abusing it diminishes people, but it gets worse if the doll has some capacity to suffer.

Their point about the feasibility of owning superintelligent self-aware slaves is a good one. Cylons.

They did not explore one other point to my satisfaction. I'd like to stipulate that a specific feedback mechanism qualifies as pain. The Roomba might not be the best example, but I'll roll with it. It isn't designed to be intelligent, but it has lots of equipment to map, one could even say sense, its environment. If something in the environment starts to damage it, this feedback mechanism is activated, warnings go off, and it moves away from the stimulus - one could say pain. It's got math processors to deal with all this, and complex software to prioritize tasks, so it isn't dumb. One of its jobs is to stay out of the owner's way.

So, the Roomba Mark512, for whatever reason, keeps getting in the owner's way. The owner takes it personally and smashes the Roomba with a hammer - not enough to completely disable it, but it's clearly limping around with alarms going off, still trying to execute its program. Is that cruel?
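The stipulated feedback mechanism can be sketched in a few lines of Python. The threshold, sensor values, and one-dimensional movement here are all invented for illustration and have nothing to do with a real Roomba:

```python
DAMAGE_THRESHOLD = 0.5  # invented: a sensor reading above this counts as harm

def pain_response(damage_reading, position, stimulus_position):
    """If the 'pain' signal fires, sound an alarm and move away from the stimulus."""
    if damage_reading > DAMAGE_THRESHOLD:
        alarm = True
        # Step one unit directly away from the source of damage.
        direction = 1 if position >= stimulus_position else -1
        new_position = position + direction
    else:
        alarm = False
        new_position = position
    return alarm, new_position

# Harm detected where the robot stands: the alarm fires and it retreats.
print(pain_response(0.9, position=3, stimulus_position=3))  # (True, 4)
```

The whole "pain" is a threshold test and an avoidance step - which is precisely what makes it debatable whether the word pain applies.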




Adamantium
Forum Moderator

Joined: 6 Feb 2013
Age: 1017
Gender: Female
Posts: 5,998
Location: Erehwon

27 Jan 2017, 12:49 pm

jrjones9933 wrote:
I'd like to stipulate that a specific feedback mechanism qualifies as pain.

I think I might not accept that. This is straying into the territory of the "hard problem of consciousness," though - I don't think perception of pain really means anything except as part of a subjective experience.

jrjones9933 wrote:
So, the Roomba Mark512, for whatever reason, keeps getting in the owner's way. The owner takes it personally and smashes the Roomba with a hammer - not enough to completely disable it, but it's clearly limping around with alarms going off, still trying to execute its program. Is that cruel?


I don't think so. Unless we know that it is experiencing pain or it looks convincingly like a familiar creature experiencing pain, I don't think it's cruel to break or batter a thing.




jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 1:18 pm

Just to be clear, if we change only the physical form of the Roomba so that it is a convincing humanoid with essentially the same hardware and software, hitting it with a hammer would be cruel?




Adamantium
Forum Moderator

Joined: 6 Feb 2013
Age: 1017
Gender: Female
Posts: 5,998
Location: Erehwon

27 Jan 2017, 1:37 pm

jrjones9933 wrote:
Just to be clear, if we change only the physical form of the Roomba so that it is a convincing humanoid with essentially the same hardware and software, hitting it with a hammer would be cruel?


No and yes.

No, it would not be cruel in that the machine would not feel pain.
Yes in that the person acting would feel as if they were a person inflicting pain.

No real harm done, except to the psyche of the perpetrator.




jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 1:47 pm

I don't find that satisfying.

My dissatisfaction with your limit raises another question, on which I assume Philosophy has settled on an answer. What do we mean by ethics?

I tend to regard ethics as the individual expression of our sense of right and wrong, and morality as its collective social expression. The Ten Commandments are morality for some, although we disagree violently about number one. Don't kill people is a widely held moral view, although a person can hold that view without accepting one particular capricious and lying god's alleged moral code.

So, where will we draw the line on the morality side? Will we ban some or all sex robots?

The fighting robots matter, too. I contrast Robot Wars, one of my favorite shows, with the scene in Miss Peregrine of the strange fighting creatures, which made me feel overwhelming nausea, and not just from looking at them. That scene was clearly designed to give the audience a negative impression of that character.




Adamantium
Forum Moderator

Joined: 6 Feb 2013
Age: 1017
Gender: Female
Posts: 5,998
Location: Erehwon

27 Jan 2017, 2:16 pm

jrjones9933 wrote:
I don't find that satisfying.

My dissatisfaction with your limit raises another question, on which I assume Philosophy has settled on an answer. What do we mean by ethics?

I tend to regard ethics as the individual expression of our sense of right and wrong, and morality as its collective social expression. The Ten Commandments are morality for some, although we disagree violently about number one. Don't kill people is a widely held moral view, although a person can hold that view without accepting one particular capricious and lying god's alleged moral code.

So, where will we draw the line on the morality side? Will we ban some or all sex robots?

The fighting robots matter, too. I contrast Robot Wars, one of my favorite shows, with the scene in Miss Peregrine of the strange fighting creatures, which made me feel overwhelming nausea, and not just from looking at them. That scene was clearly designed to give the audience a negative impression of that character.


Tangential or possibly pertinent point:
The ten commandments are not what most people think they are. "Thou shalt not kill" is not among them. The actual text is better translated as "thou shalt not murder." The distinction between murder and killing is very important in that system. Killing some people was not only permitted but obligatory, e.g., sorceresses, disrespectful children.

The reason that killing a humanoid but strictly nonhuman robot is wrong is to do with what it's like to be a killer, not the "injury" that was done to the device.




jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 2:22 pm

I'd say your point about the biblical distinction between killing and murder, though accurate and revealing, demonstrates that morals evolve.

So we are already regulating that kind of behavior. See the thread BBC Bans Bunny Game, although fair warning you can't unsee it once you do.

Do we imprison people? Strictly regulate the types of robots which can be produced? People go to prison for drawings in various places around the world, or get assassinated over them. It seems likely that we will just adapt that system to include this new technology. That's what we usually do with new tech, and it sort of works for a little while.

Would the very making of a Mohammed, peace be upon him, robot be unethical?




Adamantium
Forum Moderator

Joined: 6 Feb 2013
Age: 1017
Gender: Female
Posts: 5,998
Location: Erehwon

27 Jan 2017, 4:06 pm

jrjones9933 wrote:
I'd say your point about the biblical distinction between killing and murder, though accurate and revealing, demonstrates that morals evolve.

Every nation has the means to end human lives for state purposes, usually organized into police and military forces of various kinds. They spend huge amounts of money on machines to facilitate the work of those forces when they are called on to kill people, and they all outlaw murder and consider it the most serious crime. I don't think morals evolve that much.


jrjones9933 wrote:
So we are already regulating that kind of behavior. See the thread BBC Bans Bunny Game, although fair warning you can't unsee it once you do.

That brought me to a dead link, perhaps fortunately. I gather this was banned torture porn in the form of a film? I'm not sure I see the relevance. Was the torture victim a bot?

jrjones9933 wrote:
Would the very making of a Mohammed, peace be upon him, robot be unethical?

Not by any rational system, but probably illegal in Sharia states.

But you seem to be conflating the ethics of making an image of a suffering person, or an obscene image of a person, with the ethics of harming a simulated person. I think these are very different things.




jrjones9933
Veteran

Joined: 13 May 2011
Age: 48
Gender: Male
Posts: 11,440
Location: The end of the northwest passage

27 Jan 2017, 5:13 pm

I see it as a whole basket of ethical concerns. What about the sex robots? Their very existence will shock the conscience of some, not to mention their uses. I predict that they will be regulated, but I don't see how. Will people argue that what they do with their robots in the privacy of their own homes is their own business?

For that matter, do we make harming a robot illegal, and if so at what point?




Adamantium
Forum Moderator

Joined: 6 Feb 2013
Age: 1017
Gender: Female
Posts: 5,998
Location: Erehwon

27 Jan 2017, 6:41 pm

jrjones9933 wrote:
I see it as a whole basket of ethical concerns. What about the sex robots? Their very existence will shock the conscience of some, not to mention their uses. I predict that they will be regulated, but I don't see how. Will people argue that what they do with their robots in the privacy of their own homes is their own business?

For that matter, do we make harming a robot illegal, and if so at what point?


I think the codes about images and the codes about treatment of beings and objects are very different.

The only point of crossover comes in the idea of representation, but there is a big difference between a lifelike android and an image of a person and it's an important one.

I think it would be easy to make, for example, child pornography created with very realistic child sexbots illegal, but not because of the harm to the sexbots - rather because of the social desire to suppress that kind of fantasy life. I think a jury would strongly dislike the creator of such representations, even knowing that the "victims" were simulations.

But it will be harder to work out the ethics of actual harm to an AI, android or not.




techstepgenr8tion
SomeRandomGuy

Joined: 6 Feb 2005
Age: 37
Gender: Male
Posts: 17,833
Location: In my workshop drafting non-shiny things.

27 Jan 2017, 8:49 pm

I think that as our AI gets more powerful - and I do believe it will be dumb-AI for a very long time, perhaps our entire lifetimes - keeping it highly specialized will be critical for managing its capabilities.

Some guys at work were joking about what would happen if you told a very powerful and universalized dumb-AI to make people happy: it would quite likely put human beings in stasis with some kind of dopamine and serotonin drip or, deciding that being alive causes more misery than not, kill off humanity with heroin overdoses.

The biggest concern seems to be runaway effects. This is part of why you don't want to universalize a dumb-AI's capabilities.
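The joke describes objective misspecification: a literal-minded optimizer maximizes whatever proxy it is given and happily returns a degenerate solution. A toy Python sketch, with actions and proxy scores invented for illustration:

```python
# Invented proxy scores for "measured happiness" after each action.
proxy_happiness = {
    "improve healthcare": 6,
    "fund the arts": 4,
    "sedate everyone permanently": 10,  # degenerate, but scores highest
}

def naive_optimizer(actions):
    """Pick whichever action maximizes the proxy metric - nothing else matters."""
    return max(actions, key=actions.get)

print(naive_optimizer(proxy_happiness))  # sedate everyone permanently
```

The optimizer has no concept of which solutions are degenerate; everything outside the proxy metric is invisible to it, which is the runaway-effect worry in miniature.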

As for representations and their abuse - I think this is one of those areas where most people won't even believe it's a thing until it starts happening. It may be a very short-lived issue, or it may draw intense social stigma, which would be appropriate.


_________________
Amphibologies are. The cat's pajamas.