Do we have the moral right to create artificial intelligence?


Shatbat
Veteran

Joined: 19 Feb 2012
Age: 31
Gender: Male
Posts: 5,791
Location: Where two great rivers meet

17 Nov 2012, 7:58 pm

Fnord wrote:
Misslizard wrote:
^^^^^You make it sound so romantic.

But wait! There's more...

Code:
$ python LoveHackIn.py
Interests = 3
t = "with"
y = "There are %s things I enjoy in this world." % Interests
activity1 = "talking"
activity2 = "making out"
activity3 = "being"
m = "They are %s %s %s, %s %s %s, and %s %s %s."
x = "my wife"

print(y)
print(m % (activity1, t, x, activity2, t, x, activity3, t, x))
$


You could have made that one more efficient by making a single string for t and x :P
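Something like this, say (a hypothetical rewrite, folding t and x into one reusable string):

```python
# Same joke, but "with my wife" is built once and reused.
interests = 3
suffix = "with my wife"  # t and x folded into a single string
activities = ["talking", "making out", "being"]

print("There are %s things I enjoy in this world." % interests)
print("They are %s, %s, and %s." % tuple("%s %s" % (a, suffix) for a in activities))
```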

(Loved it!)


_________________
To build may have to be the slow and laborious task of years. To destroy can be the thoughtless act of a single day. - Winston Churchill


AspieRogue

17 Nov 2012, 9:50 pm

pawelk1986 wrote:
I also had a small existential dilemma over which forum section to give this topic:

"Computers, Math, Science, and Technology" or "Politics, Philosophy, and Religion", that is the question :D

And on a completely serious note: I once watched an interesting program on the Discovery Channel about artificial intelligence. True artificial intelligence may not be built in my lifetime, even though I'm only 26 years old, but it makes me wonder whether we have the moral right to play God.

I would like to know your opinions on this topic.



Not only do we have the moral "right", but I believe we have the moral duty to create AI that is more intelligent than any human being and beyond human control for the explicit purpose of subjugating (and ultimately destroying) humanity. You could say that it is playing God, but I believe that doing such a thing would make for a better world.



Awesomelyglorious
Veteran

Joined: 17 Dec 2005
Gender: Male
Posts: 13,157
Location: Omnipresent

17 Nov 2012, 11:04 pm

Well, I can't imagine we'd have LESS right than we do to make natural intelligence. So, if you know about the birds and the bees, well, that's how we make natural intelligence, and we've done it for a VERY LONG TIME. So... what's the moral difference between bringing a natural intelligence to life and an artificial one? I'm looking for something fundamental.



MarketAndChurch
Veteran

Joined: 3 Apr 2011
Age: 37
Gender: Male
Posts: 2,022
Location: The Peoples Republic Of Portland

18 Nov 2012, 4:17 am

I don't see its positives outweighing the negatives. When we extract intelligence from nature, it is at the mercy of our ethics and our ambitions. But AI seems to be a departure from even that, in the sense that at some point it will no longer be the handmaiden of the human being, in whatever direction we point or program it. It'll be in charge of itself, and make its own decisions according to what it deems fitting to its ego and moral to its conscience.


_________________
It is not up to you to finish the task, nor are you free to desist from trying.


pawelk1986
Veteran

Joined: 2 Apr 2010
Age: 38
Gender: Male
Posts: 2,899
Location: Wroclaw, Poland

18 Nov 2012, 4:27 am

AspieRogue wrote:
pawelk1986 wrote:
I also had a small existential dilemma over which forum section to give this topic:

"Computers, Math, Science, and Technology" or "Politics, Philosophy, and Religion", that is the question :D

And on a completely serious note: I once watched an interesting program on the Discovery Channel about artificial intelligence. True artificial intelligence may not be built in my lifetime, even though I'm only 26 years old, but it makes me wonder whether we have the moral right to play God.

I would like to know your opinions on this topic.



Not only do we have the moral "right", but I believe we have the moral duty to create AI that is more intelligent than any human being and beyond human control for the explicit purpose of subjugating (and ultimately destroying) humanity. You could say that it is playing God, but I believe that doing such a thing would make for a better world.


I see that someone here is a terrible misanthrope :-)

I hate to upset you, but our species has a deep-seated survival instinct, so if a super artificial intelligence ever tried to subjugate us, the eventual war would have a devastating effect on Earth. Far from creating a better world, it would leave an uninhabitable wasteland.



b9
Veteran

Joined: 14 Aug 2008
Age: 52
Gender: Male
Posts: 12,003
Location: australia

18 Nov 2012, 7:35 am

it is not possible to create artificial intelligence in its entirety.
artificial intelligence creates itself from seeded iterations that are programmed by humans.

artificial intelligence compiles its own executable code that it determines is the most optimal to run for analysis of, and response to, a new situation, and saves it as a referenceable executable routine for use when faced with future considerations of a similar nature (determined by self-compiled situational attribute weightings, etc.).

one can only plant the seed for artificial intelligence to grow from. it would take a majestically gifted thinker to code the seed that could give rise to universal intelligence (if the computational resources were available), but there is one crucial element missing from the scaffolding of the formation of artificial intelligence, and that is "desire".

with no desire, nothing will ever be achieved, because without intervention from a human who possesses desire, the foundation program will stagnate after the completion of its list of assigned tasks.

it cannot possess "curiosity" in a true sense, unless that were specifically programmed into the process by a person (eg: "look for things that you do not understand, and do not stop modifying your inspection programs until one yields an answer that satisfies a checksum derived from a preset table of general criteria.").

but the things that the computer may "look at", and spend much power assembling into a translation of the reality behind them, may be completely unimportant aspects of reality in the mind of any human.

something that has no "desire" also has no "aspiration", and "no aspiration" is equal to "no intention", and "no intention" is equal to "idle".
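as a toy sketch (purely hypothetical, nothing like a real AI), a program with only an assigned task list goes idle the moment the list is empty:

```python
# toy sketch: an "agent" with assigned tasks but no desire of its own.
from collections import deque

def run(assigned_tasks):
    """work through the human-assigned list, then stagnate."""
    queue = deque(assigned_tasks)
    log = []
    while queue:              # nothing ever refills the queue out of "desire"
        log.append("done: " + queue.popleft())
    log.append("idle")        # no aspiration -> no intention -> idle
    return log

print(run(["analyse new situation", "save routine"]))
```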

humans have always crafted better and better tools to reduce their physical and time-consuming effort, and it stands to reason that they will create tools that reduce their burden of the necessity of intelligent thinking (problem solving specifically) by consigning it to unattended electronic consideration. i wonder whether humans may start to become more creative when the burden of intrinsic logical consideration of their views about mundane aspects of life is reduced.



Last edited by b9 on 18 Nov 2012, 7:44 am, edited 1 time in total.

Kurgan
Veteran

Joined: 6 Apr 2012
Age: 35
Gender: Male
Posts: 4,132
Location: Scandinavia

18 Nov 2012, 7:41 am

We have nothing to lose by creating artificial intelligence, but everything to gain. A computer will never be sentient, so we should have no fear that machines will rule the world.



puddingmouse
Veteran

Joined: 24 Apr 2010
Age: 37
Gender: Female
Posts: 8,777
Location: Cottonopolis

18 Nov 2012, 8:17 am

Kurgan wrote:
We have nothing to lose by creating artificial intelligence, but everything to gain. A computer will never be sentient, so we should have no fear that machines will rule the world.


Even if they did, they couldn't do a worse job than ourselves.



b9
Veteran

Joined: 14 Aug 2008
Age: 52
Gender: Male
Posts: 12,003
Location: australia

18 Nov 2012, 8:33 am

puddingmouse wrote:
Kurgan wrote:
We have nothing to lose by creating artificial intelligence, but everything to gain. A computer will never be sentient, so we should have no fear that machines will rule the world.


Even if they did, they couldn't do a worse job than ourselves.


that is an ill-considered statement, i think. it means very little, and it is just conversational filler as far as i can determine.



puddingmouse
Veteran

Joined: 24 Apr 2010
Age: 37
Gender: Female
Posts: 8,777
Location: Cottonopolis

18 Nov 2012, 8:39 am

b9 wrote:
puddingmouse wrote:
Kurgan wrote:
We have nothing to lose by creating artificial intelligence, but everything to gain. A computer will never be sentient, so we should have no fear that machines will rule the world.


Even if they did, they couldn't do a worse job than ourselves.


that is an ill-considered statement, i think. it means very little, and it is just conversational filler as far as i can determine.


No, I think perhaps that machines would allocate resources more effectively - and manage the economy better.

I know that machines won't really be running anything without human input, but if we made a conscious decision to do what the program tells us to do, rather than follow our instincts, we would get better results.



Oodain
Veteran

Joined: 30 Jan 2011
Age: 34
Gender: Male
Posts: 5,022
Location: in my own little tamarillo jungle,

18 Nov 2012, 8:59 am

most fortune 500 companies already do just that:

they use economic expert systems (many of them classed as narrow AI) that can predict fluctuations in the stock market with a far higher success rate than humans.


_________________
//through chaos comes complexity//

the scent of the tamarillo is pungent and powerfull,
woe be to the nose who nears it.


puddingmouse
Veteran

Joined: 24 Apr 2010
Age: 37
Gender: Female
Posts: 8,777
Location: Cottonopolis

18 Nov 2012, 9:04 am

Oodain wrote:
most fortune 500 companies already do just that:

they use economic expert systems (many of them classed as narrow AI) that can predict fluctuations in the stock market with a far higher success rate than humans.


Well, I think this should be extended to government.



b9
Veteran

Joined: 14 Aug 2008
Age: 52
Gender: Male
Posts: 12,003
Location: australia

18 Nov 2012, 9:09 am

puddingmouse wrote:
No, I think perhaps that machines would allocate resources more effectively - and manage the economy better.

machines will assist in the accomplishment of the wishes of conscious minds, but they can never be conscious and their behavior is inevitable and rigidly unvarying.


puddingmouse wrote:
I know that machines won't really be running anything without human input, but if we made a conscious decision to do what the program tells us to do, rather than follow our instincts, we would get better results.


that is assuming that the artificial intelligence has mapped utopia and the directions from here to there.

artificial intelligence does not have the capacity to define utopia and never will, and artificial intelligence will never prescribe an alternative doctrine that any living mind will see as essential.



Last edited by b9 on 18 Nov 2012, 9:15 am, edited 1 time in total.

Oodain
Veteran

Joined: 30 Jan 2011
Age: 34
Gender: Male
Posts: 5,022
Location: in my own little tamarillo jungle,

18 Nov 2012, 9:15 am

never is a strong word when we barely know whether humans function in an indeterministic way.

it might very well be possible to create true strong AI, and that implies some variation of emotion or want as well.

there are already machines today that consider so many variables that not even the programmers can predict what happens or how they will function in any real setting.
this to a large extent is also why many AI research projects take a huge amount of time.

there was a project using neural nets that could identify 85% of objects while driving, all learnt, not programmed (the general structure is programmed; the actual recognition was not); in comparison, humans with a similar time to react and recognize only managed some 70%.

visual recognition is an extremely difficult task for computers, yet by using a different approach not reliant on strict instruction, amazing results can happen virtually overnight.


_________________
//through chaos comes complexity//

the scent of the tamarillo is pungent and powerfull,
woe be to the nose who nears it.


puddingmouse
Veteran

Joined: 24 Apr 2010
Age: 37
Gender: Female
Posts: 8,777
Location: Cottonopolis

18 Nov 2012, 9:20 am

@b9

Of course machines can't 'rule', for the reasons you have defined (if I understand them): they have no intention to do so.

I merely think that humans should use machines more in decision-making, in areas where they haven't yet been considered. You couldn't map Utopia, but you could program a machine with a set of principles, such as utilitarianism, and see what it comes up with. Every government could have its own policy machine. Of course, those machines would be diabolical to whoever supports an opposing ideology, but I think there would be fewer token policies that are ineffective by design.
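As a toy illustration (all names and numbers made up), a 'policy machine' could simply score candidate policies against a utilitarian objective and pick the winner:

```python
# toy "policy machine": score candidate policies with a crude
# utilitarian objective (total welfare across groups) and pick the best.
policies = {
    "tax cut":       {"rich": 9, "poor": 2},
    "healthcare":    {"rich": 4, "poor": 8},
    "token gesture": {"rich": 1, "poor": 1},
}

def utilitarian_score(effects):
    # utilitarianism, crudely: sum welfare over everyone affected
    return sum(effects.values())

best = max(policies, key=lambda name: utilitarian_score(policies[name]))
print(best)
```

Swap in a different scoring function and the same machine serves a different ideology, which is rather the point.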

This would be interesting in foreign policy, because then it would resemble watching two computers play chess with each other. Also, some countries would program in goals like 'eliminate Israel' and 'squash Tibet', while others would program in 'promote peace', etc.



b9
Veteran

Joined: 14 Aug 2008
Age: 52
Gender: Male
Posts: 12,003
Location: australia

18 Nov 2012, 9:27 am

"desire" can not be programmed in any way.
"lust"
"aspiration"
"ideation"
"unrequested exploration"

means nothing to the consciously dead "genius" who is the self building AI module,

i suspect your attention span can not extend to the length of some of my observations, but that is good because it minimizes the possibility of getting gnarled up in a communicative net with you.

well that is how i see it anyway.