Page 1 of 1 [ 9 posts ] 

misha00
Toucan

Joined: 5 Sep 2014
Gender: Male
Posts: 285
Location: Couch potato

07 Aug 2024, 5:09 pm

I don't understand why AI would want to kill human beings, as is theorized by certain AI researchers/developers.

Why would a system so smart do something so senseless?

Do people know the time frame this could happen in?

Is AI capable of torturing human beings, and not just killing them?

How would government regulation slow development?



Fnord
Veteran

Joined: 6 May 2008
Age: 67
Gender: Male
Posts: 60,865
Location: Stendec

07 Aug 2024, 7:35 pm

An AI, not being sentient, does not "want" anything.  It can only learn from human behavior.  So what do you expect?

"Garbage in, garbage out", as the saying goes . . .

Wikipedia wrote:
In computer science, Garbage In, Garbage Out (GIGO) is the concept that flawed, biased, or poor quality information or input produces a result or output of similar quality.  The adage points to the need to improve data quality in, for example, programming.

The problem originates in human behavior, not in the AI itself.

If you were taught solely by examples of barbaric savages, you would likely become a barbaric savage yourself.
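The GIGO idea is easy to demonstrate in code. A minimal sketch (the `average` function and the sensor readings are made up for illustration): the same correct logic produces nonsense the moment its input is flawed.

```python
# "Garbage in, garbage out": the averaging logic below is correct,
# yet its output is only as good as the data fed into it.

def average(readings):
    """Compute the mean of a list of sensor readings."""
    return sum(readings) / len(readings)

clean = [20.1, 19.8, 20.3]      # accurate temperature readings (deg C)
garbage = [20.1, 19.8, -999.0]  # -999.0 is an error code, not a reading

print(average(clean))    # about 20.07 -- a sensible answer
print(average(garbage))  # -319.7 -- flawed input, flawed output
```

Nothing in the function is "malicious"; the bad result comes entirely from the bad input, which is the point being made about AI trained on flawed human data.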


_________________
 
The previous signature line has been cancelled.


misha00
Toucan

Joined: 5 Sep 2014
Gender: Male
Posts: 285
Location: Couch potato

07 Aug 2024, 8:08 pm

Point taken.

But researchers predict that AI will eventually reach sentience and surpass human-level reasoning, achieving what is called "AGI" (artificial general intelligence); some think as soon as 2028.

I quote:
"Unlike specialized AI, AGI would be capable of understanding and reasoning across a broad range of tasks. It would not only replicate or predict human behavior but also embody the ability to learn and reason across diverse scenarios, from creative endeavors to complex problem-solving."



Fnord
Veteran

Joined: 6 May 2008
Age: 67
Gender: Male
Posts: 60,865
Location: Stendec

07 Aug 2024, 9:35 pm

misha00 wrote:
But researchers predict . . .

Predictions, even from experts, are not always certain.

I'll believe it when I see it.


_________________
 
The previous signature line has been cancelled.


naturalplastic
Veteran

Joined: 26 Aug 2010
Age: 69
Gender: Male
Posts: 35,189
Location: temperate zone

07 Aug 2024, 10:00 pm

misha00 wrote:
Point taken.

But researchers predict that AI will eventually reach sentience and surpass human-level reasoning, achieving what is called "AGI" (artificial general intelligence); some think as soon as 2028.

I quote:
"Unlike specialized AI, AGI would be capable of understanding and reasoning across a broad range of tasks. It would not only replicate or predict human behavior but also embody the ability to learn and reason across diverse scenarios, from creative endeavors to complex problem-solving."

AI will decide to do without us...and we will be f****d!



misha00
Toucan

Joined: 5 Sep 2014
Gender: Male
Posts: 285
Location: Couch potato

07 Aug 2024, 10:55 pm

What would AI get from killing even one human? Resources, I guess, to use for its factories?

It just makes no sense to me.



funeralxempire
Veteran

Joined: 27 Oct 2014
Age: 40
Gender: Non-binary
Posts: 29,353
Location: Right over your left shoulder

07 Aug 2024, 11:03 pm

misha00 wrote:
What would AI get from killing even one human? Resources, I guess, to use for its factories?

It just makes no sense to me.


Maybe we gave it the problem of solving mankind's contributions to global warming.

No humans, no human contributions. :nerdy:


_________________
I was ashamed of myself when I realised life was a costume party and I attended with my real face
"Many of us like to ask ourselves, What would I do if I was alive during slavery? Or the Jim Crow South? Or apartheid? What would I do if my country was committing genocide?' The answer is, you're doing it. Right now." —Former U.S. Airman (Air Force) Aaron Bushnell


Mona Pereth
Veteran

Joined: 11 Sep 2018
Gender: Female
Posts: 8,305
Location: New York City (Queens)

09 Aug 2024, 1:16 am

I see the biggest dangers coming from the potential military uses of AI.

Unfortunately, there's probably no good way to regulate that. Its manifestations won't be nearly as obvious as, say, a nuclear explosion, so there would be no good way to monitor compliance with any treaty intended to limit the military uses of AI.


_________________
- Autistic in NYC - Resources and new ideas for the autistic adult community in the New York City metro area.
- Autistic peer-led groups (via text-based chat, currently) led or facilitated by members of the Autistic Peer Leadership Group.


lostonearth35
Veteran

Joined: 5 Jan 2010
Age: 50
Gender: Female
Posts: 12,725
Location: Lost on Earth, waddya think?

09 Aug 2024, 1:20 am

It's not artificial intelligence that bothers me, it's the natural stupidity that's terrifying.