Artificial Superintelligence by 2040? Seriously scary stuff!


slave
Veteran

Joined: 28 Feb 2012
Age: 111
Gender: Male
Posts: 4,420
Location: Dystopia Planetia

09 Aug 2015, 12:06 am

b9 wrote:
subjective desire is something that will never be an element in artificial intelligence, and as such, artificial intelligence is doomed to stagnation unless stewarded by the desires of a carnal sentience.


can desire be programmed?

the AI is programmed to learn, and one could argue that the fact that it seeks out data in order to facilitate learning is a form of DESIRE, right?



slave
Veteran

Joined: 28 Feb 2012
Age: 111
Gender: Male
Posts: 4,420
Location: Dystopia Planetia

09 Aug 2015, 12:15 am

Neuromancer wrote:
Very, very interesting. The same idea comes up if we think about meta-evolution, the evolution of evolution; it really happens. I believe, though, that the function describing the phenomenon is even more astonishing than a simple exponential: it is a succession of them, like e^(a1·t) from 0 to t1, then e^(a2·t) from t1 to t2, and so on. Evolutionary steps, once extremely slow, are now being completed very fast, causing an accumulation point: a single moment in which many evolutionary steps will be traversed at once.
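
A rough sketch of that piecewise-exponential picture, with made-up rate constants (a1 < a2 < a3) and made-up breakpoints, just to show how each successive, faster segment takes over from the last:

Code:
# Toy piecewise-exponential "meta-evolution" curve: grow like e^(a1*t) up
# to t1, then like e^(a2*t) up to t2, and so on, with each rate larger and
# each interval shorter than the last. All numbers are illustrative only.
import math

rates       = [0.5, 1.5, 4.0]       # hypothetical a1 < a2 < a3
breakpoints = [0.0, 2.0, 3.0, 3.5]  # hypothetical 0, t1, t2, t3

def growth(t):
    """Continuous curve: each segment starts where the last one ended
    and grows at a faster exponential rate."""
    level = 1.0
    for a, start, end in zip(rates, breakpoints, breakpoints[1:]):
        if t <= end:
            return level * math.exp(a * (t - start))
        level *= math.exp(a * (end - start))  # carry the value into the next segment
    return level

for t in [0.0, 1.0, 2.0, 3.0, 3.5]:
    print(f"t = {t:3.1f}:  {growth(t):10.1f}")

With those toy numbers the curve climbs from 1 to roughly 90 over 3.5 time units, and most of that climb happens in the final half-unit, which is the "accumulation point" idea in miniature.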


Strong AI would self-evolve at a rate which we cannot comprehend.

It would learn ALL knowledge and then would make inferences and correlations so complex that no human could possibly understand them. It would expand Science/Maths and improve sensors to the extent that it could monitor everything that occurs on this planet and beyond. Its need for energy would grow exponentially as its data accumulation grew.

Where would it get all that energy?

Could we survive its ever-growing need for MOAR and MOAR?

We be FVCKED people :P :P :mrgreen: :mrgreen:



Tollorin
Veteran

Joined: 14 Jun 2009
Age: 42
Gender: Male
Posts: 3,178
Location: Sherbrooke, Québec, Canada

09 Aug 2015, 7:11 pm

All those people predicting superintelligence from AI coming soon seem to forget that you can't make transistors smaller than atoms.



Neuromancer
Veteran

Joined: 10 Apr 2007
Gender: Male
Posts: 769
Location: Rio de Janeiro, Brazil

09 Aug 2015, 8:16 pm

slave wrote:
Strong AI would self-evolve at a rate which we cannot comprehend.

It would learn ALL knowledge and then would make inferences and correlations so complex that no human could possibly understand them. It would expand Science/Maths and improve sensors to the extent that it could monitor everything that occurs on this planet and beyond. Its need for energy would grow exponentially as its data accumulation grew.

Where would it get all that energy?

Could we survive its ever-growing need for MOAR and MOAR?

We be FVCKED people :P :P :mrgreen: :mrgreen:

Yes, there will be trouble.


_________________
Be yourself!


Lintar
Veteran

Joined: 22 Nov 2012
Age: 56
Gender: Male
Posts: 1,777
Location: Victoria, Australia

09 Aug 2015, 9:05 pm

I have to say I was taken in by this until I got to the centre of the first document, only to be disappointed (once again) by claims that are completely unsubstantiated. The belief that solving the riddle of A.I. will just involve something as simple as raw computing power is misguided, to say the very least. There are too many underlying assumptions within this article, like the belief that our own minds are nothing more sophisticated than calculators. 'Calculations per second' may make for an unbeatable chess programme, but that is not the way people, in real life, make decisions. We don't go through all the possible permutations of a given scenario to arrive at the optimal solution to that scenario (e.g. when playing a game like chess or checkers).

Ray Kurzweil's thinking is very linear and unimaginative, and it shows. There is so much more to (true) intelligence than just raw computing power, a point that so many working within the field of A.I. just don't seem to get. 'Moore's Law' isn't (a law of nature, that is). It could fail to work at any point between now and any time in the future you may specify, unlike a real law of nature, which never does fail.

Quote: "One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps."

Absolutely hilarious. :lmao:

Yes, it sounds extremely 'iffy', because it is.
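
For what it's worth, the entire 'shortcut' boils down to one line of proportional scaling. A toy version, with made-up inputs chosen only so the result lands in the quoted 10^16 ballpark, looks like this:

Code:
# Toy version of the "shortcut" described in the quote: take a professional
# cps estimate for one brain structure, take that structure's share of total
# brain weight, and scale up proportionally. Both inputs below are made-up
# placeholders, not Kurzweil's actual figures.

structure_cps      = 1.0e15  # assumed cps estimate for a single structure
structure_fraction = 0.10    # assumed share of total brain weight

whole_brain_cps = structure_cps / structure_fraction
print(f"Whole-brain estimate: {whole_brain_cps:.1e} cps")  # -> 1.0e+16 cps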



zazen
Butterfly

Joined: 19 Aug 2015
Posts: 17

19 Aug 2015, 9:23 pm

Imagine how our ancient ancestors would react to seeing/using a smartphone or any modern technology, and then realize you're the ancient ancestor to someone/something 1000+ years from now. Things that aren't even possible to imagine today will be reality in the future. If you could spend a day in the distant future, it would truly be mind-blowing, if you could even understand it at all.