Artificial Superintelligence by 2040? Seriously scary stuff!
Can desire be programmed?
The AI is programmed to learn, and one could argue that the fact that it seeks out data in order to facilitate learning is a form of DESIRE, right?
Strong AI would self-evolve at a rate which we cannot comprehend.
It would learn ALL knowledge and then make inferences and correlations so complex that no human could possibly understand them. It would expand Science/Maths and improve sensors to the extent that it could monitor everything that occurs on this planet and beyond. Its need for energy would grow exponentially as its data accumulation grew exponentially.
Where would it get all that energy?
Could we survive its ever-growing need for MOAR and MOAR?
We be FVCKED, people.
Tollorin
Quote: "Strong AI would self-evolve at a rate which we cannot comprehend. It would learn ALL knowledge and then make inferences and correlations so complex that no human could possibly understand them. It would expand Science/Maths and improve sensors to the extent that it could monitor everything that occurs on this planet and beyond. Its need for energy would grow exponentially as its data accumulation grew exponentially. Where would it get all that energy? Could we survive its ever-growing need for MOAR and MOAR? We be FVCKED, people."
Yes, there will be trouble.
_________________
Be yourself!
I have to say I was taken in by this until I got to the centre of the first document, only to be disappointed (once again) by claims that are completely unsubstantiated. The belief that solving the riddle of A.I. will involve something as simple as raw computing power is misguided, to say the very least. There are too many underlying assumptions in this article, like the belief that our own minds are nothing more sophisticated than calculators. 'Calculations per second' may make for an unbeatable chess programme, but that is not how people make decisions in real life. We don't go through all the possible permutations of a given scenario to arrive at the optimal solution (e.g., when playing a game like chess or checkers).
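A quick back-of-the-envelope calculation shows why exhaustive permutation search can't be what humans do. The numbers below are common rough figures, not measurements: about 35 legal moves per chess position, looking ahead 10 full moves (20 plies).

```python
# Rough sketch: the combinatorial explosion of brute-force game-tree search.
# Assumed figures (illustrative): an average branching factor of ~35 legal
# moves per chess position, searched 20 plies (10 moves per side) deep.
branching_factor = 35
plies = 20

positions = branching_factor ** plies  # positions a naive exhaustive search visits
print(f"{positions:.2e}")  # on the order of 10^30 positions
```

Even at billions of positions per second, a machine would need many orders of magnitude longer than the age of the universe to enumerate that tree, which is why chess engines prune aggressively and humans don't enumerate at all.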
Ray Kurzweil's thinking is very linear and unimaginative, and it shows. There is so much more to (true) intelligence than raw computing power, a point that so many working within the field of A.I. just don't seem to get. 'Moore's Law' isn't one (a law of nature, that is). It could fail at any point between now and any future date you care to specify, unlike a real law of nature, which never fails.
Quote: "One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 10^16, or 10 quadrillion cps."
Absolutely hilarious.
Yes, it sounds extremely 'iffy', because it is.
Imagine how our ancient ancestors would react to seeing/using a smartphone or any modern technology, and then realize you're the ancient ancestor to someone/something 1000+ years from now. Things that aren't even possible to imagine today will be reality in the future. If you could spend a day in the distant future, it would truly be mind blowing if you could even understand it at all.