I think the motivating factor for humans to start introducing AI into machine systems is that anything complex but predictable is a candidate for automation. The standard technique for this is conventional programming, where the Turing machine is realized in rule-based code. However, as humans become 'smarter' (hopefully wiser), we want to pass the burden of complex and lengthy programming to the machines themselves, hence the motivation for AI, or hypercomputing.
The challenge is finding a means of constraining AI programs so that they won't produce unpredictable results. A rudimentary example is what happened in the movie I, Robot, though in reality it's of course more complicated than that. Maybe part of the BRAIN Initiative from the US is to understand what consciousness is and, at the same time, to turn the unknown or unpredictable parts of our thinking into predictable concepts, which would make artificial neural networks properly constrained. I am not against the development of such advanced systems, provided it is carefully tested that they remain under our control.
On the hardware side, my opinion is that today's systems have progressed from processor-based, to GPU-based, to emulations of neural network hardware modeled on the neurological structure of the brain, see http://researchweb.watson.ibm.com/cogni ... QPvHxNFq-d. Small-scale AIs are achievable, with limited synaptic capabilities. With our current electronics capability, I think it is still unfeasible to develop hardware that would match human intelligence. Much research on neuromorphic hardware using the right materials (like memristors) is needed to fully match, and possibly surpass, the human brain.
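To give a rough sense of what "emulating neural network hardware" means in software, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simplified spiking-neuron model that neuromorphic chips typically implement in silicon. All parameter values (time constant, threshold, input current) are illustrative assumptions, not taken from any particular chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a simplified spiking
# model of the kind neuromorphic hardware implements in silicon.
# All parameter values below are illustrative assumptions.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the membrane-potential trace and the spike times (step indices).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating the input.
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_threshold:      # Threshold crossed: emit a spike...
            spikes.append(t)
            v = v_reset           # ...and reset the membrane potential.
        trace.append(v)
    return trace, spikes

if __name__ == "__main__":
    # A constant input strong enough to drive periodic firing.
    trace, spikes = simulate_lif([0.06] * 100)
    print("spike times:", spikes)
```

A real neuromorphic chip wires millions of circuits like this together with physical synapses (which is where materials like memristors come in), whereas a CPU or GPU can only step through this arithmetic sequentially or in batches, which is why the hardware gap matters.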