There's a refreshingly contrarian counterpoint at New Scientist to all the claims that machines will soon be as intelligent as humans and that humans will eventually be almost indistinguishable, intellectually, from machines.
Noel Sharkey is a professor of artificial intelligence and robotics at the University of Sheffield and he's very skeptical that we will ever succeed in creating machines that can think like human beings. Here's part of a recent interview he gave:
What do you mean when you talk about artificial intelligence?
I like AI pioneer Marvin Minsky's definition of AI as the science of making machines do things that would require intelligence if done by humans. However, some very smart human things can be done in dumb ways by machines. Humans have a very limited memory, and so for us, chess is a difficult pattern-recognition problem that requires intelligence. A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger. I would rework Minsky's definition as the science of making machines do things that lead us to believe they are intelligent.
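The brute-force search Sharkey alludes to can be made concrete with a toy sketch. The game below (take 1 or 2 stones from a pile; whoever takes the last stone wins) is my own illustrative assumption, not anything from Deep Blue, but the exhaustive minimax recursion is the same "dumb" technique in miniature: the machine mechanically evaluates every possible line of play rather than recognizing patterns the way a human does.

```python
def minimax(stones, maximizing):
    """Exhaustive game-tree search for a toy take-1-or-2 stones game.

    Returns +1 if the maximizing player can force a win from this
    position, -1 if the minimizing player can. No pattern recognition,
    no insight -- just mechanical enumeration of every move sequence.
    """
    if stones == 0:
        # The previous player took the last stone and won,
        # so the player now to move has lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2):
        if take <= stones:
            scores.append(minimax(stones - take, not maximizing))
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))   # → 1  (first player can force a win)
print(minimax(3, True))   # → -1 (first player loses with perfect play)
```

A human quickly spots the pattern (the player to move loses exactly when the pile is a multiple of 3); the program never notices it, yet plays perfectly anyway. That gap between searching and understanding is Sharkey's point.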
Are machines capable of intelligence?
If we are talking intelligence in the animal sense, from the developments to date, I would have to say no. For me AI is a field of outstanding engineering achievements that helps us to model living systems but not replace them. It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.
Are we close to building a machine that can meaningfully be described as sentient?
I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.
The rest of the interview is at the link.
The danger many people see in the idea of machines having innate intelligence is that it blurs the distinction between human and machine. If humans are not essentially different from machines, then humans become dehumanized. When humans lose their uniqueness and are considered to be "just animals" or "just machines," then there's very little reason not to manipulate and exploit them just as we would animals or computers. Dehumanization always leads to tyranny.
Humans have dignity and worth because they're uniquely created in the image of God. They have intelligence and reason because they were endowed with these gifts by God. To the extent that a machine could be made to think it is only because it was endowed with this ability by an intelligent programmer. As Sharkey says, "It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself."

RLC