Materialists have a difficult time fitting human consciousness into their worldview, because consciousness is a phenomenon which defies material explanation.
How does it happen that mere matter can produce qualia (e.g. the sensation of red or the taste of sweet)? How does matter produce a belief, a value, a doubt, gratitude, regret, or disappointment? How does material substance produce forgiveness, resentment, or wishes, hopes, and desires? How does it appreciate beauty, music, or a book? How does it want, worry, have intentions, or understand something? How does matter come to be aware of itself and its surroundings? These are vexing questions for a materialist view of the world, yet some materialist philosophers remain unmoved by them.
Someday, they believe, computers will be able to do all that human minds can do, and then we'll have a concrete example of how matter can produce the phenomena of consciousness. Indeed, 57 years ago Alan Turing suggested a test for consciousness in a machine. In the Turing test, an investigator would interact with both a person and a machine, but would be screened from them, communicating only through typed messages, so that he did not know ahead of time which was which. If, after interacting with both of them, the investigator still couldn't tell which was the person and which was the machine, it would be reasonable to conclude that the machine, for all intents and purposes, was just as "conscious" as the person.
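The blind set-up can be sketched in a few lines of code. The two respondent functions below are placeholders of my own, not a real person or a real program; in an actual test a human and a candidate machine would stand behind the labels.

```python
import random

def human_respondent(question: str) -> str:
    # Placeholder standing in for a person typing replies.
    return "That question makes me a little wistful, to be honest."

def machine_respondent(question: str) -> str:
    # Placeholder standing in for a candidate program generating replies.
    return "That question makes me a little wistful, to be honest."

def run_blind_interrogation(questions):
    # The investigator sees only the labels "A" and "B"; which label hides
    # the person and which hides the machine is shuffled and never revealed.
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)
    labeled = dict(zip(["A", "B"], respondents))
    # The investigator judges from the transcript alone.
    return {label: [answer(q) for q in questions] for label, answer in labeled.items()}

if __name__ == "__main__":
    print(run_blind_interrogation(["What does regret feel like?"]))
```

If nothing in the transcript distinguishes A from B, the machine passes; whether passing shows anything about consciousness is exactly what is in dispute.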
Proponents of Artificial Intelligence (AI) are confident that the day when a machine will actually pass the Turing test is not far off, but many philosophers are skeptical. Just because a computer can give the same answers to various questions as a person would doesn't mean that the computer experiences what the person experiences.
Philosopher John Searle illustrates the problem that AI faces with a thought experiment he published in 1980 that he calls "the Chinese Room":
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the book the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
Searle goes on to say, "The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have."
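Searle's room can be caricatured in a few lines of code: a lookup table plays the part of the rule book, and a function applies it mechanically. The handful of phrase pairs below are merely illustrative stand-ins, but they make the point that producing the "right" output requires no grasp of what the symbols mean.

```python
# The "rule book": input symbol strings mapped to output symbol strings.
# The pairs below are illustrative stand-ins only.
RULE_BOOK = {
    "你好吗": "我很好",
    "今天天气如何": "天气很好",
}

def person_in_the_room(input_symbols: str) -> str:
    # The operator matches the incoming shapes against the book and copies
    # out whatever it prescribes; nothing here knows what either string means.
    return RULE_BOOK.get(input_symbols, "对不起")  # fallback symbols

if __name__ == "__main__":
    for question in ["你好吗", "今天天气如何"]:
        print(question, "->", person_in_the_room(question))
```

The room answers correctly, yet neither the table nor the operator understands Chinese, which is Searle's point about any purely formal symbol manipulator.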
In other words, human minds understand; they feel and experience; computers do not. Minds, whatever they are, are conscious. Machines are not.
So the big questions are: what exactly is consciousness, and where did it come from? The materialist answer is simply to deny that consciousness exists. That seems a little counter-intuitive.
RLC