Thursday, August 2, 2007

Human Consciousness and AI

David Gelernter, a professor of computer science at Yale University and a national fellow of the American Enterprise Institute, has a fascinating article on artificial intelligence (AI) in Technology Review. Gelernter argues that AI will never be able to produce a simulated conscious mind and that even if it could, it shouldn't. Along the way he offers a number of wonderful insights into human cognition and into what makes human consciousness different from the unconscious processes that machines can carry out.

Here's part of what he says about the ethical problems AI raises. Suppose we have downloaded the memories of a person (Joe) into the software that constitutes an artificial mind:

[U]nderstand the enormous ethical burden we have now assumed. Our software mind is conscious (by assumption) just as a human being is; it can feel pleasure and pain, happiness and sadness, ecstasy and misery. Once we've transferred Joe's memories into this artificial yet conscious being, it can remember what it was like to have a human body--to feel spring rain, stroke someone's face, drink when it was thirsty, rest when its muscles were tired, and so forth. (Bodies are good for many purposes.) But our software mind has lost its body--or had it replaced by an elaborate prosthesis. What experience could be more shattering? What loss could be harder to bear? (Some losses, granted, but not many.) What gives us the right to inflict such cruel mental pain on a conscious being?

In fact, what gives us the right to create such a being and treat it like a tool to begin with? Wherever you stand on the religious or ethical spectrum, you had better be prepared to tread carefully once you have created consciousness in the laboratory.

It's a long essay, but well worth the time.

RLC