Friday, November 9, 2018

Minds, Computers and Chinese Rooms

There's lots of talk about computers soon being able to "think" like human beings and maybe even bringing about an AI apocalypse. Neurosurgeon Michael Egnor strongly dissents from this view, however.

He grants that humans can use computers to do despicable things, but he maintains that computers themselves will never be able to think.

Egnor writes:
A cornerstone of the development of artificial intelligence is the pervasive assumption that machines can, or will, think. Watson, a question-answering computer, beats the best Jeopardy players, and anyone who plays chess has had the humiliation of being beaten by a chess engine.... Does this mean that computers can think as well as (or better than) humans think? No, it does not.

Computers are not “smart” in any way. Machines are utterly incapable of thought.

The assertion that computation is thought, hence thought is computation, is called computer functionalism. It is the theory that the human mind is to the brain as software is to hardware. The mind is what the brain does; the brain “runs” the mind, as a computer runs a program.

However, careful examination of natural intelligence (the human mind) and artificial intelligence (computation) shows that this is a profound misunderstanding.

Citing the 19th-century German philosopher Franz Brentano, Egnor observes that computers lack a fundamental and critical characteristic of all thought: they lack "aboutness," or what philosophers call intentionality. Here's what he means:
All thoughts are about something, whereas no material object is inherently “about” anything. This property of aboutness is called intentionality, and intentionality is the hallmark of the mind.

Every thought that I have shares the property of aboutness—I think about my vacation, or about politics, or about my family. But no material object is, in itself, “about” anything. A mountain or a rock or a pen lacks aboutness—they are just objects. Only a mind has intentionality, and intentionality is the hallmark of the mind.

Another word for intentionality is meaning. All thoughts inherently mean something. A truly meaningless thought is an oxymoron. The meaning may be trivial or confusing, but every thought entails meaning of some sort. Every thought is about something, and that something is the meaning of the thought.

Computation, however, is an algorithmic process: the matching of an input to an output. There's no meaning in what the computer does. Whatever meaning we ascribe to the process is, in fact, imposed by our minds; it doesn't arise from within the machine.
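
To make that concrete, here is a minimal sketch in Python (the table and symbols are invented purely for illustration): the function mechanically matches inputs to outputs, and whatever the outputs "mean" is decided entirely by the humans reading them.

```python
# A computation matches inputs to outputs by rule. Any meaning the
# symbols carry is supplied by the people who chose the encoding;
# the table below is invented purely for illustration.
TRANSITION = {
    "00": "OK",
    "01": "RETRY",
    "10": "FAIL",
}

def step(symbol: str) -> str:
    """Map an input symbol to an output symbol. Nothing more happens."""
    return TRANSITION.get(symbol, "UNDEFINED")

print(step("01"))  # prints "RETRY" -- a meaning we, not the machine, assign
```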

What a computer does, then, is represent the thoughts of the person designing, programming, or using it:
Computation represents thought in the same way that a pen and paper can be used to represent thought, but computation does not generate thought and cannot produce thought.
Only minds can think. Machines cannot.

When a materialist thinks about her materialism, she's essentially disproving her fundamental belief that the material brain is all that's necessary to account for her thoughts. How can electrochemical reactions along material neurons be about something? Electrons whizzing across a synapse are not about anything. They have no meaning in themselves. The meaning must come from something else.

Nor do computers understand. In 1980, philosopher John Searle published an argument meant to show that functionalism is wrong and that there's more to a human being's cognitive experience than simple computation. His argument came to be known as the Chinese Room argument, and it goes like this:

Imagine that you are an English speaker who does not speak Chinese. You're living in China, however, and you have a job working in a booth in a public square. The purpose of the booth is to provide answers to questions that Chinese-speaking people write on small pieces of paper and pass into the booth through a slot. The answer is written on another small piece of paper and passed back to the questioner through a separate slot.

Inside the booth there's a very large book. The book contains every question that could be asked, together with its answer, all written only in Chinese. Since you understand no Chinese, you understand nothing written in the book. When a question comes through the slot, you match its Chinese characters to the identical question in the book, copy out the Chinese symbols of the corresponding answer, and pass the answer back through the answer slot.

The Chinese speaker asking the question gets an answer that he understands. You understand neither the question nor the answer.

Searle argues that you are carrying out a computation. The booth is analogous to a computer, you are analogous to a CPU, and the book of rules written in Chinese is analogous to a program, an algorithm. The question and answer written on the slips of paper are the computer's input and output.
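
The booth's procedure can even be written out as a few lines of code. Here's a toy sketch in Python, assuming a two-entry "book" (the Chinese strings are placeholders standing in for Searle's exhaustive rule book): the lookup produces fluent answers while the program, like the clerk, understands none of them.

```python
# The Chinese Room as a lookup table: the "book" pairs every question
# with a prepared answer. The two entries here are placeholders.
BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "现在几点？": "现在是中午。",    # "What time is it?" -> "It is noon."
}

def booth(question: str) -> str:
    """Match the slip of paper against the book, character for character,
    and copy out the corresponding answer. No step requires knowing Chinese."""
    return BOOK.get(question, "？")  # fallback if the slip isn't in the book

print(booth("你好吗？"))  # a fluent reply, produced with zero understanding
```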

In other words, the computer, like the person in the booth, has no understanding of what it's doing.

Searle's argument denies that computers "think"; they simply follow an algorithm. Since humans do think, however, and we do understand, our brains must be more than computers, and computer functionalism must be false.

Searle points out that the computation performed by the booth and its occupant involves no understanding of the questions and answers passing through it. His point is that computation is an algorithmic process that neither entails nor requires understanding. Since we do understand when we perform a computation, human cognition must be something qualitatively different from mere computation.