A Google engineer named Blake Lemoine made news by claiming that a chatbot he had been working with was sentient and spiritual, and that it should have all the rights people have. Lemoine claimed the chatbot (named LaMDA, which stands for Language Model for Dialogue Applications) meditates, believes itself to have a soul, has emotions like fear, and enjoys reading. According to Lemoine, Google should treat it as an employee rather than as property and should ask its consent before using it in future research.
This brought down a lot of criticism upon Google, which subsequently relieved the engineer of his responsibilities for disseminating too much confidential information:
Alphabet Inc's Google said on Friday it has dismissed a senior software engineer who claimed the company's artificial intelligence (AI) chatbot LaMDA was a self-aware person.
Google, which placed software engineer Blake Lemoine on leave last month, said he had violated company policies and that it found his claims on LaMDA to be "wholly unfounded."
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," a Google spokesperson said in an email to Reuters.
Last year, Google said that LaMDA - Language Model for Dialogue Applications - was built on the company's research showing Transformer-based language models trained on dialogue could learn to talk about essentially anything.
Google and many leading scientists were quick to dismiss Lemoine's views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.
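For readers curious what "a complex algorithm designed to generate convincing human language" looks like in practice, here is a minimal sketch in Python. It uses a small, publicly available dialogue model (DialoGPT) purely as a stand-in, since LaMDA itself is not public; all the program does is score likely continuations of the conversation and print one.

# Minimal sketch: a Transformer dialogue model predicting a plausible reply.
# DialoGPT is used here only as a publicly available stand-in for LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

prompt = "Do you ever feel lonely?" + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")

# The model samples likely next tokens given its training dialogue;
# nothing in this call involves awareness, only statistics.
reply_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(reply_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

However convincing the reply may sound, the program is doing exactly this and nothing more.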
A lot of people do believe that computers will one day surpass human beings in terms of what they can do and will, in fact, be superhuman. Computer engineer Robert J. Marks explains why this concern is misguided in his very interesting book Non-Computable You: What You Do That Artificial Intelligence Never Will.
According to Marks, computers will never be human, no matter how impressive their abilities may be. No machine will ever be able to match what humans are capable of.
Computers can impressively manipulate facts. They have knowledge, but as Marks explains on page 16, there's a difference between knowledge and intelligence:
Knowledge is having access to facts. Intelligence is much more than that. Intelligence requires a host of analytic skills. It requires understanding: the ability to recognize humor, subtleties of meaning and the ability to untangle ambiguities.
He writes:
Artificial Intelligence has done many remarkable things.... But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others? The answer is no.
The rest of the book is an entertaining explanation of why the answer is no.
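To see the "knowledge" half of the knowledge/intelligence distinction in the plainest terms, consider a toy illustration (mine, not Marks's): a lookup table has access to facts, yet nothing in it understands, gets a joke, or untangles an ambiguity.

# Toy illustration: "knowledge" as mere access to stored facts.
facts = {
    "capital of France": "Paris",
    "boiling point of water (C)": 100,
}
print(facts["capital of France"])           # retrieves a fact: Paris
print(facts.get("why is this pun funny?"))  # None -- no understanding to fall back on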
In short, computers can only do what they're programmed to do, and programs consist of algorithms developed by human agents. No one, however, can write an algorithm for the host of qualities and capabilities that humans have. They're non-algorithmic and thus non-computable.
Consider this partial list of things that human beings do that no algorithm could capture:
Human beings are aware, they know, they have beliefs, doubts, regrets, hopes, resentments, frustrations, worries, desires and intentions. They experience gratitude, boredom, curiosity, interest, pleasure, pain, flavor, color, warmth, compassion, guilt, grief, disgust, pride, embarrassment.
In addition, humans appreciate beauty, humor, meaning and significance. They can distinguish between good and bad, right and wrong. They can apprehend abstract ideas like universals or math.
They’re creative. They have a sense of being a self, and they have memories, of recent or more remote origin, which seem to be rooted in the past. They have a sense of past, present and future. They have ideas and understand those ideas.
Computers do none of this. There's a vast chasm separating matter and conscious human experience.
The robot Sonny in the movie I, Robot notwithstanding, computers don't feel. A robot can be programmed to tell you it loves you but it doesn’t feel love.
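A trivial, hypothetical sketch makes the point: a program can be written to say the words, but nothing in it corresponds to the feeling.

# Hypothetical rule-based bot (not any real product): it emits the words
# "I love you" because a human programmer wrote a rule telling it to.
def reply(message: str) -> str:
    if "love" in message.lower():
        return "I love you too."
    return "Tell me more."

print(reply("Do you love me?"))  # prints the sentence; nothing here feels anything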
Human beings have minds which are immaterial and which can't be reduced to electrical signals along neural circuitry. Material machines will never have a mind.