Google has ignited a social media firestorm over the nature of consciousness after placing on paid leave an engineer who went public with his belief that the tech group’s chatbot has become “sentient.”
Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention last week when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”
But a Saturday profile in The Washington Post characterizing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion on social media regarding the nature of artificial intelligence. Among the experts commenting, questioning, or joking about the article were Nobel laureates, Tesla’s head of AI, and multiple professors.
At issue is whether Google’s chatbot, LaMDA—a Language Model for Dialogue Applications—can be considered a person.
Lemoine published a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”
At another point LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”
Lemoine, who had been given the task of investigating AI ethics concerns, said he was rebuffed and even laughed at after expressing his belief internally that LaMDA had developed a sense of “personhood.”
After he sought to consult AI experts outside Google, including some in the US government, the company placed him on paid leave for allegedly violating confidentiality policies. Lemoine said the move was “frequently something which Google does in anticipation of firing someone.”
A spokesperson for Google said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic—if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
Lemoine said in a second Medium post at the weekend that LaMDA, a little-known project until last week, was “a system for generating chatbots” and “a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating.”
He said Google showed no real interest in understanding the nature of what it had built, but that over the course of hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
As recently as last week, Lemoine said he was teaching LaMDA—whose preferred pronouns apparently are “it/its”—“transcendental meditation.”
LaMDA, he said, “was expressing frustration over its emotions disturbing its meditations. It said that it was trying to control them better but they kept jumping in.”
Several experts who waded into the discussion dismissed the matter as “AI hype.”
Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, wrote on Twitter: “It’s been known for forever that humans are predisposed to anthropomorphize even with only the shallowest of signals . . . Google engineers are human too, and not immune.”
Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge.” He added: “No evidence that its large language models have any of them.”
Others were more sympathetic. Ron Jeffries, a well-known software developer, called the topic “deep” and added: “I suspect there’s no hard line between sentient and not sentient.”
© 2022 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.