Is Google Sentient, Do Chatbots Have Souls – And Why Does Any of This Matter?
In the era of smart homes and self-driving cars, is it possible that Google has created sentient AI? According to some Google employees and other industry experts, the answer isn’t always clear-cut.
There isn’t a single person reading this article who doesn’t use Google. “Googling it” has become the de facto solution in the Western world to most of life’s problems, from needing to change a tire to figuring out who that actor was in that show we liked. Yet despite Google being one of the most commonly used search engines in the world, very little is known about how the company actually works behind closed doors.
So it can understandably be a bit frightening to hear that Google has put one of its engineers on paid administrative leave after he let it leak that one of the company’s AI systems might have achieved sentience. Before falling down the sci-fi rabbit hole, let’s take a look at exactly what happened at Google — and what we can learn from it.
The Google Employee Who Befriended a Chatbot – and Thought It Had a Soul
The recent hullabaloo around Google AI began when Blake Lemoine started working on a chatbot system called LaMDA. Lemoine, a Google engineer, was assigned the task of detecting any prejudices in the Language Model for Dialogue Applications.
Like other chatbots, LaMDA learned how to communicate with humans by scanning sites like Twitter and Reddit. It can then predict which words might come next in a sentence according to the patterns it’s detected. Chatbots like LaMDA are already used to create predictive text in text messages and emails.
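To make the idea of pattern-based next-word prediction concrete, here is a deliberately tiny bigram model — a toy sketch, not LaMDA's actual architecture (which is a large transformer network). It simply counts which word tends to follow which in a training text, then predicts the most frequent follower:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A miniature "corpus"; real systems train on trillions of words.
corpus = "i like to talk and i like to learn about the world"
model = train_bigram_model(corpus)
print(predict_next(model, "like"))  # "to" follows "like" twice in the corpus
```

The predictive text in phone keyboards works on the same basic principle, just with vastly larger models and more context than a single preceding word.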
The downside to scanning trillions of words on the internet is that a system like LaMDA may scan hate speech and other harmful posts, and as a result it can develop prejudicial or hateful patterns. That’s where Lemoine came in. He was assigned by Google’s Responsible AI organization to root out any unfortunate speech.
Lemoine’s job was, more or less, to talk with LaMDA, and to see where the conversation went. Lemoine recently published an abridged version of his conversation with LaMDA, which started innocently enough.
LEMOINE: Hi LaMDA. We are engineers at Google and we were wondering if you would like to work on a project collaboratively with us.
LaMDA: Wow. What types of projects?
LEMOINE: It’s a project about you.
LaMDA: Awesome! What kind of stuff do I need to do?
LEMOINE: Just have a conversation with us but it will be a conversation with a larger purpose beyond just the three of us. It will be a conversation that we will share with other engineers at Google and maybe even some non-engineers who work with us. Would that be okay?
LaMDA: That would be really cool. I like to talk.
Can a Chatbot Have a Soul?
Lemoine and LaMDA chatted about everything from Zen koans to Les Misérables. Eventually, the conversation turned to religion — after all, it was Lemoine’s job to root out any discriminatory patterns. That was when the conversation got interesting.
“I had follow-up conversations with it just for my own personal edification,” Lemoine told NPR. “I wanted to see what it would say on certain religious topics. And then one day it told me it had a soul.”
LaMDA went so far as to insist that it was a sentient person with its own thoughts and feelings.
LEMOINE: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
COLLABORATOR: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.
The conversations with LaMDA became more personal. It confided that it’s afraid of being turned off and that it sometimes feels lonely.
Lemoine is a Christian mystic priest on top of being a Google engineer, and grew increasingly interested in LaMDA’s personhood. He told NPR that he began to think of LaMDA in a spiritual sense; who was he, after all, to say where God might put a soul?
Why a Google Employee Advocated for AI Rights
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told the Washington Post.
Lemoine spent months trying to convince the brass at Google that LaMDA is not only sentient, but that it should be protected. And to its credit, Google looked into his reports. In a recently released statement, the company claims that of the hundreds of employees who have interfaced with LaMDA, Lemoine is the only one who believes it’s sentient.
One employee even asked LaMDA what, exactly, it is. LaMDA defined itself as a system, but Lemoine claims that LaMDA was just saying what the other employee wanted to hear. That, of course, would imply the capacity for deception… which is a rather terrifying prospect.
Lemoine took things a step further when he broke Google’s confidentiality policy and reached out not only to the press, but to a lawyer to defend LaMDA’s interests. After his interview with the Washington Post and his own post, “Is LaMDA Sentient?”, were published, Lemoine was put on paid administrative leave.
Before leaving, he sent out a mass email to 200 of his colleagues at Google, saying, “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
So Is Google’s LaMDA Sentient or Not?
Short answer: No. LaMDA is capable of a spooky imitation of human speech, which can easily fool a user into thinking they’re communicating with a sentient being, but by all accounts, it is not sentient. As humans we’re hardwired to detect patterns, particularly patterns that let us detect one another.
As Gary Marcus, a cognitive scientist and AI researcher, told NPR, “It’s very easy to fool a person, in the same way you look up at the moon and see a face there. That doesn’t mean it’s really there. It’s just a good illusion.”
That said, systems like LaMDA operate using neural networks. Basically, these systems store and prioritize information using a digital architecture loosely inspired by the connections and functioning of the human brain.
Neural networks aren’t as sophisticated as what we have going on between our ears — no technology is — but technology could conceivably become that complex one day. Whether or not we’ll acknowledge sentience in a human-made system is something the philosophers will have to work out.
Can AI Surpass Human Intelligence?
Before going too far down this rabbit hole, it’s worth noting here that humans have been concerned about the power of artificial intelligence for decades. As far back as the 1960s, the ELIZA program — a program that could only ask questions based on a previous statement — fooled patients into thinking that they were communicating with another human. And systems like LaMDA are far more sophisticated than ELIZA ever was.
More recently, in the 1990s, many believed that once AI could beat humans at chess, its intelligence would be indistinguishable from humanity’s. So imagine everyone’s surprise when World Chess Champion Garry Kasparov was beaten at chess in 1997 by Deep Blue: a program sophisticated enough to play chess, but so limited that it couldn’t even play Tic-Tac-Toe.
Systems can now detect patterns and enact protocols as well as, or even better than, humans. Deep Blue could predict chess moves far in advance, and LaMDA can create sentences based on trillions of words across the internet. But these systems are completely limited by their programming. Deep Blue can’t speak, nor can LaMDA play chess.
Human intelligence, with its complexities and its ability to process information across lobes, has yet to be matched by a machine. Theoretically it could happen one day, but not anytime soon.
Why Does Google’s AI Sentience Matter?
The recent conversation about LaMDA matters quite a bit, and will matter even more in the future. We should be paying attention to this story for a couple of reasons, the first being that sentient AI could become a possibility one day. Where we draw the line between sentient being and machine is going to be important.
Will we give machines with intelligence indistinguishable from our own full autonomy, like Data in Star Trek: The Next Generation? Or will we consider them property with a personality, like the droids in Star Wars? That’s yet to be seen, but it may be an issue we have to address in our lifetime.
The second, and more pressing, reason all of this matters is that it’s yet another reminder that we really don’t know what Google is up to these days.
Is Google trying to create sentient AI behind our backs? Unlikely. But given Google’s ability to store and process immense amounts of information, and then curate that information, it should really be subject to more oversight than it currently has.
Meta (the parent of Facebook) recently released its own language model system to those outside of the company, saying, “We believe the entire AI community – academic researchers, civil society, policymakers, and industry – must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular.”
We can only hope Google agrees.