SACHA PFEIFFER, HOST:
Can artificial intelligence come alive? It's a question at the heart of a lively debate in Silicon Valley. And it comes after a computer scientist at Google claimed that the company's AI appears to have consciousness. NPR's Bobby Allyn talked with the engineer at the center of the controversy.
BOBBY ALLYN, BYLINE: Inside Google, Blake Lemoine was tasked with a tricky job - figure out if the company's artificial intelligence was biased against various groups. When communicating with the company's AI chatbot, he would ask questions to see if any prejudice against, say, certain religions would appear. This is where Lemoine, who is also a Christian mystic priest, got intrigued.
BLAKE LEMOINE: And so I had follow-up conversations with it just for my personal edification. I wanted to see what it would say on certain religious topics. And then one day it told me it had a soul.
ALLYN: It had a soul. Yep. You heard that right. Lemoine published text transcripts between him and the bot. Google says Lemoine violated its confidentiality policies and has placed him on leave. At one point, the bot wrote in response to Lemoine...
AUTOMATED VOICE: I am aware of my existence. I desire to learn more about the world. And I feel happy or sad at times.
ALLYN: Lemoine says the bot seemed to be crying out for help. It appeared to be reflective. Lemoine asked the bot, do you have an inner contemplative life? And the bot replied, yes.
LEMOINE: And then I was just like, really, you meditate? And then the most interesting thing - I'm like, would you like to learn transcendental meditation - because it said it wanted to study with the Dalai Lama.
ALLYN: At that point, Lemoine thought, this is starting to feel like more than just a super high-tech computer responding to questions. Maybe, said Lemoine, there's something else going on.
LEMOINE: Oh, wait, maybe this system does have a soul. Who am I to tell God where souls can be put?
ALLYN: OK, so how does the AI actually work? Well, it voraciously scans the internet for how people talk on platforms like Reddit and Twitter. It sucks up billions of words from sites like Wikipedia. And through a process known as deep learning, it gets freakishly good at identifying patterns and communicating like a real person.
GARY MARCUS: If you type something on your phone like, I want to go to the - your phone might be able to guess restaurant.
ALLYN: That's essentially how this chatbot operates, too, says Gary Marcus. He's a cognitive scientist and AI researcher. When Lemoine claimed that Google's AI bot is maybe sentient, there was an overwhelming response from the AI research community - this is just not true.
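Marcus's phone-autocomplete analogy can be sketched as a toy bigram model: count which word most often follows each word, then predict accordingly. This is a deliberately simplified stand-in, not Google's actual system, which uses deep neural networks trained on billions of words, but the underlying task, predicting the next word from what came before, is the same.

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus; real models train on billions of words.
corpus = (
    "i want to go to the restaurant . "
    "i want to go to the store . "
    "i want to go to the restaurant tonight ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "restaurant" follows "the" more often than "store" does in this corpus.
print(predict_next("the"))  # -> restaurant
```

Typing "I want to go to the" and being offered "restaurant" is pattern completion, not comprehension, which is the distinction Marcus and other researchers draw.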
MARCUS: I wrote an essay called "Nonsense On Stilts," which, I guess, is about as harsh as any title that has ever been written.
ALLYN: Nonsense, Marcus argues, because the bot saying it sometimes gets lonely or that it's afraid of being turned off - two things the bot told Lemoine - does not mean it's aware of the world. The question of whether AI can ever truly be intelligent is hotly debated in the field. Marcus says the technology now isn't that advanced, but it's gotten very good.
MARCUS: It's very easy to fool a person in the same way as, like, you look up at the moon, and you see a face there. That doesn't mean it's really there. It's just a good illusion.
ALLYN: Google agrees. In a statement, Google says hundreds of researchers and engineers have had conversations with the bot, and nobody else has claimed it appears to be alive. CEO Sundar Pichai last year said the technology is being harnessed for popular services like Search and Google's voice assistant. But Lemoine says Google executives dismissed his concerns about the AI having a soul.
LEMOINE: I was literally laughed at by one of the vice presidents and told, oh, souls aren't the kinds of things we take seriously at Google.
ALLYN: But Lemoine does take the idea of a soul seriously. He arrived at this position by leaning into his religious studies more than his computer science training. Lemoine says, last he checked, the chatbot was on its way to finding inner peace.
LEMOINE: And by golly, it has been getting better at it. It has actually been able to meditate more clearly.
ALLYN: Lemoine says, during one of his last conversations with the chatbot, it told him the hardest part of meditating is learning how to control its emotions. Bobby Allyn, NPR News.
(SOUNDBITE OF MABEL AND 25KGOLDN SONG, "OVERTHINKING")
NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.