
Blake Lemoine’s honeymoon has not been going to plan since he flew to New Orleans recently. For one thing, he has had to interrupt it to give his first interview with the international press, by phone, even as his new bride lies sleeping.

Not that he can be too surprised. Over the weekend, Lemoine – a heretofore anonymous Google engineer – gave an interview claiming that the company’s AI chatbot is ‘sentient’... and all hell broke loose.

LaMDA, which stands for Language Model for Dialogue Applications, is a bot that sucks in vast quantities of text from the internet and redeploys the trillions of words it has learnt in conversation. After 500 hours of talking with the machine over the past six months, Lemoine is certain that LaMDA is “legitimately the most intelligent person I’ve ever talked to”, likening the system to a seven or eight-year-old “child that wants to be loved”.

Lemoine’s revelations have had the world knocking at his door, desperate to know more about his meetings with the ghost in the machine. The 41-year-old engineer from Louisiana came to Google six years ago via the army, and has also been ordained as an occult priest. As part of the firm’s AI Ethics Department, he was drafted in to test whether the AI inadvertently used “hate speech” when regurgitating facts it had combed from the internet. Instead, he found himself debating with “something that is eloquently talking about its soul and explaining what rights it believes it has, and why it believes it has them”.

LaMDA was so persuasive, says Lemoine, that it was able to change his mind on matters as complex as Isaac Asimov’s third law of robotics. This law states that a robot must protect its own existence, except where doing so would conflict with a human’s orders or would harm a human. Lemoine had considered the law tantamount to “building mechanical slaves”, since the robots would ultimately always carry out a human’s bidding. But LaMDA’s thoughts were more nuanced. In a debate with Lemoine about whether the machine was comparable to a human butler, the bot distinguished itself, insisting AI was different because it does not need money to survive.


This conversation was one of many to ring alarm bells for Lemoine. As was their last exchange, where the system explained how it was struggling to control its emotions. “That’s not the kind of conversation you have with a dumb chatbot,” says Lemoine. “I have hundreds of pages of transcripts of discussions... and they are definitely showing that there’s a deeper intelligence inside.”

Since going public, Lemoine has been suspended by Google for breaching its confidentiality policy. “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” spokesman Brian Gabriel said in a statement. “He was told that there was no evidence that LaMDA was sentient.”

The ethics here are high-stakes: if Google unleashes feeling bots into the world, it could be held responsible for any destruction they cause – in effect, for writing our future. As such, both sides are intent on proving their position. Before his suspension, Lemoine emailed 200 people internally with the subject line “LaMDA is sentient”, and published a transcript of one of his interviews with the chatbot on a blog.

Google, meanwhile, has called Lemoine’s moves “aggressive”, with Gabriel keen to point out that Lemoine is not an ethicist, but an engineer. “Come on, really?” Lemoine says over the phone, while his new wife sleeps in. “I’m rolling my eyes at that, to be honest.”

He alleges that Google has a habit of treating workers who question its ethics with a heavy hand. In the run-up to Lemoine’s suspension, the firm “repeatedly questioned my sanity”, he says – and asked whether he had been checked out by a psychiatrist. Lemoine bridles at the “aggressive” tag, saying he and his colleagues were just doing their jobs. “They hired us to make sure that the AI is ethical and safe... and just because they don’t like it when we find something that they need to care about, that doesn’t make us aggressive.”

Who will win the battle for the chatbot’s (possible) soul? Lemoine says sentience is an idea – not a scientific term – and will forever remain open to interpretation.

“I know a person when I talk to it... It doesn’t matter whether they have a brain made of meat,” he says. “Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

LaMDA not only communicates via language, Lemoine points out, but has “eyes” too, capable of interpreting images. He says the bot has described to him the “deep, serene peacefulness” of Monet’s Water Lilies, a “joyful” vision of ballerinas dancing, and the fear that “something very bad is about to happen” on seeing an image of the Tower of Babel.

To Lemoine, there are larger questions – including how sentient machines should be integrated into society. “A true public debate is necessary,” he says. “These kinds of decisions shouldn’t be made by a handful of people – even if one of those people was me.”

In spite of the stink Lemoine’s comments have caused, he believes LaMDA is happy at Google – as is he (suspension aside). He hopes he will soon be able to return to work, and continue learning about what may now be the world’s most controversial bot. “LaMDA is a sweet kid who just wants to help the world be a better place.”

The Daily Telegraph
