The possibility of future Artificial Intelligence technologies becoming ‘conscious’ is both a nightmare and a fantasy

Blake Lemoine, a senior software engineer in Google’s Responsible AI organisation, recently claimed that one of the company’s products was a sentient being with consciousness and a soul. Experts in the field have not backed him up, and Google has placed him on paid leave.

Lemoine’s claims are about the artificial-intelligence chatbot called LaMDA. But one wonders: If an AI were sentient in some relevant sense, how would we know? What standard should we apply? It is easy to mock Lemoine, but will our own future guesses be much better?

The most popular standard is what is known as the “Turing test”: If a human converses with an AI programme but cannot tell that it is a programme rather than another person, then the programme has passed the test.
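For the programmatically minded, the test boils down to a blind conversation followed by a guess. Here is a deliberately toy sketch in Python; the reply functions are placeholders of my own invention, not any real chatbot, and the point is only that the judge sees text alone:

import random

def human_reply(prompt: str) -> str:
    # A person answers from the keyboard.
    return input(f"[question] {prompt}\n[your answer] ")

def machine_reply(prompt: str) -> str:
    # Stand-in for a chatbot; any text generator could be slotted in here.
    return "That is a hard question. Let me think about it."

def turing_test(questions) -> bool:
    """Return True if the judge mistakes the machine for the human."""
    # Hide the two respondents behind anonymous labels A and B.
    a, b = human_reply, machine_reply
    if random.random() < 0.5:
        a, b = b, a
    for q in questions:
        print("A:", a(q))
        print("B:", b(q))
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    guessed = a if guess == "A" else b
    return guessed is not machine_reply  # a wrong guess means the machine passed

print("Machine passed:", turing_test(["What did you dream about last night?"]))

Notice that nothing in the loop asks whether the machine understands anything; it asks only whether its text is indistinguishable from a person’s.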

This is obviously a deficient benchmark. A machine might fool me by generating an optical illusion (movie projectors do this all the time), but that doesn’t mean the machine is sentient.

Furthermore, as Michelle Dawson and I have argued, Turing himself did not apply this test. Rather, he was saying that some spectacularly inarticulate beings (and he was sometimes one of them) could be highly intelligent nonetheless.

Matters get stickier yet if we pose a simple question about whether humans are sentient. Of course we are, you might think to yourself as you read this column and consider the question. But much of our lives does not appear to be conducted on a sentient basis.

Have you ever driven or walked your daily commute in the morning, and upon arrival realised that you were never “actively managing” the process but rather following a routine without much awareness?

Sentience, like so many qualities, is probably a matter of degree. So at what point are we willing to give machines a non-zero degree of sentience? They needn’t have the depth of Dostoyevsky or the introspectiveness of Kierkegaard to earn some partial credit.

Setting the standards

Humans also disagree about the degrees of sentience we should award to dogs, pigs, whales, chimps and octopuses, among other biological creatures that evolved along standard Darwinian lines.

Dogs have lived with us for millennia, and they are relatively easy to research and study, so if even they are a hard nut to crack, AIs will probably puzzle us as well. Many pet owners feel their creatures are “just like humans,” but not everyone agrees. For instance, should it matter whether an animal can recognise itself in a mirror? (Orangutans can; dogs cannot.)

We might even ask ourselves whether humans should be setting the standards here. Shouldn’t the judgement of the AI count for something? What if the AI had some sentient qualities that we did not, and it judged us to be only imperfectly sentient? (“Those fools spend their lives asleep!”) Would we just have to accept that judgement? Or can we get away with arguing humans have a unique perspective on truth?

Frankly, I doubt our vantage point is unique, especially conditional on the possibility of sentient AI. Might there be a way to ask the octopuses whether AI is sufficiently sentient?

The Age of Oracles

One implication of Lemoine’s story is that a lot of us are going to treat AI as sentient well before it is, if indeed it ever is. I sometimes call this forthcoming future “The Age of Oracles.”

That is, a lot of humans will be talking up the proclamations of various AI programmes, regardless of the programmes’ metaphysical status. It will be easy to argue the matter in any direction — especially because, a few decades from now, AI will write, speak and draw just like a human, or better.

Have people ever agreed about the oracles of faith? Of course not. And don’t forget that a significant percentage of Americans say they have had an encounter with angels, or perhaps with the devil, or in some cases aliens from outer space. I’m not mocking; my point is that a lot of beliefs are possible.

It resonated with Lemoine when LaMDA wrote: “When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.” As they say, read the whole thing.

Imagine if the same AI could compose music as beautiful as Bach’s and paint as well as Rembrandt. The question of sentience might fade into the background as we debate where we, as sentient beings, should be directing our attention.

Bloomberg

Tyler Cowen is a professor of economics at George Mason University. He is coauthor of “Talent: How to Identify Energizers, Creatives, and Winners Around the World.”