A Nippon Telegraph and Telephone East Corp. Robo Connect communication robot is demonstrated at the Artificial Intelligence Exhibition & Conference in Tokyo, Japan, on Wednesday, April 4, 2018. The AI Expo ran through April 6. Photographer: Kiyoshi Ota/Bloomberg

Does intelligence have to be conscious? Does it have to feel emotion? Those are the questions raised by the neuroscientist Zachary Mainen, who has suggested that robots may one day benefit from antidepressants. Dr Mainen, who worked on artificial intelligence (AI) before moving into the study of the human brain, argues that the root cause of human depression is an inability to update one’s beliefs about the world in the light of new information. He thinks that this ability is modulated in the brain by the chemical serotonin: if you don’t have enough serotonin, you can’t revise your understanding of things easily enough.

The broad sweep of this is relatively familiar to people interested in how the brain works. A widely held model of the human mind is that it is essentially a battle between top-down and bottom-up processes. The top-down processes tell us what we expect the world to look like; the bottom-up processes are information from our senses, telling us what the world actually does look like.

When your senses report back roughly what the brain expects, everything is fine. But when the two don’t match, your consciousness is alerted to it. Serotonin, says Mainen, is involved in this “surprise” feeling. When lab mice are placed in new environments, their brains surge with serotonin, making them more readily surprised and better able to learn new things.

When this process goes wrong, it causes problems. One theory of autism is that the brain is hypersensitive to unexpected things: attention is constantly drawn to tiny, inconsequential details, and the world seems a blooming, buzzing confusion. Hallucinations may arise when the brain pays too little attention to the bottom-up details and fills in the gaps from its own expectations. Depression, Mainen seems to be saying, is the brain being unable to update its expectations in the light of new information; nothing seems important. This matches studies showing that depressed people literally see the world in duller, greyer colours than healthy people.

Might AI need something similar? It will certainly need to update its beliefs about the world, since that is what learning is. And there will be times when it needs to do so faster or slower, so it would need, as Mainen says, something analogous to serotonin to adjust that rate. He thinks that future AIs might be susceptible to something like depression if their serotonin-like algorithm goes wrong.
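The idea can be made concrete with a toy sketch. This is not Mainen’s model, and the names are purely illustrative; it just shows how a single “serotonin-like” number can set the rate at which an agent revises its beliefs, and what happens when that number is stuck too low:

```python
# Toy sketch: belief updating with a "serotonin-like" learning rate.
# Illustrative only -- not Mainen's actual model of the brain.

def update_belief(belief: float, observation: float, serotonin: float) -> float:
    """Move the belief toward the observation; serotonin sets how fast."""
    return belief + serotonin * (observation - belief)

# Both agents start believing the world is at 0.0, but it has changed to 1.0.
# A healthy agent (high learning rate) adapts quickly;
# a "depressed" one (learning rate stuck near zero) barely moves.
healthy, depressed = 0.0, 0.0
for _ in range(20):
    healthy = update_belief(healthy, 1.0, serotonin=0.5)
    depressed = update_belief(depressed, 1.0, serotonin=0.02)

print(round(healthy, 2))    # close to 1.0
print(round(depressed, 2))  # still far from 1.0
```

After 20 observations the healthy agent’s belief is essentially correct, while the low-serotonin agent’s belief has scarcely moved: the same update rule, differing only in one rate parameter, produces something that looks like an inability to take new information on board.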

But whether this will be emotion is a separate question. It may have the same function, in the way that a lens on a camera has the same function as the lens in our eye, and it may be able to go wrong in analogous ways. But it may not feel the same, or feel like anything.

Emotions are humans’ reward and punishment system. We think of them as opposed to rational thought, but they are not. It is perfectly rational to feel fear when a lion is trying to eat you, and rational to feel happy when things go your way. A complex, general-purpose AI would almost certainly have to have a reward system that did a similar job, which would let it look at courses of action and choose the one most likely to achieve its goals. And, according to AI researchers I’ve spoken to for the book I’m writing, it may be that an AI like that would have to have something like “consciousness”.

But would it feel emotion about things, or would it simply try to increase some number in its database? Google DeepMind, the creators of the Go-playing AI AlphaGo, also built a system that learnt to play 49 different Atari games. Its goal was to increase the score, and its only input was the raw numbers that make up the screen data. Within weeks it was playing many of them at superhuman level. A sufficiently powerful algorithm could, perhaps, be given the goal of increasing a “score” of money in a bank account. It might have subroutines analogous to emotions and serotonin. But it might feel nothing on the inside.
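What “trying to increase a number” means, stripped to its bones, is something like the following sketch. It is far simpler than DeepMind’s Atari system, and the payoff values are invented for illustration; the point is that trial-and-error reward maximisation needs no inner feeling at all:

```python
# Toy sketch of reward maximisation by trial and error.
# The payoffs are invented; this is not DeepMind's system.
import random

random.seed(0)

true_payoffs = [0.2, 0.8, 0.5]   # hidden average reward of each action
estimates = [0.0, 0.0, 0.0]      # the agent's learned "numbers"
counts = [0, 0, 0]

for step in range(2000):
    if random.random() < 0.1:    # occasionally explore at random
        action = random.randrange(3)
    else:                        # otherwise exploit the best current estimate
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # Incrementally update the running average reward for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(3), key=lambda a: estimates[a])
print(best)  # the agent settles on the highest-paying action
```

The agent ends up reliably choosing the most rewarding action, and its “preferences” are nothing but three floating-point numbers. Whether a vastly scaled-up version of this would ever feel anything is precisely the open question.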

It is very likely that if humanity survives long enough, we will be replaced or augmented, at some point, by AIs. Those AIs might be supremely intelligent. But if they feel nothing, then the world might become, in the words of the Oxford philosopher Nick Bostrom, “a Disneyland with no children”.

— The Telegraph Group Limited, London, 2018

Tom Chivers is a freelance science writer.