
Artificial intelligence is everywhere. It helps drive your car, recognises your face at the airport’s immigration checkpoint, interprets your CT scans, reads your resume, traces your interactions on social media, and even vacuums your carpet.

As AI encroaches on every aspect of our lives, people watch with a mixture of fascination, bewilderment and fear.

AI’s overthrow of humanity is a familiar trope in popular culture, from Isaac Asimov’s “I, Robot” to the “Terminator” movies and “The Matrix.” Some scholars express similar concerns.

The Oxford philosopher Nick Bostrom worries that artificial intelligence poses a greater threat to humanity than climate change, and the best-selling historian Yuval Noah Harari warns that the history of tomorrow may belong to the cult of Dataism, in which humanity willingly merges itself into the flow of information controlled by artificial systems.


But in truth, these doomsday scenarios are nowhere in sight. In a critical evaluation of AI, the cognitive and computer scientists Gary Marcus and Ernest Davis show that the current state of the art is still quite far from true intelligence.

When asked to provide a list of restaurants that are not McDonald’s, Siri still spits out a list of local McDonald’s restaurants; she just doesn’t get the “not” part of the request.

AI shortcomings

AI can also fail to recognise familiar objects in unfamiliar contexts (a baby on the highway) or to separate associations from causes.

In short, AI still lacks “common sense.” This doesn’t bode well for the AI conspiracy. If your Tesla cannot reliably avoid an unfamiliar obstacle on the road, it is hard to see how it would take the initiative to hijack the vehicle.

Make no mistake: AI does pose many real dangers to us, to our personal privacy and security, and to the future of the economy. These are all very good reasons to watch it closely and regulate it aggressively.


Previous technological revolutions, whether driven by steam, electricity or atomic energy, have raised similar challenges. Yet in popular opinion, the AI risk is both greater than those and different in kind.

People don’t merely worry that the new technology could cause accidents or fall into the wrong hands. With AI, people worry that it will acquire autonomous agency and outsmart and overthrow its human masters. The question is why.

In fact, humanity’s worry about being conquered by omnipotent, inanimate, man-made artefacts is much older than computer technology. In the 19th century, Mary Shelley’s Dr. Frankenstein created a humanoid monster who promptly rebelled.

Hundreds of years earlier, there was the story of the golem, an automaton created out of river clay and brought to life by kabbalistic magic.

Predictably, the golem rebelled, not unlike Adam in Genesis, who was likewise created out of dust and brought to life when God breathed his spirit into Adam’s nostrils.

And then there is our fascination with tales of zombies, corpses that are reanimated through witchcraft. Tales like these suggest that our fear of AI arises not from AI itself, but from the human mind.

Psychological distinction

This fear emanates from the psychological distinction we draw between mind and matter. If you saw a ball start rolling all by itself, you’d be astonished. But you wouldn’t be the least bit surprised to see me spontaneously rise from my seat on the couch and head toward the refrigerator.

That is because we instinctively interpret the actions of physical objects, like balls, and living agents, like people, according to different sets of principles. In our intuitive psychology, objects like balls always obey the laws of physics: they move only by contact with other objects.

People, in contrast, are agents who have minds of their own, which endow them with knowledge, beliefs and goals that motivate them to move of their own accord. We thus ascribe human actions not to external material forces but to internal mental states.

Of course, most modern adults know that thought occurs in the physical brain. But deep down, we feel otherwise. Our unconscious intuitive psychology causes us to believe that thinking is free from the physical constraints on matter.

Extensive psychological testing shows that this is true for people in all kinds of societies. The psychologist Paul Bloom suggests that intuitively, all people are dualists, believing that mind and matter are entirely distinct.

AI violates this bedrock belief. Siri and Roomba are man-made artefacts, but they exhibit some of the same intelligent behaviour that we typically ascribe to living agents.

Age of Siri

Their acts, like ours, are impelled by information (thinking), but their thinking arises from silicon, metal, plastic and glass. While thinking minds, animacy and agency all go hand in hand in our intuitive psychology, Siri demonstrates that these properties can be severed: they think, but they are mindless; they are inanimate but semi-autonomous.

People don’t tolerate this cognitive dissonance for very long. When we are faced with a fundamental challenge to our core beliefs, we tend to stick to our guns. Rather than revising our assumptions to match the facts, we tend to bend reality to fit our assumptions, especially when our world view is at stake.

So rather than admitting the possibility that machines endowed with AI can think, we ascribe to them immaterial mind and agency, and once we do, our view of AI shifts from faithful servant to rebellious menace.

That shift is internal to us and is entirely predictable. Indeed, the dissonance presented by a golem’s very existence, a mix of matter and mind, is frightening. And since people conflate fear with menace, they project it onto the golem, which is seen as rebellious and threatening.

Thus, the power and timelessness of the AI takeover narrative arise directly from our core, from a cognitive principle that seems to be part of human nature.

While none of this proves that the “robot rebellion” is impossible, it would be a mistake to ignore our own preset beliefs that contribute to these fears. As the ancient Greeks long ago observed, our blindness to our own psyches can exact a heavy toll.

When we focus so much of our attention on improbable scenarios, we run the risk of ignoring other problems posed by AI that are pressing and preventable.

Before we can give those very real dangers the attention they deserve, we should rein in our irrational fears that arise from within.

Iris Berent, a professor of psychology at Northeastern University, is author of “The Blind Storyteller: How We Reason About Human Nature.”

Los Angeles Times