Image Credit: Luis Vazquez/©Gulf News

In recent weeks, it seems like every other day I have encountered another article or media reference to robots and to our anxiety about their growing presence and role in our lives. Fears have ranged from “will they take away our jobs?” to “will they dominate and enslave us?” The latest piece I read was “Can you trust your robot?” (an ominous and paranoia-tinged title) by a robotics professor in the US. In that article, he explained why human-robot interactions lack the instinctive aspects that human-human relations naturally have: because “we do not understand each other”, and more specifically “we cannot tell each other’s intentions.”

In a number of media references at the end of last year, 2015 was identified as the year when artificial intelligence (AI) became one of our prime concerns about the future. Indeed, Stephen Hawking (who seems to be on every front page these days) warned that “thinking machines pose a threat to our very existence.” He and other prominent figures from science and industry (Elon Musk, Bill Gates, and others) signed an open letter on AI stressing that it can bring great benefits to humanity, but that it needs to be controlled in order to prevent “existential risks”.

Intelligent and dangerous robots have been a recurring feature of movies and TV series for many years. The recent movie Ex Machina presented us with a female robot too beautiful and fascinating to resist: while clearly a machine, she had a personality and behaviour that were not only indistinguishable from those of a human; she was more mentally attractive and seductive than almost any human being. The movie implied that the danger lies precisely in that resemblance and “just better”-ness, and indeed the rest of the story played out to disastrous effect.

We are now told that this is not just science fiction or paranoid anxiety; that future is already here. Last week, scientific reports circulated about a psychological experiment that showed humans getting aroused by touching robots’ behinds. Again, the message is that robots are now playing every human role imaginable: they can do everything humans can do, oftentimes even better, but — most importantly — they cannot and must not be trusted.

Robots being fundamentally untrustworthy is also a well-known and well-explored theme in our literature and popular entertainment. Androids (humanoid robots) have populated novels and movies for a long time, in a variety of roles and personalities, from the obedient mechanical or electronic worker to the rebel machine, especially in a future where our presence and activities in space have become more than an occasional mission. Who could forget HAL 9000, the shipboard computer that took over the spacecraft and the mission in 2001: A Space Odyssey? It later turned out that HAL had been programmed to lie to the astronauts about certain aspects of the mission, and that created inconsistencies and errors in its “brain”.

Big dilemma

In space, we have no choice but to use robots of varying degrees of sophistication. They are much cheaper than humans; they need no food, water, or oxygen; they can perform tasks that are too demanding, too boring, or too extended in time; and they are “expendable”. In fact, even here on Earth, robots have started to replace us in tasks that are too complex, too dangerous, or that require high and sustained concentration.

And therein lies the big dilemma: should we make robots as sophisticated as possible, to take full advantage of their potential, but then run the risk of serious errors occurring (as with HAL 9000), or keep them dumb and subservient, with no risk of rebellion but with very limited capabilities?

The article about “trusting your robot” that I referred to above mentioned that Nasa has deliberately not used the big robots it has sent to Mars to their fullest capabilities, because the engineers did not “trust” the machines enough to let them take decisions on their own.

Likewise, the drones that are currently used for military operations are controlled by a dozen military personnel to keep the decision-making fully in the hands of humans.

But this reluctance to let robots do all that they can do says more about us than about the robots. After all, we’ve made them and programmed them. Not “trusting” them simply means we are not trusting ourselves, our work, and our own behaviour. There is no doubt that in such fields (space, military, etc.), mistakes can have high financial and human costs. Hence, the principle of precaution must be duly exercised.

Like our search for, and fascination with, extra-terrestrials, the development of advanced and intelligent robots, and our interaction with them, reflects our deep anxieties and uncertainties about human society, present and future. Engineers and computer programmers must work hand in hand with psychologists and sociologists, not to mention ethicists and moral philosophers, to trace a path of safe and reasonable development for robotics and artificial intelligence. Humanity is at stake.

Nidhal Guessoum is a professor of physics and astronomy at the American University of Sharjah. You can follow him on Twitter at: www.twitter.com/@NidhalGuessoum.