An X-47B pilot-less drone combat aircraft is pictured as it flies over the aircraft carrier, the USS George H. W. Bush, after being launched from the ship in the Atlantic Ocean off the coast of Virginia, May 14, 2013. Image Credit: Reuters

It will be an interesting year for the X-47B. The new unmanned aircraft, developed by Northrop Grumman, will be put through its paces on a US warship to check that it can do everything existing aircraft can: take off and land safely, maintain a holding pattern, and “clear the deck” for the next aircraft in just 90 seconds.

This uber-drone is perhaps the most advanced autonomous machine in existence, possibly outshining Google’s driverless car. Both examples of artificial intelligence will have spurred the signatories of the open letter on Artificial Intelligence (AI) just published by the Future of Life Institute, a collective of influential thinkers on the existential risks to humanity.

Signed by scientists including Professor Stephen Hawking, as well as AI entrepreneurs such as Elon Musk, the letter warns that AI is marching so fast towards humanlike intelligence that we have not fully contemplated its potential for both good and ill. Along with an oblique exhortation to avoid “potential pitfalls”, it states: “AI systems must do what we want them to do.” A longer paper asks how citizens can flourish when many jobs, including in banking and finance, will be automated. Does the future promise unalloyed leisure, or just unemployment?

The letter is timely, given that the technology is outpacing society’s ability to deal with it. Who, for example, is liable if a driverless car crashes? This is unclear, even though four US states have given the legal go-ahead for testing on public roads, and the UK is likely to grant similar approval this year. Legal clarity is necessary for consumer confidence.

Ethics software

And what if a driverless car, in order to avoid a potentially fatal collision, has to mount the pavement? Should it be fitted with ethics software so that, given the choice between mowing down an adult and a child, it opts for the adult? These are the longer-term, more challenging questions posed by AI, and society, rather than Silicon Valley investors, should dictate how quickly they are answered. If we are to give robots morality, whose morals should be burnt into the machines? And what happens if that software is sabotaged?

The idea of a moral machine fascinates because, in an age when machines can already do much of what humans can (drive, fly aircraft, run, recognise images, process speech and even translate), there are still capacities, such as moral reasoning, that elude them.

Indeed, as automation increases, that omission might itself be immoral. For example, if drones become our battlefield emissaries, they may have to make decisions that human operators cannot anticipate or code for. And those decisions might be “moral” ones, such as which one of two lives to save. Scientists at Bristol Robotics Laboratory showed last year that a robot trained to save a person (in reality another robot) from falling down a hole was perfectly able to save one but struggled when faced with two people heading holewards. It sometimes dithered so long that it saved neither. Surely a robot programmed to save one life is better than a perplexed robot that can save none?

So artificial morality seems a natural, if controversial, next step. In fact, five universities, including Tufts and Yale, are researching whether robots can be taught right from wrong. But this is happening in a regulatory vacuum. Ryan Calo, a law professor at the University of Washington, has proposed a Federal Robotics Commission to oversee developments. As the industry flourishes, we must have some means of holding researchers, many of them in rich, private corporations, to account.

But any scrutiny should also challenge our assumptions about the superiority of human agency. Google Chauffeur might not instinctively avoid a pedestrian, but it will not fall asleep at the wheel. A robot soldier, equipped with a moral code but devoid of emotion, will never pull the trigger in fear, anger or panic. A more automated world might, in a strange way, be a more humane one.

— Financial Times

Anjana Ahuja is a science commentator and regular Financial Times contributor.