
In 2014, the United States Army Research Laboratory published a report predicting what the battlefield of 2050 would look like. Not surprisingly, it was a scenario largely driven by technology, and the report described a sort of warfare most people associate with video games or science-fiction movies — combined forces of augmented or enhanced humans, robots operating in swarms, laser weapons, intelligence systems and cyberbots fighting in a highly contested information environment using spoofing, hacking, misinformation and other means of electronic warfare.

In one sense, this is nothing new. The way wars are fought has always changed with technology. But humans themselves don’t change so rapidly. As a retired Air Force major general with special interests in both technology and military ethics, I have a specific concern: that as new weapons technologies make soldiering more lethal, our soldiers will find it more difficult than ever to behave ethically and to abide by the long-established conventions regarding the rules of war.

As the nation with the most powerful military on earth, the United States is obligated to acknowledge these profound fundamental changes in the experience of soldiering, to confront them and to prepare to adapt.

On battlefields of the past, killing was intentional and intensely personal. In the future, the automated nature of combat, the artificial enhancement of soldiers and the speed and distances involved will threaten to undermine the warrior ethos. This applies not only to conduct between opposing soldiers, but among those fighting on the same side as well. In the past, soldiers took risks for one another. It is not at all clear how the introduction of autonomous machines will change that dynamic. Would a soldier be willing to entrust his life to a machine? Would a fellow soldier put his life or the lives of other soldiers in danger to save an important robot? Should he?

Many soldiers will no longer have to fear close contact and hand-to-hand combat because they will be able to deploy robots and unmanned vehicles at great ranges. However, fear acts as a modulator of behaviour, and by reducing it we will likely also remove constraints on unethical behaviour. If a soldier cannot see, hear and understand the context of a battlefield or a particular engagement, he is less likely to concern himself with decisions requiring such nuance.

Perhaps a machine can make a utilitarian calculation on proportionality of force, but can it make an empathetic decision? Would it be able to sense an enemy’s wavering determination, for instance, and call off an attack to prevent unnecessary loss of life? Commanders and troops on the ground, in contact with an enemy, have a “feel” for the complexities of the battlefield that cannot be reproduced by a machine.

Computers, artificial intelligence, robots and autonomous systems will create an environment too complex and fast for humans to keep up with, much less control.

Increasingly, warfare will exceed the capacity of the human senses to collect and process data. Department of Defence policy on autonomy in weapons systems says a human will always be part of the decision-making for lethal weapons, but the Pentagon nonetheless spent $2.5 billion (Dh9.2 billion) in 2017 on artificial intelligence, and every military service is funding work in autonomous systems. In 2019, the Defence Department expects to spend $9.4 billion on unmanned systems, with over $800 million for autonomy, human-machine teaming and swarm research.

Gradually, perhaps imperceptibly, automated systems will function so much more efficiently that humans will become mere bystanders. The soldier will become the slowest element in an engagement, or will simply become irrelevant. Adherence to the rules of war will become less relevant as well.

A separate set of ethical questions is raised by the technologies of human “enhancement” and augmentation, which include improving physical strength, stamina and pain tolerance, as well as using neurological implants and stimulation to restore brain function and enhance learning.

Can soldiers under the influence of behaviour-modifying drugs or electronics be held to account for their actions? If the soldier is using drugs to enhance his cognition or reduce his fear, what is the role of free will? Might a soldier who fears nothing unnecessarily place himself, his unit or innocent bystanders at risk? What about the impact of memory-altering drugs on the soldier’s sense of guilt, which might be important in decisions about unnecessary and superfluous suffering?

These are important decisions in war, and they form the basis for many of the tenets of “just war” theory. General Paul Selva, the vice-chairman of the Joint Chiefs of Staff, supports “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”

The work of revising and recasting these conventions should be taking place at the highest levels of government. So far, it hasn’t. The White House’s Select Committee on Artificial Intelligence, formed in May, has not even acknowledged the major ethical issues surrounding AI that have been very publicly raised by an increasing number of scientists and technology experts like Elon Musk, Bill Gates and Stephen Hawking.

While it is important that leaders openly recognise the critical nature of these issues, the Department of Defence needs to follow up on its 2012 directive on autonomy with guidelines for researchers and commanders. It should require that both researchers and military commanders question — throughout the development process and long before the systems are ready for deployment — how the systems will be used and whether that use might violate any of the laws of armed conflict or international humanitarian law.

Historically, the US has led the world in technology development, and thus our use of questionable weapons or methods will be well noted by others. Sadly, in the period after the September 11 attacks, the United States resorted to torture of enemy detainees. While most senior leaders have denounced the practice, the fact remains that the nation crossed an important moral threshold. Knowing that, future enemies — even civilised ones — may be less inhibited in employing the same methods against us. The same will be true for advanced technologies.

With new warfare technologies, we now have an opportunity to again demonstrate our leadership in human rights by ensuring that our young soldiers know how to use the new weapons in ethical and humane ways.

— New York Times News Service

Robert H. Latiff is a retired US Air Force major general and the author of Future War: Preparing for the New Global Battlefield.