Ethical questions abound as wartime AI ramps up
Paris: Artificial intelligence's move into modern warfare is raising concerns about the risks of escalation and the role of humans in decision-making.
AI has shown itself to be faster but not necessarily safer or more ethical. UN Secretary General Antonio Guterres said Friday that he was "profoundly disturbed" by Israeli media reports that Israel has used AI to identify targets in Gaza, causing many civilian casualties.
Beyond the "Lavender" software in question and Israeli denials, here is a tour of the technological developments that are changing the face of war.
Three major uses
As seen with Lavender, AI can be particularly useful for selecting targets, with its high-speed algorithms processing huge amounts of data to identify potential threats.
But the results can only produce probabilities, with experts warning that mistakes are inevitable.
AI also has tactical uses. For example, swarms of drones - a capability China appears to be developing rapidly - will eventually be able to communicate with each other and adapt to previously assigned objectives.
At a strategic level, AI will produce models of battlefields and propose how to respond to attacks, maybe even including the use of nuclear weapons.
Thinking ever faster
"Imagine a full-scale conflict between two countries, and AI coming up with strategies and military plans and responding in real time to real situations," said Alessandro Accorsi at the International Crisis Group.
"The reaction time is significantly reduced. What a human can do in one hour, they can do it in a few seconds," he said.
Iron Dome, the Israeli air defence system, can detect an incoming projectile and determine what it is, where it is headed and how much damage it could cause.
"The operator has a minute to decide whether to destroy the rocket or not," said Laure de Roucy-Rochegonde from the French Institute of International Relations.
"Quite often it's a young recruit, who is twenty years old and not very up-to-speed about the laws of war. One can question how significant his control is," she said.
A worrying ethical void
With an arms race under way, and clouded by the usual opacity of war, AI may be moving onto the battlefield with much of the world not yet fully aware of the potential consequences.
Humans "take a decision which is a recommendation made by the machine, but without knowing the facts the machine used", de Roucy-Rochegonde said.
"Even if it is indeed a human who hits the button, this lack of knowledge, as well as the speed, means that his control over the decision is quite tenuous."
AI "is a black hole. We don't necessarily understand what it knows or thinks, or how it arrives at these results", said Ulrike Franke from the European Council on Foreign Relations.
"Why does AI suggest this or that target? Why does it give me this intelligence or that one? If we allow it to control a weapon, it's a real ethical question," she said.
Ukraine as laboratory
The United States has used algorithms, for example, in recent strikes against Houthi rebels in Yemen.
But "the real game changer is now - Ukraine has become a laboratory for the military use of AI", Accorsi said.
Since Russia invaded Ukraine in 2022 the protagonists have begun "developing and fielding AI solutions for tasks like geospatial intelligence, operations with unmanned systems, military training and cyberwarfare", said Vitaliy Goncharuk of the Defense AI Observatory (DAIO) at Hamburg's Helmut Schmidt University.
"Consequently the war in Ukraine has become the first conflict where both parties compete in and with AI, which has become a critical component of success," Goncharuk said.
One-upmanship and nuclear danger
The "Terminator", a killer robot over which man loses control, is a Hollywood fantasy. Yet the machine's cold calculations echo one fact about real AI systems: they incorporate neither a survival instinct nor doubt.
In January, researchers from four American institutes and universities published a study of how five large language models (systems similar to the generative software behind ChatGPT) behaved in conflict situations.
The study suggested a tendency "to develop an arms race dynamic, leading to larger conflicts and, in rare cases, to the deployment of nuclear weapons".
But major global powers want to make sure they win the military AI race, complicating efforts to regulate the field.
US President Joe Biden and China's President Xi Jinping agreed in November to put their experts to work on the subject.
Discussions also began 10 years ago at the United Nations, but have yet to produce concrete results.
"There are debates about what needs to be done in the civil AI industry," Accorsi said. "But very little when it comes to the defence industry."