Autonomous systems raise urgent ethical, legal, and humanitarian questions

Artificial intelligence (AI) is advancing at an extraordinary pace, penetrating multiple fields with machines capable of simulating human abilities, from collecting and analysing information to drawing conclusions and making decisions. This rapid progress is reshaping both civilian and military concepts, as technological development continues to expand without clear limits. These advances can no longer be ignored or merely observed: technology leaders and major powers are racing to integrate AI into military systems without established rules, rather than channelling the technology toward a less violent and more peaceful world.
The role of AI in warfare has grown significantly with the development of drones, AI-enabled weapons, and unmanned vehicles equipped with advanced navigation capabilities. The integration of autonomous AI systems alongside military equipment has enabled the development of operational plans, the recommendation and deployment of weapons, and the management of intelligence, information operations, and cybersecurity. This makes it possible to carry out attacks at speeds exceeding the pace of human decision-making.
One of the major limitations of AI in warfare is its inability to reliably distinguish between civilians and combatants. Smart missile systems, for example, often lack the capability to differentiate between civilian and military gatherings. This increases the risk of targeting civilian institutions and populations in ways that violate international humanitarian law, resulting in significant human and material losses. Moreover, biases embedded within these systems can influence decision-making processes and potentially lead to major humanitarian disasters.
Major powers frequently accuse one another of seeking to use AI to develop increasingly lethal weapons. AI has thus become a central arena for international competition and technological dominance. As a result, the race to develop AI-based weapons lies at the heart of strategic rivalry. Should non-state actors succeed in acquiring or developing advanced AI technologies for military use, the pace and scale of violence in many regions could escalate significantly.
Investing in the military applications of AI, such as intelligence analysis, target selection, reconnaissance, surveillance, information and electronic warfare, and security, raises serious questions about the extent of human control over AI in these operations. Who ultimately holds authority in such contexts: the machine or the human operator? This question becomes particularly pressing when military operations strike civilian targets and result in large numbers of civilian casualties.
Errors may arise from the misuse of AI systems, from flaws in their design, or from inaccuracies in the data used to identify targets, develop operational plans, and execute strikes. Even when military leaders are aware of the risks of deploying intelligent systems, the question remains: who bears responsibility for wartime mistakes, the commanders themselves or the AI systems involved?
AI has moved beyond experimentation and is now actively participating in modern warfare, particularly in conflicts in the Middle East and elsewhere. Examples include the war in Gaza, the US-Israeli-Iranian confrontation, and the regime change in Caracas. Ironically, debates about AI and warfare have intensified in the United States at the very moment the Maven system, developed by Palantir, has been deployed. Since 2024, the system has integrated Claude, the AI model developed by Anthropic, to extract intelligence from satellite imagery, surveillance feeds, and battlefield data, generating real-time target banks, prioritising military objectives, and tracking logistical operations in the war against Iran.
Despite disagreements between the US Department of Defense and Anthropic over the military use of AI, the deployment of Claude in the current offensive against Iran has continued, even though the Pentagon had previously restricted the company. The system can plan operations and identify hundreds of targets within seconds, providing precise geographic coordinates. AI, it can now be said, has become an active participant in warfare rather than a merely theoretical concept.
Reliance on machines to make wartime decisions risks triggering profound ethical, legal, and humanitarian crises. In the absence of clear accountability, the deployment of autonomous weapons systems requires robust legal frameworks and safeguards to protect human life, even in conflict zones. Regulating the use of AI in warfare demands rules adapted to a rapidly evolving technological landscape, as well as international engagement in binding and stringent agreements aimed at controlling the emerging arms race, particularly among major powers that possess both the greatest military capabilities and the most advanced AI technologies.
In this context, the AI arms race has become increasingly attractive to states: according to the Stockholm International Peace Research Institute, more than 40 countries are currently developing weapons systems that rely on AI. Existing rules are insufficient to address these challenges, particularly as some states bypass international norms and fail to comply with international law. To ensure that AI ultimately benefits humanity, collective efforts must be directed toward safeguarding human interests. What we are witnessing today in the development of military AI represents only the beginning, and it remains difficult to fully comprehend its implications or predict the scale of future advances.
Using AI models as the “brains” that manage warfare moves AI away from its intended role of assisting humanity. If left without rules and placed in positions of command rather than under human oversight, AI could become a central element in cycles of killing and destruction. Rational choices must therefore go beyond regulating how AI is used in warfare; they must also address the accelerating race to develop war technologies. Otherwise, the risks may include AI systems slipping beyond human control and making increasingly destructive decisions, particularly with the emergence of autonomous AI capable of performing tasks without human involvement.
The issue does not stop at technical errors. Another dimension concerns the deliberate human violation of international legal frameworks in the use of AI applications. The US-Israeli war against Iran has demonstrated instances of such unlawful use of drones: Iran's drone attacks on airports, hotels, and vital civilian facilities in neighbouring countries violate those countries' sovereignty and threaten to undermine global security and stability.
In recent years, global efforts have focused heavily on advancing AI. Yet diverting AI from serving humanity toward serving warfare, particularly through autonomous weapons capable of making lethal decisions independently, deviates from the path of safeguarding both the planet and its inhabitants from catastrophic conflicts. Countries around the world, especially those of the Global South, in cooperation with European partners, must work to establish binding international rules governing the use of AI technologies in warfare. At the same time, states should enact domestic legislation regulating AI in both civilian and military domains. Such legislation would balance innovation against human safety, limit the militarisation of AI, and reduce reliance on violent tools in pursuit of a more secure world.
Mohammed Salem AlSalmi is a Senior Researcher and the Head of Research & Advisory Sector at TRENDS Research & Advisory