

AI & Warfare: Ethical challenges in the age of autonomous weapons

International treaty on LAWS, overseen by UN, must ensure accountability




In 2024, the world grapples with multiple conflicts, each uniquely intensified by the burgeoning use of technology in warfare. A prominent example is the ongoing strife between Russia and Ukraine, where the internet is awash with footage showcasing the extensive use of unmanned vehicles, particularly low-cost drones, marking a new era in combat strategies.

Recently, during the Azerbaijan-Armenia conflict, drones played a pivotal role, with Azerbaijan extensively utilising advanced UAVs for reconnaissance, target acquisition, and precision strikes. This significantly shifted the battlefield dynamics.

In parallel, a significant escalation has occurred in the Red Sea region, where the Houthi rebels in Yemen have employed drones and missiles in their maritime strategies.

Notably, US and British naval forces have intercepted numerous Houthi drones and missiles targeting commercial shipping lanes in the southern Red Sea. This represents a considerable threat to international commerce, as these attacks disrupt critical shipping routes.


Lethal Autonomous Weapons Systems

The Houthis' drone operations include sophisticated unmanned aerial vehicles capable of travelling long distances, as well as unmanned surface vessels packed with explosives. These developments underline the increasing reliance on unmanned systems in contemporary conflicts.

These technologies are mere precursors to the more alarming trend towards the use of Lethal Autonomous Weapons Systems (LAWS) in modern warfare. LAWS represent a significant shift in the nature of warfare.

LAWS differ from traditional weapon systems in their advanced capability to select and engage targets without human intervention, relying on sophisticated algorithms and AI-driven decision-making.

In contrast, traditional autonomous weapons typically require human operators for critical targeting decisions, maintaining a level of human oversight and control in the use of lethal force.
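To make this distinction concrete, consider a purely illustrative sketch of the two control loops. All names, data structures, and thresholds below are hypothetical; no real weapon system's software is being described.

```python
# Purely illustrative sketch: contrasts a human-in-the-loop engagement
# model with a fully autonomous one. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str   # e.g. "combatant", "civilian", "unknown"
    confidence: float     # classifier confidence, between 0 and 1

def human_in_the_loop_engage(track: Track) -> bool:
    """Traditional model: the system detects and recommends,
    but a human operator makes the lethal decision."""
    print(f"Track {track.track_id}: {track.classification} "
          f"({track.confidence:.0%}) -- awaiting operator authorisation")
    return input("Authorise engagement? [y/N] ").strip().lower() == "y"

def autonomous_engage(track: Track, threshold: float = 0.9) -> bool:
    """LAWS model: the algorithm itself decides, with no human in the
    loop -- the crux of the ethical concern raised above."""
    return track.classification == "combatant" and track.confidence >= threshold

if __name__ == "__main__":
    t = Track(track_id=7, classification="combatant", confidence=0.93)
    print("Autonomous decision:", autonomous_engage(t))
```

The entire ethical debate turns on a single substitution: the operator's prompt is replaced by a threshold comparison.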

The absence of human judgement and emotional discernment in LAWS is a double-edged sword; while it eliminates hesitation and fear, it also eradicates moral and ethical considerations from the battlefield. This lack of human oversight raises grave concerns about the indiscriminate use of lethal force, especially in scenarios requiring nuanced decision-making.


Principles of proportionality

LAWS challenge the established norms of international humanitarian law and the principles of jus in bello (justice in war). One of the primary concerns is compliance with the principles of distinction and proportionality, fundamental tenets of the Geneva Conventions.

LAWS, reliant on algorithms for target identification, may lack the nuanced judgement necessary to distinguish between combatants and non-combatants, particularly in complex, fluid combat environments.

This deficiency not only risks violating the principle of non-combatant immunity but also raises questions about the system’s ability to assess the proportionality of force used, a calculation traditionally grounded in human judgement and context-awareness.
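To see why proportionality resists automation, consider how a machine would have to operationalise it: as an explicit numerical trade-off. The following deliberately simplistic sketch is hypothetical; the point is that the quantification itself is the problem.

```python
# Deliberately simplistic, hypothetical sketch of an algorithmic
# "proportionality" test. Jus in bello proportionality is a contextual
# human judgement; reducing it to a ratio, as here, is precisely what
# critics of LAWS object to.

def proportionality_check(expected_military_advantage: float,
                          expected_civilian_harm: float,
                          threshold: float = 1.0) -> bool:
    """Return True if the (crudely quantified) military advantage
    outweighs the expected civilian harm by the given margin."""
    if expected_civilian_harm == 0:
        return True
    return expected_military_advantage / expected_civilian_harm >= threshold

# The inputs beg the question: who assigns "3.0" to a military
# objective, or "2.0" to probable harm to civilians?
print(proportionality_check(expected_military_advantage=3.0,
                            expected_civilian_harm=2.0))
```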

The potential for LAWS to make erroneous or ethically dubious decisions in target engagement poses a significant risk of unlawful collateral damage and unjustified civilian casualties, contravening established laws of armed conflict.

Additionally, LAWS complicate the assignment of responsibility and accountability for actions taken on the battlefield, a concept integral to both legal and ethical frameworks in warfare. The principle of command responsibility, a cornerstone of the Rome Statute of the International Criminal Court, holds military leaders accountable for the actions of their subordinates.


However, with LAWS, attributing liability becomes murky. The decentralised nature of decision-making in autonomous systems challenges the traditional chain of command and raises the question of who bears responsibility — the programmer, the commander who deployed the system, the manufacturer, or the AI itself?

This ambiguity undermines the accountability mechanisms essential for enforcing war crimes legislation and ensuring justice for victims of unlawful acts of war.

Furthermore, the deployment of LAWS raises critical ethical dilemmas regarding the dehumanisation of warfare. Entrusting life-and-death decisions to machines strips away the human element intrinsic to ethical decision-making in combat scenarios.

It ignores the compassion, mercy, and moral judgement that often play a crucial role in human-controlled warfare, potentially leading to a colder, more mechanistic approach to conflict.

This detachment could desensitise societies to the horrors of war and lower the threshold for entering conflicts, leading to a dangerous escalation in warfare intensity. The prospect of an arms race in autonomous weapons technology further exacerbates these concerns, posing a threat to global stability and peace.


Overseeing the role of AI

International bodies like the United Nations, established to foster peace and control conflicts, are lagging in addressing the rapid advancement and deployment of these technologies. The failure of these institutions to effectively govern the use of AI in warfare is a stark warning: as the world grapples with the complex dynamics of modern conflicts, it must not fall behind in regulating and overseeing the role of AI in these confrontations.

The world urgently needs a global treaty akin to the Nuclear Non-Proliferation Treaty. This treaty must address the unique challenges posed by LAWS, ensuring strict compliance with international humanitarian law, particularly the principles of distinction and proportionality.

It should establish clear guidelines for developing, deploying, and using LAWS, ensuring accountability in the chain of command and maintaining the human element in decision-making processes. Such a treaty would not only safeguard against the unchecked proliferation of these systems but also serve as a bulwark against the erosion of ethical standards in warfare.

As the world contends with the multifaceted implications of AI in conflict, international bodies like the United Nations must lead the charge in formulating and enforcing this treaty, thus securing a future where technological advancements in warfare are balanced with the imperatives of human rights, justice, and global stability.

Aditya Sinha (X: @adityasinha004) is an Officer on Special Duty, Research, Economic Advisory Council to the Prime Minister of India. Views are personal.
