This isn't a sci-fi film: Autonomous Weapons Systems could be a reality soon

Autonomous machines that, once activated, select their targets and go on a killing spree no longer belong exclusively to the domain of dystopian science fiction. The threat from such machines is real enough for around 100 states to have come together to debate a ban for three consecutive years now. Autonomous machines could change the vocabulary of warfare, just as gunpowder and nuclear weapons did when they entered the battlefield.

In April 2013, NGOs associated with successful efforts to ban landmines and cluster munitions came together in London and called on governments to negotiate a treaty preventing the development, deployment and use of what are known in popular parlance as ‘Killer Robots’. Governments, however, use a more sanitised term in their negotiations, calling them Lethal Autonomous Weapons Systems (LAWS).

Representational image of a drone. Reuters

In July 2015, some of the world’s leading Artificial Intelligence (AI) researchers and technologists, including Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and Professor Stephen Hawking, signed an open letter, which gathered nearly 21,000 signatures, asking for an outright ban on these autonomous weapons systems (AWS). “Autonomous weapons will become the Kalashnikovs of tomorrow,” the letter states. Apart from that, 14 Nobel Laureates, including Archbishop Desmond Tutu, Jody Williams, Shirin Ebadi and Muhammad Yunus, have called for a preemptive ban on AWS.

This call from civil society triggered meetings under the aegis of the UN’s Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, which held its first informal meeting of governments on the subject in 2014. The Convention bans or restricts the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.

This year, the informal meetings were held in Geneva in the week starting 11 April, under the chairpersonship of Michael Biontino, Germany’s ambassador to the Conference on Disarmament (CoD). They resulted in the adoption by consensus of a set of recommendations that will be discussed further at the fifth CCW Review Conference — a multilateral meeting held every five years — in December this year.

Do LAWS exist?

The premise on which the CCW negotiations seem to be based is that LAWS do not exist at present.

A number of delegations stressed during the meetings that while some existing systems are automatic, they cannot be termed autonomous, because they still operate under human supervision. Existing weapons systems, such as missiles, drones, mine hunters and land vehicles used in combat situations, still need a human being to press a button to apply force. However, an increasing number of countries, including China, Israel, Russia, South Korea, the UK and the US, are currently developing systems for greater autonomy in combat situations. The weapons industry has already developed semi-autonomous robots that are used for law enforcement, for instance in Brazil and the US.

“There are a number of countries who are, we know, experimenting at this stage (with AWS) but they won’t be transparent about it. So they are all saying that this is a futuristic weapon, that nobody knows enough about it—there’s a lot of hypocrisy, quite frankly,” Jayantha Dhanapala, former Sri Lankan diplomat and UN Under-Secretary-General for Disarmament Affairs, and currently president of the Nobel Prize-winning Pugwash Conferences on Science and World Affairs, told Firstpost.

Article 36, a UK-based organisation that works to promote public scrutiny over the development and use of weapons, counters, for instance, the UK’s statement that the country does not have and will not develop LAWS. It has said that by not explaining what constitutes human control, the UK has suggested a narrow and futuristic concept of LAWS that fails to address contemporary developments in AWS, while at the same time discouraging dialogue on existing weapons that do not operate with the necessary levels of human control.

The British Ministry of Defence is investing in the development of Taranis, nicknamed Raptor, a combat drone that has been testing autonomous capabilities including target location and engagement. This puts into question the UK’s statement that it will not develop LAWS.

“There is a second area where the whole is greater than its parts — increasingly autonomous systems working in concert with other increasingly autonomous systems. We need to not lose sight of how connected/interactive increasingly autonomous features might further attenuate human control or intent,” said Kerstin Vignard, deputy to the director at the UN Institute for Disarmament Research (UNIDIR), in a statement to the CCW.

UNIDIR also suggested moving away from the term LAWS and, at a minimum, reframing it as “Autonomy IN Weapons Systems”, which acknowledges the “varying levels of autonomy that might be applied to the different characteristics within the same object or weapon system”. This takes the conversation away from the rather abstruse debate over whether something is autonomous (fully autonomous, semi-autonomous or under supervised autonomy) or merely highly automatic, and focuses instead on the functions that raise concerns and challenges when increasing autonomy is applied to them.

Pivotal to arriving at a definition of LAWS, then, is the idea of “meaningful human control” (MHC), or “effective human judgment”, over the machine. One intervention during the discussion suggested simplifying the concepts and understanding AWS merely as a “lack of human control”. Some delegations, like France, argued that since LAWS do not exist and the technology is continuing to evolve, it would be very difficult, if not impossible, to define LAWS. India suggested a “CCW-specific definition, in the context of its objectives and purposes”. Switzerland suggested defining AWS as “weapons systems that are capable of carrying out tasks governed by international humanitarian law (IHL) in partial or full replacement of a human in the use of force, notably in the targeting cycle”. Pakistan opined that focusing on MHC definitions alone would not be appropriate, and that AWS should be defined as both lethal and autonomous.

The US, on the other hand, offered a more complex definition, stating that LAWS are systems that, once activated, can target and kill without further human intervention, but which are also designed to allow commanders and operators to exercise appropriate levels of human judgment over individual attacks.

It appeared that some delegations were making the matter of definitions more complex and problematic than it needs to be. The positions countries took during the discussions revealed a clear division between those already developing AWS and those that are not yet experimenting with them, or are only in the incipient stages of doing so.

Many other delegations stated that the lack of a widely accepted definition of LAWS was not an impediment to beginning substantial work on the matter. Steve Goose, executive director of Human Rights Watch’s Arms Division, said that definitions are usually the last element to be agreed upon, since they decide the strength of a law and its scope. In the Mine Ban Treaty and the Convention on Cluster Munitions, for instance, the definitions were elaborated at the last moment.

Richard Moyes of Article 36 recommended four key elements required for meaningful human control: predictable, reliable and transparent technology; accurate information for users on the outcome and context of use; timely human judgment and action, with the potential for timely intervention; and accountability to a certain standard.

LAWS defying laws

The separation of the body from the battlefield raises both moral and legal questions. Enthusiasts for developing LAWS argue that employing machines could lower casualties among soldiers. However, by making war less costly in soldiers’ lives, machines could also lower the threshold for resorting to force, encouraging more armed conflicts and thereby endangering more civilians. India supported this concern, arguing that a more “sanitised” war between machines lowers the threshold for using force. Proponents of killer robots have also argued that combat among machines would eliminate negative emotions like fear, anger and vengeance. On the other hand, machines are not endowed with positive faculties like compassion or sensitivity to another’s body language either, which could mean that a machine annihilates even a surrendering combatant. This goes against the principles of IHL as well as human rights law.

The idea of IHL is to ensure that the ways in which war is waged — its methods and means — are not unlimited. LAWS may directly corrode the three pillars on which the IHL regime stands: the principles of distinction, proportionality and precaution. Armed forces are supposed to distinguish between combatants and non-combatants, minimise civilian casualties and damage to civilian buildings, especially hospitals and schools, and not take action that is excessive in relation to the expected military gain.

All of these considerations require human judgment.

The US stated that it is important not to confuse moral and legal obligations. Christof Heyns, UN Special Rapporteur on extrajudicial, summary or arbitrary executions, countered the US argument, saying that human rights law incorporates ethical considerations and applies in armed conflicts as well as in law enforcement settings. The US also stated that “the adherence to ethical and moral norms will depend less on the inherent nature of a technology and more on its potential use by humans”. However, human rights experts have pointed out that the issue is not merely the misuse of such weapons. Heyns argued that control and accountability go together: if control cannot be exercised and the perpetrator cannot be held accountable, then this itself constitutes a violation of the right to life. Cuba, reiterating its call for a preemptive ban, said that LAWS would implicate different generations of human rights, including the right to peace and the right to self-determination. The development of AWS could also divert attention from peace and disarmament, in violation of Article 26 of the UN Charter.

Most delegations maintained that machines are not equipped to make the legal judgments required by IHL, especially in scenarios as fluid and cluttered as a battlefield. The UK was one of the few states, if not the only one, to argue that new international law is not necessary to prevent the development of LAWS.

Representational image. AP

An investigative series called ‘The Drone Papers’, published by The Intercept, revealed that during one five-month stretch, 90 percent of the people killed by US drone strikes were unintended targets.

If even human-supervised systems produce “unintended” consequences, the unpredictability, and the lack of accountability, of complex systems running without human supervision would be significantly greater. It is impossible to predict how swarms of such systems will behave when confronted with each other.

Moreover, when machines operate outside the purview of human judgment, a legal accountability vacuum arises in determining who is responsible for unintended outcomes and accidents. It is difficult enough to establish a chain of accountability for drones without adding an extra layer of distance, in both battlefields and policing situations.

"Any such use (of LAWS) must always observe an unequivocal accountability chain. This is of crucial importance for the use of any weapons system," said Thomas Göbel, head of Conventional Arms Control Division in the German government.

Ironically for the UK, which seems to be a strong proponent of AWS, the BBC reported on 18 April that a drone had hit a British Airways flight from Geneva as it came in to land. Authorities were trying to establish accountability.

John Borrie, chief of research at UNIDIR, said failures of such systems would be a question of ‘when’, not ‘if’, as part of what are called ‘normal risks’. In complex systems, factors and variables interact in unexpected ways. The outcome of interactions between the swarms of complex systems of warring parties belongs to the realm of “unknown unknowns” and defies predictability in any realistic sense.

LAWS also throw up problems for legal weapons reviews under Article 36 of the 1977 Additional Protocol I to the Geneva Conventions, which is meant to ensure compliance with IHL. Article 36 does not provide any specific legal guidelines for the review of weapons but merely refers to the existing rules and provisions of international law. Such reviews are mostly conducted by the respective defence ministries, together with the foreign affairs ministry and the armed forces. In reality, few states conduct weapons reviews that are transparent and conform thoroughly with humanitarian and human rights law and ethical concerns. Again, the UK delegation stated that the present legal weapons review process is sufficient for regulating LAWS. Such an opaque system may in fact encourage the development of new weapons that break international law.

Pablo Kalmanovitz of the Universidad de los Andes, Colombia, said that those most at risk from AWS, such as foreign civilians, have no role in the Article 36 weapons review.

“How can an abstract legal review determine ‘superfluous injury’ or ‘unnecessary suffering’ — two key aspects of IHL — if a weapon has not been fully developed and its effects are yet unknown?” asked Stefan Sohm, chief of the Strategic Foundations and Political Analyses Branch at the German Ministry of Defence.

Proliferating systems

Controlling the proliferation of AWS will be a challenge. There are real risks of proliferation, both from States to non-State actors and horizontally among States.

The development of such AWS could trigger an arms race with disastrous consequences for international stability. “As we know, some countries are planning swarm strategy to use in the future wars in the Asia-Pacific region (with) large autonomous weapons in maritime systems, in air and on land,” the Chinese delegation told the CCW.

“What if the military purchases such systems from civilian channels, open markets, and the designers and programmers don’t know that? Could they still be responsible for any problems?” the delegation added.

“The threshold for escalation is also brought much closer. It is a very, very important dimension and it has to be dealt with immediately,” Dhanapala told Firstpost.

Military applications of increasingly autonomous technologies in the maritime environment are already being researched, and some have been put in place. Operation Iraqi Freedom in 2003 saw the first use of autonomous underwater vehicles for mine warfare operations. Research is also ongoing into more autonomous functions for complex operations, like the US’ Sea Hunter, which will have to demonstrate compliance with maritime law.

“If we think there are legitimate defensive functions to be accorded to weapon systems with autonomous features then we need to define and clarify the circumstances under which they would, indeed, be legitimate. I am putting this as a question not as a conclusion,” said DB Venkatesh Varma, Ambassador of India to the CoD in Geneva.

As opposed to the gradual development of AWS under military control, there is a high possibility that an actor will take off-the-shelf, low-cost civilian technology and weaponise it. Moreover, every system that has been built, or ever will be built, is hackable.

“The technology is already at a point where it is capable of serving the needs of many people we would be concerned about, even if it’s not ready to be integrated into the formal military of the advanced nations,” said Andrew Fursman, co-founder and CEO of 1QBit, at a side-event to the CCW.

“This is not a discussion of what might be possible in the future or what a small number of rich nations could do. This is really a conversation about what anyone with $20,000 and the internet would be capable of doing in the course of a few weeks,” he added.

“The challenge for them (States) is to work expeditiously…and do exactly what we did with blinding lasers—ban a very dangerous category of weapons. It is better to keep the genie in the bottle rather than to have it escape and then try to put it back,” Dhanapala said.

The adopted set of recommendations will be delivered to the Review Conference, which will then consider setting up a group of governmental experts (GGE) that would meet next year, possibly to thrash out a negotiating mandate.

“Time is of the essence because artificial intelligence is improving every day, and the application of artificial intelligence to warfare is a very dangerous thing,” Dhanapala said.
