The advent of Artificial Intelligence (AI) in warfare, manifested in the development of Lethal Autonomous Weapon Systems (LAWS), marks a pivotal shift in military strategy and ethics. By definition, LAWS are weapon systems capable of identifying, selecting, and engaging targets without human intervention, and they therefore pose acute moral and philosophical dilemmas. Picture a scene reminiscent of the film Terminator: autonomous machines, devoid of human empathy and moral reasoning, carrying out missions with lethal precision, their metallic forms looming over a battlefield shrouded in smoke and despair. This is no longer mere science fiction but a glimpse of the potential reality as LAWS advance, underscoring both their lethality and the palpable risks involved.

For philosophers in the tradition of Immanuel Kant and Martin Heidegger, the use of LAWS would raise two paramount concerns. First, the absence of human control signifies the loss of moral accountability and ethical reasoning in conflict, detaching the human element from the act of killing. Second, ceding control to AI brings the risk of unintended consequences, since machines cannot comprehend the nuances and complexities inherent in ethical and moral decision-making. The chief limitation of LAWS in conflict lies in their inability to distinguish between combatants and non-combatants and to calibrate proportionate responses in dynamic, evolving combat scenarios, potentially leading to catastrophic outcomes. The development and deployment of LAWS therefore demand a serious re-evaluation of the philosophical, ethical, and humanitarian dimensions of modern warfare.

Despite these known risks, there is no global consensus on banning LAWS. Media reports point to a significant development: the United States plans to propose international norms for military AI at the United Nations. There is a palpable need for a globally recognised treaty on the minimal and responsible use of AI in military operations and warfare, akin to the Nuclear Non-Proliferation Treaty (NPT). Such a treaty would not only establish legally binding norms and ethical standards but would also serve as a bastion against the uninhibited proliferation of autonomous weapons, preventing the escalation of unforeseen and uncontrollable consequences.

However, achieving international consensus on norms is notably difficult, let alone formalising such an agreement into a prohibitive treaty. Consensus on banning LAWS is elusive partly because such systems are inherently hard to define, and partly because of the divergent interpretations and interests of countries like the US and China. The US embeds ambiguous definitions of LAWS in its military doctrines, blurring the line between autonomy and automation and making such systems difficult to regulate legally. China, for its part, deliberately fosters ambiguity in defining LAWS in order to retain strategic flexibility in developing intelligent weaponry. By employing such semantic manoeuvres and exploiting the ambiguities surrounding autonomous systems, both nations are undermining global efforts to reach a clear, legally binding international regulation on LAWS, placing their geopolitical and military interests above international humanitarian concerns.
Last February, the Bureau of Arms Control, Verification and Compliance of the US Department of State released the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy”. While it provides a comprehensive framework emphasising adherence to international law and human oversight, it suffers from several shortcomings that could hinder its efficacy and applicability. The document lacks specificity in crucial areas such as the form of human control, methodologies for minimising bias, and criteria for testing, leaving much room for interpretation and potentially compromising the clarity and consistency of its implementation. It does not delineate enforcement mechanisms or address compliance violations, raising questions about accountability and the enforcement of the outlined principles. The declaration would also benefit substantially from a more explicit incorporation of ethical and moral considerations and transparency stipulations in the development and deployment of military AI, to address moral implications and foster trust. Its inherent vagueness and the absence of provisions for flexible implementation could inhibit its adaptability to the diverse and evolving nature of military AI capabilities and varied deployment contexts.

Furthermore, since 2018, António Guterres, the Secretary-General of the United Nations, has asserted that lethal autonomous weapon systems are morally repugnant and politically unacceptable, and has advocated banning them under international law. In his New Agenda for Peace, released in 2023, Guterres reinforced this stance, urging states to conclude, by 2026, a legally binding instrument to prohibit lethal autonomous weapon systems that operate without human oversight or control and that are incompatible with international humanitarian law, while regulating all other forms of autonomous weapon systems. Regrettably, efforts to extend the scope of the UN arms control agreement, the Convention on Certain Conventional Weapons (CCW), to encompass AI have been stymied, never moving beyond the debate over the definition of ‘human control’.

In 2019, there was a breakthrough: states reached consensus on 11 guiding principles, acknowledging that the development and use of LAWS are not exempt from existing legal frameworks and that international humanitarian law (IHL) applies in full to all weapons, including potential lethal autonomous ones. Adherence to IHL is crucial in its own right and serves as a fundamental criterion for evaluating the permissibility and potential humanitarian impact of such systems. Nevertheless, contention remains over the extent to which, and the manner in which, current IHL rules restrict the development and deployment of LAWS. A pivotal point of contention within the Group of Governmental Experts (GGE) discourse is determining the level and nature of human-machine interaction necessary to ensure compliance with IHL. As a SIPRI report also suggests, adherence to IHL and careful consideration of ethical and security concerns are pivotal in evaluating the permissibility of LAWS and the requisite level of human-machine interaction. But interpretations of what IHL requires can vary notably among states, particularly regarding the extent and nature of human-machine interaction needed for compliance.
Thus, states must spell out their ethical assumptions, deciding whether the nature of such interaction is dictated solely by the need to prevent unlawful outcomes or whether it is also imperative to preserve human agency and responsibility in fulfilling IHL obligations. This clarification is crucial not only for determining whether human-machine interaction is necessary irrespective of a weapon system’s features and context of use, but also for deciding whether the discussions under the CCW should extend to formulating norms that uphold human agency in executing IHL obligations more broadly.

Further, the guiding principles fall short of setting clear benchmarks or standards for determining when and how IHL and ethical considerations apply, particularly given the rapid pace of technological advancement. The emphasis on “human responsibility” (guiding principle (GP) b) and “human-machine interaction” (GP c) is open to varied interpretations, which can lead to inconsistent implementation across nations. The call for risk assessments and mitigation measures (GP g) is commendable, but without a detailed framework it remains a generic statement. Additionally, while the principles caution against anthropomorphising emerging technologies (GP i), they do not explicitly address the ethical challenges of AI decision-making in combat scenarios. Lastly, the emphasis on not hampering progress in the peaceful use of autonomous technologies (GP j) is valid, but the line between military application and peaceful use remains nebulous, potentially creating loopholes open to exploitation. The existing framework and guidelines on LAWS are therefore not enough.

Despite these challenges, an agreement prohibiting LAWS is crucial. Beyond the clear ethical imperatives, three further reasons support this necessity. First, the relentless pursuit of LAWS would inevitably set off a competition to build increasingly hazardous weaponry. This would not only skew the balance of power in favour of the nations that pioneer LAWS but also precipitate instability in already volatile regions, especially the Indo-Pacific. Second, the use of LAWS risks a spiral of mutual mistrust and strategic vulnerability. As the Offset X report underscores, the US strategy of exploiting the PLA’s reliance on AI for warfare opens a Pandora’s box: if the US seeks to undermine China’s AI systems, it is only logical for China to respond in kind, using similar techniques against the US. Such reciprocal AI warfare could set off a chain reaction, eroding strategic trust and escalating risks. A volatile environment in which nations constantly second-guess their own systems for fear of adversarial manipulation threatens global stability. A preventive ban on LAWS can halt this dangerous trajectory, reinforcing mutual trust and ensuring long-term international security. Finally, the rush to develop LAWS may result in the deployment of precarious systems, with consequences rippling across the globe. For example, Chinese advances in autonomous and AI-enabled weapons could disrupt the military equilibrium and heighten global threats, especially as tensions among powerful nations mount. The pursuit of technological supremacy might compel the deployment of unsafe, unproven, or unreliable weapon systems, intensifying risks and uncertainties in actual operational environments. The US move to propose norms for military AI should be seen against this backdrop.
However, it is not enough. The world needs to do more. Banning LAWS is the way forward.

The author is Officer on Special Duty, Research, Economic Advisory Council to the Prime Minister of India. Tweets @adityasinha004. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.