AI in warfare: Unleashing Pandora's box

Col Rajneesh Singh | May 26, 2024, 10:33:11 IST

Although companies involved in the testing and manufacture of artificial intelligence-powered platforms claim that decisions to attack targets rest with humans, no one can deny the looming possibility of AI-powered combat robots bypassing human control

As per reports, AI is also being used on the battlefields in Ukraine. Image: REUTERS

In Greek mythology, Prometheus and his brother Epimetheus were tasked with creating all life on Earth, and Prometheus with creating humans in particular. One day, Zeus demanded that humans offer him an animal sacrifice, but Prometheus tricked him into accepting bones and fat. Angered, Zeus took fire away from humans, leaving them in the dark. Prometheus cared deeply for his creation and had taught humans agriculture, mathematics, medicine, and science. He stole fire back from Zeus and returned it to the humans, who used it not only for routine tasks like cooking food and keeping warm but also for creating and shaping a great many civilisations. When Zeus realised what had happened, he sent Pandora to Earth as a bride for Epimetheus. On reaching the Earth, Pandora opened her box, releasing all the evils, diseases, and toil that plague mankind; only hope remained inside.

Humans used fire, brought back by Prometheus, not only to create great civilisations but also to make weapons and bring war and misery to mankind.

Greek mythology reminds us that technology by itself is neither good nor evil; it is what humans do with it that makes it useful or brings grief. The world today faces technological and ethical dilemmas arising from advances in artificial intelligence (AI). Revisiting Greek mythology offers particularly interesting insights into the consequences of unrestricted advancement in AI and its employment in war.

AI is the new fire in the hands of mankind: it has the potential to cause great destruction to life and property, but it can also provide warmth, make life more comfortable, and take civilisation to new heights.

Reportedly, AI is being used on the battlefields in Ukraine and Gaza.

Information about the employment of AI in wars is shrouded in mystery because of its classified nature and the misinformation and disinformation surrounding it. This article attempts to underscore some of the capabilities of AI in warfare using information available in the open domain.

The Israeli-Palestinian publications +972 Magazine and Local Call have reportedly interviewed several sources with firsthand knowledge of the Israeli AI programme, offering insight into how AI has been used in the war to generate targets.

Israel is reportedly using an AI target-creation platform called The Gospel to produce “automated recommendations” for identifying and attacking targets. The system was first activated in 2021 during Israel’s 11-day war with Hamas, perhaps the world’s first AI-facilitated war. If the reports are to be believed, the Israeli Defence Forces (IDF) struck 11,000 targets in Gaza by November 2, 2023, and 90 per cent of the targets were AI-generated recommendations. For comparison, Israel struck 5,000 to 6,000 targets over the 51 days of the 2014 Gaza conflict, while in the ongoing war the IDF reportedly attacked 15,000 targets in the first 35 days.

In 2019, a new ‘targeting directorate’ was created to produce targets for the IDF, particularly the Israeli Air Force (IAF). In all previous wars and conflicts, the IAF hit every known target within the first few weeks and thereafter ran out of targets. In the ongoing war, which followed the horrific 7 October 2023 attack by Hamas, The Gospel (dubbed ‘Habsora’, בשורה) generated target information from a host of sources, including drone surveillance data, space-based assets, intercepted communications, and open-source information. The directorate used this data to generate brigade- and division-level targets.

‘Lavender’ is another Israeli AI programme that not only generates targets but is also involved in decision-making and approving targets for strikes. In addition to Lavender, there is a system called ‘Where’s Daddy?’, which is used to track targeted individuals and enable strikes when they enter their residences in Gaza. Thousands of civilian casualties, including women and children, occurred in the initial days of the war when the IDF struck designated targets in their residences.

The difference between The Gospel and Lavender lies in how the two programmes define their targets: The Gospel designates for destruction the buildings and other infrastructure from which targets operate, while Lavender identifies the people designated for neutralisation.

In all previous wars, the IDF would devote reasonable time, effort, and resources to determining the extent of likely collateral damage to innocent civilians if a designated target was hit. In the ongoing war, however, this verification has largely been abandoned in favour of automation. The +972 Magazine report highlights that the Israeli army has marked tens of thousands of Gazans as suspects for assassination using the Lavender system, with little human oversight and a permissive policy on casualties. The other targeting programme, ‘Where’s Daddy?’, reportedly has limitations and has falsely alerted IDF officers that targets were entering their houses, leading to strikes in which entire families were wiped out without the IDF neutralising its intended targets. This is an unacceptable situation.

AI is also being used on the battlefields in Ukraine.

As the Russia-Ukraine war enters its third year, the battlefields of Ukraine have become testing grounds for cutting-edge weapon technologies, including drones capable of carrying out tasks autonomously.

Although companies involved in the testing and manufacture of AI-powered platforms claim that decisions to attack targets rest with humans, no one can deny the looming possibility of AI-powered combat robots bypassing human control.

Drones have become so central to warfighting that in February 2024, Ukraine established a new branch of the military called the Unmanned Systems Forces, responsible for operations in all three domains: air, ground, and sea. Ukraine has also launched a government programme called Brave1, an inter-ministerial venture modelled on the US Defense Innovation Unit (DIU) and tasked with facilitating the military exploitation of commercial technologies. Over the past year, 700 technologies fielded through the Brave1 programme have been approved for use by the Ukrainian armed forces, and 40 have already been deployed in operations. The programme supports startups working in medicine, logistics, and cybersecurity, but its top priority is unmanned systems.

Ukraine is cognisant of its limitations in its war against Russia. Its goal is to ensure robots, not humans, fight on the battlefield.

Ukraine today may not have Arnold Schwarzenegger’s Terminator-style autonomous unmanned systems. However, Ukraine’s deputy prime minister and technology chief, Mykhailo Fedorov, claimed in a 2023 interview with The Associated Press that autonomous killer drones are the future and that Ukraine is working towards them.

Militaries around the world are racing to develop and deploy autonomous platforms with scant regard for ethical and humanitarian considerations. As the battlefields of Gaza and Ukraine have demonstrated, even untested technologies are being fielded in wars.

Technologically advanced countries have not made a serious attempt to build consensus on even basic questions, such as the rules for employing such weapon systems in populated areas or the guardrails required to prevent harm to civilian populations.

As this technology proliferates and more countries acquire these systems, it will become impossible to ban their use; however, restricting the employment of such technologies can prevent widespread collateral damage to civilian lives and property. It is not the first time the world has grappled with such a situation: after World War II, countries came together when confronted with the destructive potential of weapons of mass destruction (WMD).

AI has the potential to cause destruction on the scale of WMD. Its use in warfare raises moral and ethical predicaments that can be resolved only if nations come together before it is too late.

The author is a Research Fellow at MP-IDSA. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.
