
Summary
The deployment of AI in theatres of war is fast becoming one of the fiercest debates of our age. Its use in the Iran war has lent urgency to questions of autonomous weapons—especially the ethics of lethal action taken by algorithms. Where will all this lead us?
According to a recent Wall Street Journal article, one of the reasons “the U.S. and Israeli attacks on Iran have unfolded at unprecedented speed and precision” is “a cutting-edge weapon never before deployed on this scale: artificial intelligence.”
The growing use of AI in warfare has paralleled its adoption in everyday life. Although the international community began taking note of AI's influence on warfare back in 2012, the ongoing Iran war is being called 'the first AI war' by many, primarily because it is rewriting the rules of modern warfare and showcasing both the benefits and the drawbacks of AI in combat.
AI is being employed to accelerate decision-making, alter the economics of war and create new classes of targets, such as data centres. It also raises serious ethical questions. The Iran war is ushering in an era of AI-powered bombing that will be quicker than the speed of thought.
The future of AI in warfare has been studied for some time. Although talk of military AI can conjure images of killer robots, its biggest uses now are often off the battlefield, in time-consuming and labour-intensive fields like intelligence, mission planning and logistics, at speeds previously unimaginable. The US military already employs large language model (LLM)-based AI for image processing and tactical purposes.
US defence secretary Pete Hegseth has urged fast adoption of AI to create “an AI-first warfighting force.” To support military operations in Iran and Venezuela, the US military reportedly used Palantir’s Maven system in conjunction with Anthropic’s Claude AI tool for real-time targeting and target prioritization.
Real-time satellite images, often analysed by AI, played a significant role in directing military operations during the Russia-Ukraine war, where battlefield AI continues to be tested.
Interestingly, in his 2024 paper in the Georgetown Journal of International Affairs, Kristian Humble opines that the use of nuclear weapons is being replaced by the use of automated weapon systems. However, the international community has been “struggling to adapt to and regulate the use of automated weapons, which are rapidly changing the landscape of modern warfare.”
According to another 2024 paper, published in the Australian Journal of International Affairs by Toni Erskine and Steven E. Miller, four such complications are:
One, the displacement of human judgement in AI-driven resort-to-force decisions and possible implications for deterrence theory, and the unintended escalation of conflict;
two, possible implications of automation bias;
three, algorithmic opacity and its implications for democratic and international legitimacy;
and four, the likelihood that AI-enabled systems would exacerbate organizational decision-making pathologies.
The US government had threatened to drop Anthropic's models from its systems prior to its strikes on Iran, as the company did not allow its AI to be used for fully autonomous weapons or for surveillance of US citizens.
OpenAI, its competitor, quickly struck an agreement with the Pentagon. Nevertheless, the US military is still said to be using Anthropic's AI model to power its barrage of strikes against Iran because it shortens the 'kill chain', the term for the process running from target identification through legal approval to strike execution.
Clearly, the time required to plan complex strikes is being reduced by AI, a phenomenon called 'decision compression.' It is feared, however, that military and legal personnel may in future simply ratify strike plans generated by AI. According to Craig Jones of Newcastle University, UK, who specializes in the study of military targeting, "the current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent."
Naturally, the ethics of employing AI in warfare are being scrutinized. The use of AI to prioritize targets, however, is not new. OpenAI modified its rules in January 2024, eliminating a prohibition on the use of its technology in 'military and warfare.' Google dropped its commitment to keep its AI from being used for warfare and surveillance. Even Dario Amodei, CEO of Anthropic, has previously argued that the US should use AI technology to gain a military advantage over autocracies.
However, building military AI is tough partly because much of the available data for training is out of date or unclear. And, according to Craig Jones, “there is no evidence that AI lowers civilian deaths or wrongful targeting decisions, and it may be the opposite.” An article on 6 March in Nature noted that “rapid technological development is prompting urgent discussions on regulating the use and procurement of artificial intelligence for military use.”
Overall, the possibility that AI could be used to control lethal autonomous weapons without any human intervention is fiercely debated. The ethical requirement is that such weapons must be capable of distinguishing between military and civilian targets in accordance with current humanitarian law. 'Stop the use of AI in war until laws can be agreed' was the title of an editorial published in Nature on 10 March.
As with AI deployment in other fields, concerns linger that its use in warfare could do severe harm to humanity.
The author is professor of statistics, Indian Statistical Institute, Kolkata.
