Summary
The integration of AI into battlefield intelligence marks a critical inflection point, leading experts to call for global disarmament treaties to prevent the rise of fully autonomous ‘killer machines’. Mint explains the larger context and significance of using AI in warfare.
The use of foundational AI is sweeping across the current Middle East conflict, with the US deploying Claude, the artificial intelligence model built by startup Anthropic, through defence contractor Palantir. AI has reportedly sharpened US strikes, prompting debate over AI turning into killing machines in contravention of Big Tech's own pledges and constitutions. Is this an inflection point for AI policy worldwide? Mint explains.
How has the US engaged AI on the battlefield so far?
A report by US security research firm the Soufan Center underlined that the US Department of War (formerly the Department of Defense) used Anthropic's Claude, embedded in Palantir's battlefield intelligence suite Maven Smart System, to identify precise targets during its strikes on Iran, dubbed Operation Epic Fury. Its uses also extended to simulating battlefield situations and strategies. The DoW has also signed a deal with fellow AI maker OpenAI to use its GPT foundational models in future conflicts, though it isn't clear whether GPT has already been deployed, as most wartime tooling remains classified.
Did tech firms build AI models specifically aligned for war?
Not as per public disclosures. Mint's examination of US DoW statements and public documents shows that Palantir's battlefield intelligence system was the primary interface for Claude, which Palantir used commercially to orchestrate attacks that have so far killed over 1,200 individuals in Iran. Yet Anthropic's 84-page 'constitution' for Claude still states that the model will not be used for "actions that are inappropriately dangerous or harmful." OpenAI chief Sam Altman, in a 27 February statement, said the company's DoW deal includes "human responsibility for the use of force, including for autonomous weapon systems."
Is this the first use of AI on battlefields?
Not really. Weapon systems such as guided missiles and drones have long used machine learning, object recognition and computer vision for remote strikes. Criticism has resurfaced, however, because modern models can recognize increasingly sophisticated patterns at a rapid pace and identify strategic weaknesses in the real world. Massive compute availability further accelerates nations' ability to carry out surgical strikes on rivals.
What does this mean for the future of AI policies?
Geopolitics consultants have called for audits of tech companies building AI models, and possibly for globally binding policy agreements akin to the 1968 Nuclear Non-Proliferation Treaty. While OpenAI has claimed it has voluntarily drawn 'red lines' to prevent the use of AI for deliberate harm, conflicting reports on how contemporary AI, generative and computational alike, is actually being deployed have raised questions over whether nations leading in AI, such as the US, China and even India, should work toward disarmament clauses governing AI in war. This, though, may not prevent the use of AI in defensive battlefield intelligence.
Can AI really become ‘killer machines’, if not controlled?
The possibility is very real. Contemporary AI models learn visual patterns quickly; put to productive use, that capability can boost agriculture and sustainable land use. But the same pattern recognition can, at least theoretically, identify targets that humans flag as threats. Such use has already played out in the ongoing conflict, fuelling concerns about the rise of autonomously operating machines that can strike remotely and at will. With vast private funding and growing revenues, any firm building foundational AI could provide the basis for swarms of strike drones and aircraft, for any country.