A deepening dispute over how artificial intelligence should be used in modern warfare is now pushing the Pentagon towards an extraordinary step: treating a leading American AI firm as a security liability.
Defense Secretary Pete Hegseth is “close” to cutting business ties with Anthropic and designating the company a “supply chain risk”, a senior Pentagon official told Axios — a move that would effectively penalise not only Anthropic but also any contractor that relies on its technology.
The same official described the prospective decision to Axios in blunt terms: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
A punishment usually reserved for adversaries, now aimed at a US AI company
The "supply chain risk" designation the Pentagon is threatening to apply to Anthropic is rarely deployed against domestic firms and is more commonly associated with entities linked to hostile foreign powers. Applying it to Anthropic would represent a sharp escalation in the Trump administration's push to ensure AI systems used by the military are available without restrictive conditions.
It would also send an unmistakable signal to the wider technology sector: in the Defence Department’s view, national security partnerships require compliance with military operating standards — not the ethical red lines drawn by private companies.
Claude’s unique role inside classified networks makes the standoff unusually fraught
The dispute is complicated by a simple operational fact. Anthropic’s Claude is currently the only frontier AI model accessible in the US military’s classified systems, giving the firm a strategic foothold no competitor has yet matched.
Pentagon officials privately describe Claude as exceptionally effective in specialised government workflows. But the system’s presence inside sensitive networks has also heightened frustration within defence circles, where officials argue that Anthropic’s usage constraints are inconsistent with the realities of military planning.
As Axios reported on Friday, Claude was used during the Maduro raid in January — a detail that illustrates how quickly generative AI has migrated from experimentation into real-world operations.
The core conflict: whether the Pentagon must accept AI safeguards as a condition of access
At the centre of the confrontation lies a philosophical and legal dispute: whether an AI developer can dictate how its model is used once it becomes embedded in government systems.
Anthropic, led by chief executive Dario Amodei, has held firm on limiting certain categories of use. The company is reportedly prepared to loosen its terms, but wants assurances that Claude will not be used to enable mass surveillance of Americans, or to assist in developing weapons capable of firing without direct human involvement.
Pentagon officials argue that such restrictions are too rigid to be workable. They contend that modern military activity contains innumerable ambiguous scenarios, and that attempting to predefine boundaries in contractual language could constrain lawful operations in unpredictable ways.
In talks not only with Anthropic but also with OpenAI, Google and xAI, Pentagon negotiators have insisted on the right to use AI tools for “all lawful purposes.”
Pentagon frames review as a matter of warfighting readiness
The administration’s public language has become increasingly combative, portraying the issue as one of military preparedness rather than corporate governance.
Chief Pentagon spokesman Sean Parnell told Axios: “The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
The statement suggests the Pentagon sees the dispute not as a technical disagreement, but as a question of whether a private AI firm is willing to align itself fully with defence objectives.
Anthropic insists discussions remain constructive as it defends its red lines
Anthropic has tried to position itself as cooperative, while still arguing that certain restrictions are necessary given the power of modern AI systems.
An Anthropic spokesperson told Axios: “We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right.”
The spokesperson also reiterated the company’s commitment to national security work, emphasising that Claude was the first AI model deployed on classified networks — a claim that has become central to Anthropic’s standing in Washington.
The shadow issue: surveillance law was not built for generative AI
The confrontation is also exposing a regulatory vacuum. Existing US surveillance authorities were designed for earlier eras of data processing, not for AI systems capable of extracting patterns, building profiles and generating inferences at scale.
The Pentagon already has the power to collect large volumes of personal information, from social media activity to concealed carry permits. Critics have warned that AI could amplify those capabilities in ways that make oversight more difficult, while increasing the risk of civilians being targeted by automated analysis.
Anthropic’s position reflects these anxieties. Pentagon officials, however, argue that legal permissibility should be the decisive standard — and that the Defence Department cannot accept contractual conditions that pre-empt lawful missions.
A sweeping ripple effect for contractors if Anthropic is blacklisted
The most disruptive aspect of the threatened “supply chain risk” designation is not what it would do to Anthropic directly, but what it would demand of others.
If implemented, it would require the multitude of companies that supply the Pentagon to certify they do not use Claude internally. Given Anthropic’s commercial reach — the company has said eight of the 10 largest US firms use Claude — such a requirement could force widespread internal audits, rapid tool replacement, and costly compliance efforts across corporate America.
A relatively small contract, but a major political and strategic test
The Pentagon contract under threat is valued at up to $200 million. Against Anthropic’s reported $14 billion in annual revenue, the sum is not existential.
Yet officials familiar with the matter say the dispute is not fundamentally about money. It is about authority: whether the military will accept AI safeguards set by private labs, or whether labs will be compelled to accept the Defence Department’s interpretation of lawful use.
Pentagon may struggle to replace Claude despite claims rivals are ‘just behind’
A sudden break may also prove operationally inconvenient. A senior administration official told Axios that competing models “are just behind” in specialised government applications.
That gap could complicate any attempt to replace Claude quickly, particularly within classified environments where the technical and bureaucratic hurdles for new systems are high.
The signal to Silicon Valley: the Pentagon intends to dictate terms
The hardball approach towards Anthropic appears designed to shape negotiations with other AI developers.
Pentagon officials are in parallel talks with OpenAI, Google and xAI, all of which have agreed to remove safeguards for use in unclassified military systems. But none has yet achieved Claude’s level of integration in classified networks.
A senior administration official said the Pentagon expects the other firms to accept the “all lawful use” standard. Yet a source familiar with those discussions said much remains undecided.
