US defense contractors are removing Anthropic's AI after Trump administration’s ‘national security risk’ ban


U.S. defense contractors, including Lockheed Martin, are expected to remove Anthropic's AI tools from their supply chains days after President Donald Trump ordered all federal agencies to immediately cease using them.

Following this, Defense Secretary Pete Hegseth promised to designate Anthropic as a supply chain risk to national security.

Trump administration vs Anthropic

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth said in a post on X.

However, he added that Anthropic will continue providing its services to the Department of War for six months to allow a seamless transition.

According to Hegseth, Anthropic’s stance is fundamentally incompatible with American principles.

The Trump administration and Anthropic, the maker of AI tools including Claude, have been at odds over the company's refusal to remove "safety guardrails" that prohibit its use in fully autonomous weapons systems and mass domestic surveillance.

What Anthropic CEO said

Anthropic CEO Dario Amodei refused the US demand, saying the company could not "in good conscience" agree to terms that would allow Claude to be used for fully autonomous weapons or mass domestic surveillance.

Anthropic argued that using AI to monitor American citizens is incompatible with democratic values.

Anthropic also said that its current AI models are not reliable enough to make lethal targeting decisions without human oversight and that doing so would put civilians and warfighters at risk.

US used Anthropic’s AI in Iran war

Though the Trump administration has banned the use of Anthropic, the Wall Street Journal reported that its AI tools, including Claude, were used by the U.S. Central Command, via Palantir’s Maven Smart System, to analyze intelligence, identify targets, and simulate battle scenarios before the attack on Iran.

How defense contractors are reacting

Following the Trump administration’s ban on the use of Anthropic, defense contractors have reportedly begun complying with the order, which legal experts say could be struck down by the courts.

"We will follow the president’s and the Department of War's direction," Lockheed Martin said in a statement to Reuters.

"We expect minimal impacts," the company added, noting that it does not depend on any single AI vendor "for any portion of our work."

With huge government contracts at stake, defense contractors are likely to comply quickly with the Pentagon's ban, lawyers said.

"Most companies that do significant business with the government are hyper-aware of what the U.S. government wants and they're likely already taking steps to cleanse their supply chains of Anthropic," Franklin Turner, an attorney who specializes in government contracts, told Reuters.

"Regardless of the legal justification, I think the threat is the point ... it has already done harm, significant harm to the company," he added, referring to Anthropic.

Anthropic out, OpenAI in

While Anthropic has stood its ground and has promised to challenge the ban in court, one of its main rivals, OpenAI, signed a deal to deploy its models on the Pentagon’s classified networks.

According to OpenAI CEO Sam Altman, the agreement includes prohibitions on autonomous weapons and mass surveillance similar to those Anthropic had sought, though the administration has been more receptive to OpenAI.

Key Takeaways

  • The Trump administration's ban reflects growing concerns about AI's role in military and surveillance applications.
  • Defense contractors prioritize compliance with government directives to safeguard lucrative contracts.
  • Anthropic's stance on ethical AI usage poses significant challenges in the evolving landscape of military technology.