Is Anthropic in trouble? Pentagon AI war escalates as Trump halts Claude use

Anthropic refuses unrestricted AI deployment as Trump and Pentagon clash

Lekshmy Pavithran, Assistant Online Editor
Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026
AP

US President Donald Trump has ordered all federal agencies to immediately stop using Anthropic’s AI systems, escalating a confrontation with the startup over how its technology can be deployed by the US military.

The move follows a weekslong standoff after Anthropic refused to grant the Pentagon unrestricted rights to use its flagship model, Claude.

What is Anthropic?

Anthropic is a San Francisco–based AI startup founded in 2021 by former OpenAI researchers, including CEO Dario Amodei. The company develops advanced generative AI models, most notably Claude, designed for natural language reasoning, decision-making, and problem-solving. Anthropic has built a reputation as a “safety-first” AI lab, setting strict ethical guardrails — including blocking mass domestic surveillance and restricting fully autonomous weapons.

Its technology has attracted major government contracts, including a $200 million Pentagon deal, making Claude one of the first AI models cleared for classified US military networks. The company now finds itself at the centre of a high-stakes showdown over who controls AI, drawing global attention to the intersection of technology, national security, and ethics.

What triggered the fallout?

Anthropic signed a $200 million Pentagon contract last July to provide advanced AI for national security work. Claude became the first frontier AI model cleared for use on classified US government networks.

Tensions rose when the Department of Defense demanded that Anthropic sign a new agreement allowing Claude to be used for “all lawful purposes”.

The company refused.

What does the Pentagon actually want?

The Pentagon argues that once a supplier signs a defence contract, it cannot dictate how its product is used.

Defense Secretary Pete Hegseth reportedly compared Claude to military hardware, saying contractors do not get to control how jets or weapons systems are deployed.

In practical terms, the new demand would mean:

  • No oversight by Anthropic over specific use cases

  • No ability to review deployments in classified environments

  • No right to block certain military applications

  • Full operational control resting with the Pentagon

The department insists all use would remain lawful.

Why won’t Anthropic agree?

Anthropic is not rejecting military collaboration outright. It has worked closely with US national security agencies and has supported defence-related projects, including missile defence systems.

Chief executive Dario Amodei has repeatedly warned that the United States must stay ahead of China in the global AI race.

However, the company maintains two non-negotiable red lines:

  1. No mass domestic surveillance of American citizens

  2. No fully autonomous weapons that select and engage targets without human oversight

Anthropic allows Claude to assist in targeted strikes, foreign intelligence operations and drone missions — provided a human makes the final decision.

Amodei argues that current AI models are not reliable enough to power lethal autonomous weapons safely and that AI-driven mass surveillance poses serious risks to civil liberties.

The flashpoint: Venezuela operation

The dispute reportedly intensified after a January operation that led to the capture of Venezuelan President Nicolas Maduro. Claude was deployed via a platform operated by defence tech firm Palantir Technologies.

Internal questions within Anthropic about how the model was used allegedly alarmed defence officials, deepening mistrust.

Hegseth later signalled publicly that the Pentagon would not work with AI systems that restrict wartime deployment.

Trump’s intervention

Trump announced on Truth Social that he was directing every federal agency to “IMMEDIATELY CEASE” use of Anthropic’s technology.

He warned that if the company was not cooperative during the phase-out period, he would use the “full power of the presidency” with potential civil and criminal consequences.

The General Services Administration suspended Anthropic from its USAi chatbot platform and began removing the company from federal procurement systems.

Despite the immediate order, the Pentagon has reportedly been given up to six months to phase out Claude — underscoring how embedded the technology has become.

Legal threats and high-stakes pressure

The Pentagon set a 5:01pm deadline for compliance, threatening two major actions:

  • Invocation of the Defense Production Act, a Cold War-era law allowing the government to compel private companies to prioritise defence contracts

  • Designation of Anthropic as a “supply chain risk” — a label typically reserved for firms from adversarial nations

Hegseth has said that contractors doing business with the US military would be barred from commercial activity with Anthropic under such a designation.

Anthropic has vowed to challenge any supply chain risk label in court, calling the move intimidation and a dangerous precedent.

Industry backlash and solidarity

The confrontation has triggered rare public unity within Silicon Valley.

More than 500 employees from Google DeepMind and OpenAI signed an open letter urging companies not to yield to demands for domestic mass surveillance or autonomous killing.

OpenAI chief Sam Altman told staff his company also opposes mass surveillance and autonomous lethal weapons and is exploring Pentagon contracts that would preserve similar safeguards.

The episode has exposed a growing divide within tech. Some defence-aligned investors and founders argue AI companies should grant unrestricted military use, while others warn of eroding democratic safeguards.

The paradox at the centre

The US government’s position has drawn scrutiny because it appears contradictory:

  • Claude is described as so essential to national security that emergency powers may be invoked

  • Yet Anthropic is portrayed as a national security risk that must be cut off

As one former defence official put it, the technology is treated as existentially dangerous whether it is used or not.

What happens next?

If the Pentagon successfully compels compliance — through legal pressure, commercial blacklisting or emergency powers — it could establish a precedent that no US AI company can maintain independent safety restrictions against government demands.

If Anthropic prevails in court, it may reinforce the ability of private firms to set ethical guardrails even in defence contracts.

Beyond one company, the dispute signals a turning point in the relationship between Silicon Valley and Washington. Generative AI is largely funded and controlled by private industry, yet increasingly central to national security.

The outcome may determine not just Anthropic’s future — but who ultimately decides how advanced AI is used in warfare and surveillance.

With inputs from AFP, AP
