Pentagon and Anthropic clash over use of Claude AI in military operations

Claude is reportedly the only commercial AI currently cleared for use in classified US Defence Department systems

February 17, 2026

The US Defence Department is locked in a growing dispute with AI developer Anthropic about how the Pentagon can use the company’s Claude artificial intelligence models, raising questions about the future of defence AI partnerships and safety standards.

The disagreement centres on usage limits that Anthropic places on its technology and the Pentagon’s push for broader military applications of AI as part of its national security strategy.

What’s driving the dispute between the Pentagon and Anthropic?


The conflict stems from the Pentagon’s demand that AI models supplied to the US military be usable for all lawful purposes, including battlefield planning, weapons development and intelligence operations.

Anthropic has resisted this demand, maintaining strict ethics safeguards that prohibit Claude from being used in fully autonomous weapons or mass domestic surveillance, even in military settings.

Senior Defence Department officials have grown frustrated with these limitations. A Pentagon spokesperson said the department is reviewing its relationship with Anthropic and emphasised that its partners must support warfighters in any fight.

Claude, an advanced large language model developed by Anthropic, is reportedly the only commercial AI currently cleared for use in classified US Defence Department systems.

Reports indicate it was employed, through a partner contractor, during a high-profile operation in Venezuela earlier this year, a rare instance of a commercial AI tool being applied in a classified context. The exact role of Claude remains unclear, and Anthropic has declined to comment on specific operational use.

The dispute may have major implications for defence AI partnerships. The Pentagon is considering designating Anthropic a “supply chain risk”, a label normally reserved for foreign adversaries, which could force other defence contractors to end their ties with the company.
