Anthropic seeks legal pause on Pentagon supply-chain risk decision: Here’s why
Firms linked to foreign adversaries are often designated as supply chain risks, a label that significantly limits their ability to partner with Defense Department contractors
Anthropic petitioned the Department of Defense and other federal agencies on Monday over the Trump administration’s decision to label the AI company a “supply chain risk.” The company sought an injunction from a US appeals court to pause the Pentagon’s declaration pending a judicial decision, arguing the designation could cost Anthropic billions of dollars in lost revenue.
The filing follows a week-long dispute over technology guardrails governing the US military’s use of Anthropic’s artificial intelligence tools. The lawsuit is the latest development in an ongoing rift between the Pentagon and one of the world’s leading AI companies, coming even as the White House works to bolster AI adoption across the government.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement.
According to Anthropic’s court filing, more than 100 enterprise customers have reached out to the company about the designation.
What actually happened in the Anthropic court filing?
The Pentagon issued the supply chain risk designation after negotiations to update its contract with Anthropic broke down over two red lines that Anthropic insisted the Defense Department respect.
The Pentagon, however, wants to use Anthropic’s AI for “all lawful purposes,” and says it cannot allow a private company to dictate how the military uses its tools in a national security emergency.
The Trump administration ordered federal agencies and military contractors to discontinue business with Anthropic after the company refused to let the Pentagon use its technology without restrictions. Trump said in a February 27 Truth Social post that Anthropic had made a “disastrous mistake” and accused the company of trying to dictate how the military operates.
Nonetheless, Anthropic’s profile rose amid the conflict. Its Claude AI app surpassed OpenAI’s ChatGPT on the iPhone App Store for the first time after the Pentagon sought to rescind its agreement with Anthropic.
