AI safety battle: Anthropic fires back at Pentagon after US military flags it ‘supply chain risk’
Defense Secretary Pete Hegseth flagged the company as a “supply chain risk to national security,” a designation that could derail the company's critical partnerships.
The standoff between Anthropic and the Pentagon over AI safety guardrails is nowhere near subsiding.
In a recent development, the US military has moved to make Anthropic a pariah by labeling it a “supply chain risk” after the company refused the Pentagon’s military AI proposal and stood firm on its AI ethics principles.
On Friday, the Trump administration ordered all federal agencies to stop using Anthropic’s AI models, just as OpenAI announced a partnership with the Department of War.
Moreover, Defense Secretary Pete Hegseth flagged the company as a “supply chain risk to national security,” a designation that could carry financial consequences and derail the company's critical partnerships.
Anthropic hit back at the Pentagon, calling the move “legally unsound” and a “dangerous precedent for any American company that negotiates with the government.”
Designating Anthropic a supply chain risk is an unprecedented move: the government typically reserves the label for foreign companies, never an American one.
Despite the punitive designation, Anthropic has refused to change its position on its safety principles, which it says ensure the safe deployment of AI tools.
Issues at the heart of the showdown
Anthropic rejected the Pentagon’s military AI offer for two reasons: the mass surveillance of American citizens and the management of fully autonomous lethal weapons.
According to CEO Dario Amodei, AI models are not yet reliable or efficient enough to manage fully autonomous weapons, and the lack of a regulatory framework for mass surveillance raises privacy concerns.
Supply chain risk: What it means for customers and partnerships
“Supply chain risk” is a high-stakes national security designation rather than a traditional logistics issue.
Taking to X, Pete Hegseth wrote, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
The decision could have wide-ranging consequences for companies under contract with both the Pentagon and Anthropic.
Essentially, the Secretary claimed that anyone working with the military would be banned from using Claude. According to Anthropic, the Secretary does not actually have the legal power to do that.
By law, the designation applies only to actual military projects; it cannot stop a company from using Claude for its own private business or for other clients.
So, if you are an individual customer or hold a commercial contract with Anthropic, your access to Claude is unaffected. For Department of War contractors, however, Claude access would be restricted for military-funded work.
