Britain woos Anthropic after US defence clash
UK moves to attract AI firm as tensions with Washington open new opportunity
Britain is seeking to persuade the AI company Anthropic to expand its presence in London after the firm's major clash with the United States over the use of artificial intelligence in defence.
The UK government is offering incentives, including office expansion and a possible dual stock listing, aiming to capitalise on the fallout between Anthropic and the US Defence Department.
UK officials are drafting a set of proposals to present to Anthropic during the company's next visit, when they are due to meet its chief executive. Prime Minister Keir Starmer has thrown his weight behind the effort, sending a clear signal of political backing for drawing the AI company closer to the UK's flourishing technology sector.
The proposals include expanding Anthropic's London office and pursuing a dual stock listing to cement the company's financial ties to the British market.
The reasons for this conflict go all the way back to mid-2025, when Anthropic was the first advanced AI company to be granted access to the classified computer networks of the US government.
By the end of 2025, American defence officials were requesting expanded capabilities for Claude, including its use in surveillance activities and autonomous military operations.
Anthropic refused to strip essential safety mechanisms from its AI models. The company argued its systems were not reliable enough for lethal decision-making and should not be used for large-scale domestic monitoring.
In March 2026, the US government designated the firm a national security supply-chain risk, excluding it from government and defence contracts. Private contractors were also barred from working with the company.
Anthropic filed a legal challenge against the decision, arguing the designation was punitive. A US judge granted the company temporary relief from the designation, allowing it to continue operating until legal proceedings conclude.
The conflict has turned into an important case study that shows how AI developers and governments disagree about appropriate defence applications for their technology.
