Google, Microsoft, xAI to give US government early access to AI models for security review
Agreement follows a July 2025 Trump administration pledge to vet AI models, including those from Google, Microsoft and xAI, for 'national security risks'
Big tech companies including Google, Microsoft and xAI have agreed to give the US government early access to their AI models for security review.
The world's leading software and tech platforms have agreed to give early access to new artificial intelligence (AI) models for national security testing as U.S. officials grow alarmed by the hacking capabilities of Anthropic's newly unveiled Mythos.
The Center for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks.
The agreement fulfills a pledge the Trump administration made in July 2025 to partner with technology companies to vet their AI models for "national security risks."
Microsoft will work with U.S. government scientists to test AI systems "in ways that probe unexpected behaviors," the company said in a statement, adding that the two sides will develop shared datasets and workflows for testing the company's models. Microsoft signed a similar agreement with the UK's AI Security Institute, according to the statement.
Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, U.S. officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.
The development of advanced AI systems, including Anthropic's Mythos, has in recent weeks created a stir globally, including among U.S. officials and corporate America, over their ability to supercharge hackers.
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," CAISI Director Chris Fall said in a statement.
The move builds on previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the U.S. Artificial Intelligence Safety Institute. Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.
CAISI, which serves as the government's main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.
Developers frequently hand over versions of their models with safety guardrails stripped back so the center can probe for national security risks, the agency said.
Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department's classified networks as it seeks to broaden the range of AI providers working across the military.
The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military's use of its AI tools.