Potential threats of Anthropic’s Claude Mythos Preview under scrutiny
UK financial regulators and cybersecurity officials have reportedly launched urgent discussions to evaluate potential threats posed by Anthropic’s latest AI model, Claude Mythos Preview, according to a report by the Financial Times. The Bank of England, the Financial Conduct Authority (FCA), and HM Treasury are collaborating with the National Cyber Security Centre (NCSC) to investigate vulnerabilities the model has exposed within critical IT infrastructure. To that end, key representatives from British banks, insurers, and stock exchanges are set to be briefed on these risks within the next two weeks. Anthropic describes the model’s deployment as part of a controlled initiative in which select organisations use the AI for defensive cybersecurity purposes.
For context, Anthropic has announced the development of Claude Mythos, a frontier model whose cybersecurity capabilities are so advanced that the company has deemed it a public safety risk.
Anthropic recently disclosed that the model has already identified thousands of major flaws across operating systems and web browsers. The UK’s response mirrors recent moves in the US, where Treasury Secretary Scott Bessent reportedly held similar talks with Wall Street banks regarding the model’s cyber risk potential. While Anthropic maintains the model is intended for defence, the speed and scale at which it identifies system weaknesses have prompted regulators to act swiftly to protect the stability of the financial sector.