UK regulators warn firms to limit risks from frontier AI models
Regulators warn frontier AI models could amplify cyber threats to financial stability and market integrity
British regulators are warning firms not to rely blindly on artificial intelligence (AI) and to prepare for its potential risks.
The UK's finance ministry, the Bank of England, and the Financial Conduct Authority emphasized on Friday that companies should take steps to plan for and mitigate risks from new artificial intelligence (AI) models.
"The cyber capabilities of current frontier AI models are already exceeding what a skilled practitioner could achieve, and at a significantly higher speed, greater scale, and lower cost," they said in a joint statement.
"These capabilities, if used maliciously, amplify cyber threats to firms' safety and soundness, customers, market integrity, and financial stability."
The news comes as Mythos has drawn warnings from cyber experts about its potential to supercharge complex cyberattacks, which could strain the banking industry's existing technology and defenses.
Last month, BoE governor Andrew Bailey said he saw major cybersecurity risks from Anthropic's Mythos product.
