Why British banks’ push for agentic AI is worrying UK regulators
According to research firm Gartner, by the end of 2026 around 40 percent of all financial services firms will use AI agents
British banks are now locked in a race to deploy agentic AI to make decisions and take autonomous action.
According to Britain’s financial watchdog, the race risks sidelining the interests of retail customers.
The UK has recently seen a boom in agentic AI deployment, reshaping how people manage their budgets and investments.
Unlike generative artificial intelligence, agentic AI can plan, set goals, make decisions, and carry out tasks, offering promising opportunities for companies and banks.
According to Financial Conduct Authority (FCA) Chief Data Officer Jessica Rasu, consumer-facing applications are expected to surge in the market by early next year.
Speaking to Reuters, Rasu said, “Everyone recognises that agentic AI introduces new risks, primarily because of ... the ability for something to be done at pace.”
The FCA has also raised concerns about governance and financial stability as interactions with agentic AI increase.
To counter these risks, the FCA will apply existing rules, such as the Consumer Duty and the Senior Managers Regime, to hold senior managers accountable for wrongdoing and to protect customers’ interests.
According to Ram Gopal, professor of information systems at Warwick Business School, agentic AI handles simple tasks well but struggles with complex ones.
"These AI agents could react to identical market signals, rapidly shifting deposits or funds between accounts, dramatically accelerating the probability and pace of a bank run, for example," said Suchitra Nair, head of Deloitte's EMEA Centre for Regulatory Strategy.
AI hallucinations, in which systems produce responses that are factually untrue, also pose serious challenges.
“How well do they know the person they're 'talking' to? That's a real problem in an advisory context,” commented Taylor Wessing lawyer Martin Dowdall.
