Deepfake fraud goes industrial: How AI scams are scaling globally in 2026

UK consumers lost £9.4bn to AI scams in the nine months to November 2025

By Aqsa Qaddus Tahir
|
February 06, 2026

Deepfake and AI-based fraud have grown exponentially in recent years, leading to the “industrialization” of scams.

Deepfake fraud is no longer a sophisticated outlier; it is an automated industry that is eroding trust in the digital landscape.


According to an analysis published by the AI Incident Database, deepfake technology has moved from a “proof of concept” niche to inexpensive, mass-produced tools used for high-volume fraud.

High-profile figures, including journalists, CEOs, and politicians, have fallen victim to personalized scams and deepfake videos.

Using these fabricated videos and misused AI tools, hackers trick people into transferring money and promote investment scams, a tactic known as “impersonation for profit.”

Examples abound. In 2025, a finance officer at a Singaporean multinational was deceived into paying $500,000 to scammers during a deepfake video call with what appeared to be the company’s executives.

According to Experian’s 2026 Future of Fraud Forecast, the top threat to companies is “machine-to-machine” mayhem, in which cybercriminals exploit legitimate AI bots and blend them with malicious bots designed specifically for fraud.

By one estimate, UK consumers lost £9.4bn to AI scams in the nine months to November 2025.

In 2025, the US Federal Trade Commission reported that consumers lost more than $12.5 billion to fraud.

Sixty percent of companies reported a 25 percent uptick in financial losses.

In another survey released in July 2025 and cited by Experian, 72 percent of business and technology leaders consider AI-enabled fraud and deepfakes the top “operational challenge” of 2026.

Zero-barrier entry

According to Simon Mylius, an MIT researcher who works on a project linked to the AI Incident Database, anyone can now generate convincing fake content in bulk, driving a surge in reported cyber incidents. Fake-content production, targeted manipulation, and scams have reached a point where there is effectively no barrier to entry and no meaningful oversight.

Fred Heiding, a Harvard researcher studying AI-powered scams, said: “The scale is changing. It’s becoming so cheap, almost anyone can use it now. The models are getting really good – they’re becoming much faster than most experts think.”

The ‘ghost’ employee threat

A disturbing new frontier of fraud exacerbates the situation: scammers use AI avatars to interview for remote engineering jobs, then collect salaries and steal company secrets.

Last year, the FBI issued multiple warnings about North Korean operatives using deepfakes to infiltrate hundreds of US companies as remote IT workers, sending their salaries back to the regime.

Given the rapidly evolving nature of AI-enabled threats, experts warn that as video quality improves, a “complete lack of trust” in digital interactions will become society’s biggest pain point.

Emerging threats on the horizon

According to Experian’s forecast, fraud will not be limited to AI models; cybercriminals will exploit other avenues as AI integration grows.

For instance, smart home devices such as virtual assistants, smart locks, and security systems can be exploited through security loopholes.

Web cloning will let hackers copy legitimate websites to stage attacks, while human-like scambots with emotional literacy can honeytrap victims and deceive them convincingly.
