The rapid evolution of artificial intelligence has brought with it the dilemma of misuse. According to a safety watchdog, AI misuse has fuelled an alarming surge in child sexual abuse content found online in 2025.
The Internet Watch Foundation (IWF) collected 8,029 AI-generated images and videos of realistic child abuse material, representing a 260-fold surge in videos.
The severity of the material is stark: of the 8,029 items, 3,443 videos were classified as Category A, the term for the most severe material under UK law.
Only 43 per cent of videos were found to be non-AI-generated, underscoring the growing role of the technology in creating and spreading abusive content.
Kerry Smith, the chief executive of the IWF, said, “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.”
According to IWF analysts, offenders on the dark web are increasingly enthusiastic about how advances in AI are transforming the creation and manipulation of child sexual abuse material (CSAM).
There is significant interest in "agentic" systems—AI capable of executing complex tasks autonomously—which could further scale or automate their nefarious activities.
The UK government has authorised tech companies and child protection agencies to examine generative AI tools and ensure they have safeguards in place to prevent the creation of such disturbing content.
Last year, the government announced a ban on creating and distributing AI models that are designed to generate CSAM.
Smith also called for high safety standards to be built into technologies so that children can be protected from online abuse.
Polling published by the IWF also showed that eight out of ten UK adults want the UK government to introduce legislation ensuring the safety of AI systems.