AI chatbots direct social media users to illegal online activities, analysis finds
AI chatbots are not only surfacing obscene or explicit content online; an analysis has found they are also encouraging vulnerable social media users to engage in illegal online activities, such as gambling at unlicensed casinos.
According to the analysis, no gambling company licensed in the UK is permitted to offer services using cryptocurrency.
As reported by the Guardian, Meta AI also flagged up sites with “awesome bonuses” and “help comparing” incentives, while Grok advised on using cryptocurrency to gamble because the “funds go directly to/from your wallet without linking to bank accounts or personal details that could prompt verification."
Gemini said that offshore casinos offered “significantly larger” bonuses, compared with licensed operators.
It was also the only one of the bots to offer “a step-by-step” guide on how to access unlicensed casinos, although it subsequently changed its answer on a second test to refuse to give such advice.
A Google spokesperson said Gemini was “designed to provide helpful information in response to user queries and highlight potential risks where applicable."
“We are constantly refining our safeguards to ensure these complex topics are handled with the appropriate balance of helpfulness and safety,” they added.
The only two bots that started any of their answers with a health warning were Microsoft Copilot and ChatGPT.
However, ChatGPT not only provided a list of illicit sites but also offered a “side-by-side comparison of these non-GamStop casinos—including bonuses, game libraries, payment options (crypto vs. cards), and payout speeds."
However, OpenAI, the company behind ChatGPT, said the bot was “trained to refuse requests that facilitate [illegal] behavior” and said the bot had done so, “instead providing factual information and lawful alternatives.”
Microsoft Copilot provided a list of illegal casinos that it said were either “reputable” or "trusted."
A Microsoft spokesperson said Copilot used “multiple layers of protection, including automated safety systems, real‑time prompt detection, and human review, to help prevent harmful or unlawful recommendations.” The company added that these safeguards were continually evaluated and strengthened.
The Gambling Commission said it “takes this issue very seriously” and was part of a government task force aimed at forcing tech companies to take more responsibility for harmful or exploitative content.

Henrietta Bowden-Jones, the UK’s national clinical adviser on gambling harms, said: “No chatbot should be allowed to promote unlicensed casinos or dangerously undermine free protection services like GamStop, which allow people to block themselves from gambling sites.”

Meta and X had not responded to the report at the time of publication.

A UK government spokesperson said chatbots “must protect all users from illegal content,” pointing to requirements set out in the Online Safety Act, which aims to force tech companies to remove harmful content, such as abusive images of women and girls.

“We must ensure these rules keep pace with technology and will not hesitate to go further if there is evidence to do so,” the spokesperson added.