You can easily trick AI into spreading lies, experts warn

Journalist shows how easy it is to manipulate AI search tools with a single fake article

By Pareesa Afreen | February 18, 2026

Artificial intelligence chatbots can be tricked into spreading misinformation, a journalist has shown, after exposing a simple loophole that leads AI search tools to repeat false claims.

The issue affects platforms such as ChatGPT, Google Gemini and AI Overviews. The test was conducted this month by a tech reporter who published a fake blog post to see how quickly AI would repeat it.

Within 24 hours, leading AI systems were citing the false article as fact. Experts warn that this AI misinformation loophole could affect health advice, finance decisions and local business searches.

The method is surprisingly simple. The journalist wrote a fabricated blog post claiming he was the world's best hot-dog-eating tech reporter. The event mentioned in the article did not exist, yet major AI chatbots repeated the claim when asked about it.

Unlike traditional search engines, which present a list of links, AI search tools summarise information directly for users. When they pull data from the web, they may rely on a single source. In this case, the AI tools cited the fake blog post as their evidence.

Lily Ray, Vice President of SEO Strategy and Research at Amsive, said AI companies are expanding faster than their ability to ensure accuracy. Cooper Quintin, Senior Staff Technologist at the Electronic Frontier Foundation, said the loophole could be used to scam people or damage reputations.

Harpreet Chatha, an SEO consultant and founder of Harps Digital, said anyone can publish a blog post ranking their own product first, and AI is likely to repeat it.

A recent study also found users are less likely to click through to source links when AI Overviews appear, increasing the risk that false claims are trusted without verification.