AI assistants distort news in nearly half their responses, EBU–BBC study finds
EBU and BBC urge greater transparency to prevent misinformation spread
Artificial intelligence assistants distort or misrepresent news content in almost half their responses, according to research released on Wednesday by the European Broadcasting Union (EBU) and the BBC.
The study reviewed 3,000 answers generated by leading AI-powered assistants to news-related questions. The systems, which include OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity, were tested for factual accuracy, source attribution and ability to separate fact from opinion.
The research covered 14 languages and found widespread inconsistencies, highlighting risks for users who rely on AI tools for news consumption. The findings come as media regulators and news organisations grow increasingly concerned about misinformation spread by generative AI models.
The EBU and BBC said the study underscores the need for transparency in how AI assistants process and present news content, warning that their growing popularity could blur lines between verified journalism and synthetic information.
Overall, 45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem, the research showed.
Reuters has contacted the companies for comment on the findings.
Google has previously said on Gemini's website that it welcomes feedback so it can continue to improve the assistant and make it more helpful to users.
OpenAI and Microsoft have previously said that hallucinations (instances in which an AI model generates incorrect or misleading information, often due to factors such as insufficient data) are an issue they are working to resolve.
Perplexity says on its website that one of its "Deep Research" modes achieves 93.9% accuracy on factuality.
Sourcing errors
A third of AI assistants' responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study.
Some 72% of Gemini's responses had significant sourcing issues, compared with fewer than 25% for the other assistants, it said.
Issues of accuracy were found in 20% of responses from all AI assistants studied, including outdated information, it said.
Examples cited by the study included Gemini incorrectly describing changes to a law on disposable vapes and ChatGPT identifying Pope Francis as the current Pope several months after his death.
Twenty-two public-service media organisations from 18 countries, including France, Germany, Spain, Ukraine, Britain and the United States, took part in the study.
With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.
"When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation," EBU Media Director Jean Philip De Tender said in a statement.
Some 7% of all online news consumers and 15% of those aged under 25 use AI assistants to get their news, according to the Reuters Institute’s Digital News Report 2025.
The report urged that AI companies be held accountable and improve how their assistants respond to news-related queries.