A bot’s scoop?

By Editorial Board
July 09, 2025

An AI-generated image of a newsroom in which a newscaster is seated. —OpenAI/File

Can the art of news reporting be automated? The question has preoccupied software developers worldwide, prompting them to consider various ways of integrating artificial intelligence (AI) into the newsroom. The end consumers (the readers), however, are not thrilled about the prospect. The 2025 Digital News Report, released by the Reuters Institute for the Study of Journalism, reveals that most readers believe AI may make news gathering and reporting cheaper, but far less trustworthy than traditional news outlets; the study reports a net score of -18 for AI's impact on news trustworthiness. Yet this is not necessarily good news for traditional outlets (print or TV). According to the same report, social media and video networks (54 per cent) have, for the first time, overtaken television (50 per cent) as the most-used news source in the US, a trend driven by younger demographics. The report notes that this shift is amplified in politically polarised environments, where populist leaders and their supporters increasingly bypass the mainstream press.

Criticism of media outlets in the US intensified during US President Donald Trump's first tenure, when he accused news channels critical of his policies of publishing fake news. This drove a shift in audience, with some people switching to digital channels, where, as evidence suggests, a chaotic mix of fake news and genuine reporting dominates people's timelines. A similar pattern was observed in Pakistan when former prime minister Imran Khan advised that if people stopped reading newspapers and watching talk shows, everything would be alright. The final nail in journalism's proverbial coffin was the sudden rise of AI chatbots in 2022, which prompted many people to put their pressing questions to these bots instead. In recent days, observers have pointed to the growing dependency of our political parties' supporters on Grok, the chatbot launched by X (formerly Twitter). This political point-scoring exercise has produced unverified and, in some cases, completely fabricated responses (some based on posts on X), pointing to a potentially dangerous future if AI's use in journalism is not studied properly.

AI chatbots are data-hungry and naturally biased towards the data they are trained on. They also lack critical thinking, relying instead on statistical prediction. It is somewhat comforting that most news readers are sceptical of the technology. But an undeniable reality is that they cannot stop AI integration, which is slowly creeping into everything: from AI-generated results in search engines to ready-to-assist chatbots on messaging platforms, there is no escaping this digital reality. For journalists, it is time to think about how AI can be integrated into news production ethically. The answer to social media chatbots should be an LLM trained by journalists who understand the nuances of news reporting. Some news consumers may have doubts about AI integration in newsrooms, but they will eventually turn to chatbots for their news. If journalists and media houses let this chance pass, we will be welcoming a world where half-truths reign supreme and facts are royally ignored.