Propaganda in the age of AI

Threat is clear: democracies, social stability, and institutional trust are at stake

By Editorial Board | June 21, 2025
An illustration of an AI robot with people walking on a street.— Reuters/File

In an age where war is waged not just on battlefields but across bandwidths, the battle for truth has become increasingly difficult to win. The Digital News Report 2025 by the Reuters Institute for the Study of Journalism documents a seismic shift in how news is consumed globally, with more and more people turning to generative AI chatbots to keep up with current events. For the first time, these AI-driven platforms are serving as primary sources of news for significant numbers of users, especially among younger demographics. This new reality brings with it not just convenience but an avalanche of challenges, most notably the proliferation of misinformation and disinformation. The World Economic Forum’s Global Risks Report 2025 highlighted this very concern, identifying false information – often bolstered by AI-generated content – as one of the most pressing global risks. The threat is clear: democracies, social stability, and institutional trust are at stake.

And nowhere is this more apparent than in the context of conflict. We witnessed this during the Indo-Pak conflict last month and more recently amid the ongoing Israel-Iran tensions. In both cases, doctored videos, AI-generated deepfakes and false narratives flooded social media platforms like X (formerly Twitter) and Facebook, often going viral before being verified or debunked. The consequences are real, sometimes deadly. In India, where mainstream news channels have often been accused of aligning more with political power than with journalistic integrity, the lines between news and propaganda have blurred alarmingly. However, this is not just an Indian problem. During the ongoing Israel-Iran conflict, even a fabricated claim suggesting Pakistan’s willingness to engage in nuclear retaliation was picked up by a UK-based media outlet, until it was publicly debunked by Foreign Minister Ishaq Dar. His decision to correct the record formally, on the floor of the House, sets an important precedent. We need facts to counter fakes – officially, swiftly and transparently.

Unfortunately, for too long, the default response to digital disinformation in many nations has been to shut down platforms or restrict access. While perhaps effective in the short term, this approach erodes public trust and hampers free expression. The more sustainable and democratic path is to counter falsehoods with verifiable truth and to engage with misinformation in the public domain. But governments and officials alone cannot carry this burden. Media organisations, fact-checkers, tech platforms and even AI developers must collectively commit to identifying, flagging and curbing the spread of false content. Education and media literacy are also crucial; citizens must be empowered to critically assess the information they consume. One thing has to be clear: truth must not be the first casualty of innovation. We have the technology and the tools to ensure that our societies remain informed, not manipulated. The age of AI may have blurred the boundaries of what is real, but our commitment to the truth must be unwavering.