
The dangers of AI

Proliferation of these tools has raised concerns about their misuse to spread disinformation

By Bakhtawar Iftikhar
February 10, 2024
This representational image shows the Artificial Intelligence (AI) logo. — Unsplash

We live in an era where it is common to see AI-generated content with political undertones on social media. Generative AI tools such as DALL-E and ChatGPT help create content in a format of one’s choosing: audio, video or text.

The proliferation of these tools has raised concerns about their misuse to spread disinformation. As per the World Economic Forum’s ‘Perception Survey of Global Risks by Severity’, AI-generated mis/disinformation ranks second in the current (2024) risk landscape.

However, this phenomenon is neither new nor exclusive to AI. Historically, disinformation has always been used to further political agendas. The Sophists, for example, acted as mouths-for-hire in Ancient Greece; the British used the printing press to ridicule Napoleon, portraying the military commander as unusually short; and sensational headlines spread anti-Spanish sentiment during the Spanish-American War.

In more recent history, before the advent of AI, doctored or photoshopped images were used to generate propaganda against rival political leaders. Such tactics have thus taken root as part of political campaigns.

Now that AI tools are widely available, the phenomenon continues via different means. Recent examples from the global ‘Year of Elections’ include a deepfake robocall of US President Biden aimed at suppressing New Hampshire Democrat voters. Similarly, in Pakistan, a proxy group reportedly used an AI-generated image of an opposition leader next to Adolf Hitler as propaganda against his party. Such incidents, in which lies or exaggerated claims are deliberately disseminated at scale, are only expected to increase.

AI does exacerbate the threat for two reasons: it increases the scale and speed of disinformation, and it makes disinformation look more realistic and thereby more persuasive. These characteristics have led experts to fear that AI will “supercharge online disinformation campaigns”.

If people believe fake news, there is a risk of societal and political polarization. When limited digital literacy is coupled with reliance on social media as a primary source of information, the barrage of AI-curated disinformation will fatally undermine the standing of truth in society. Even if people do not believe what they see or hear, it will still be harder for them to discern fact from fiction. The overflow of information may overstimulate their cognition and thus hinder their ability to make informed choices – be it as citizens or voters.

Therefore, it is necessary to exercise caution in this regard. Unravelling the complex web of half-truths created by AI is certainly a daunting task, and the challenge is two-fold: to address disinformation itself and to challenge the acceptability of propaganda as a modus operandi in political campaigns.

The first challenge can be addressed by focusing on the delivery system – social media. Governments must engage with social media companies, given their control over these digital public spaces, and encourage them to invest more in fact-checking and ‘Trust and Safety’ departments. These measures are especially crucial ahead of the elections taking place around the world in 2024. Even though significant progress has been made in this regard, Nighat Dad, a member of Meta’s Oversight Board, states that these companies give more importance to elections in Western democracies, while developing countries remain largely neglected.

Moreover, ‘societal antibodies’ may develop, whereby people become sceptical of accepting at face value the content they see. Individuals can also identify whether content is AI-generated by looking for giveaway signs such as repetitive patterns and short sentences. However, Andy Carvin, a managing editor and senior fellow at the Digital Forensic Research Lab (DFRLab), notes that as AI improves, “it’s only a matter of time before it becomes nearly impossible to tell the difference between what’s human-generated and what’s AI-generated.” For that scenario, investing in cryptographic techniques and watermarking to identify AI-generated content may prove effective.

However, these technical solutions are insufficient to address the larger socio-political problem. Thus, the second and more important task is to cultivate a healthy political culture in which propaganda has little acceptability and political campaigns are issue-based instead. If political actors focus on building informed and authentic narratives, they will not only combat disinformation but also enhance the civic capacity of the people – an indispensable element of healthy democracies.

Even though AI poses significant challenges in the battle against disinformation, it is merely a magnifying mirror in which we catch a vivid glimpse of our own shortcomings. By attributing these ills chiefly to AI, we deny ourselves agency and evade responsibility, using technology as a cover for human folly.


The writer is a research assistant at the Centre for Aerospace & Security Studies (CASS), Islamabad. She can be reached at: cass.thinkers@casstt.com