
Swapping faces

By Aimen Akhtar
May 29, 2021

In 2017, several cases emerged in which images of celebrities were manipulated for illicit purposes; a CEO was scammed out of $243,000 using a voice deepfake; a fabricated video of Gabonese President Ali Bongo was released to convince the public of his good health – and the list goes on. Such hybridization of human bodies and faces through deepfakes is likely to erode public trust in media and technology platforms.

The term 'deepfake' is a blend of 'deep learning' and 'fake', and the technology is a subset of artificial intelligence (AI). A deepfake algorithm uses deep machine learning to examine a person's facial expressions and movements, then superimposes the face of a target person onto video of a source person for falsification purposes. Although the technique has proven beneficial in education, art, self-expression, and various computer-vision problems, it is problematic because tampered images and videos are almost indistinguishable from real ones.
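The face-swap mechanism described above is commonly built as an autoencoder with one shared encoder and a separate decoder per identity: the encoder captures pose and expression, and decoding with the *other* person's decoder produces the swap. The sketch below is purely illustrative – the dimensions are toy-sized and the weights are random stand-ins for trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 grayscale "face" and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder compresses any face into a latent code that captures
# pose and expression; in a real system these weights would be trained.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1

# One decoder per identity renders a latent code in that person's likeness.
W_dec_source = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1
W_dec_target = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    """Map a face to a latent code (shared across identities)."""
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    """Render a latent code back into a face via an identity-specific decoder."""
    return np.tanh(W_dec @ code)

def swap(source_face):
    """The core deepfake trick: encode the source's expression and pose,
    then decode with the *target's* decoder to produce the fake frame."""
    return decode(encode(source_face), W_dec_target)

source_face = rng.standard_normal(FACE_DIM)
fake = swap(source_face)
print(fake.shape)  # (64,)
```

Applied frame by frame across a video, this is what lets a target's face follow a source actor's every movement.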

Footage can be tampered with through deepfakes by professionals and even by novices with minimal Photoshop or programming skills; a deepfake can even be produced from a single static image. The issue lies in the strikingly convincing results, which make it almost impossible to discern a real video from a fake one. Even high-tech computer programmes, including AI-based detectors, have proven ineffective at distinguishing a deepfake from a genuine video, down to the level of variance in pixels.

The inability to detect a deepfake has given way to numerous unprecedented problems. Deepfakes are a harbinger of future threats: exploiting political and religious tensions between countries by re-contextualizing and re-staging original videos into fabricated political and religious weapons, affecting electoral campaigns by manipulating public opinion, creating chaos in financial markets, and spreading unusually persuasive fake videos and speeches of both public figures and ordinary people.

The prospects of misinformation and hate speech spreading are also heightened; deepfakes are likely to stoke political and religious tensions and public chaos because of their ability to alter how media is interpreted. They are also blurring the line between fabricated and evidentiary videos used in courtrooms. The concern becomes more urgent because research has found that hoaxes and fabricated news tend to reach people far faster than authentic information.

In 2018, a video circulated on WhatsApp re-contextualized footage of a 2013 nerve-gas attack in Syria to persuade viewers in rural India of false claims of child kidnapping and violence. US intelligence agencies have warned that deepfakes could be weaponized to create political instability and chaos among the public and political figures: an American politician taking a bribe, a soldier exterminating civilians overseas, a US official purportedly confessing to a conspiracy – these are only a fraction of the scenarios in which deepfakes could be used to destabilize and spread fake information among the public.

Moreover, the ability of deepfakes to manipulate someone's words and facial expressions also helps suppress political opposition. Here, female politicians and activists have been prime targets, typically portrayed in sexualized ways. A fake illicit video of Rana Ayyub, a journalist who critiqued the excesses of the Indian government, was circulated via WhatsApp in 2018 by the Bharatiya Janata Party. This intense uptick in the creation of deepfake videos is leading to legal harms such as privacy violations, identity sabotage, and defamation.

Can technology platforms play a role in minimizing the spread of deepfakes? The non-stop pace of content transmission, and the platforms' mandate to provide open distribution systems, make content moderation and fact-checking a real challenge. The end-to-end encryption of certain applications, like WhatsApp, makes detecting fake videos even harder.

These events are driving an information apocalypse that will alter the ways in which individuals consume information, distorting reality and eroding people's trust in online technology platforms. This raises the question of whether the right to information also includes a right to misinformation. The proliferation of deepfakes has also triggered the need for laws and regulations prohibiting such behavior. However, any such regulation is likely to collide with the right to free speech and expression enshrined in the constitution of almost every country.

The writer is a Lahore-based lawyer.