Deepfake technology: AI risks, detection and mitigation in cybersecurity

By Salis bin Perwaiz | Published July 06, 2025
Director-General Federal Investigation Agency (DGFIA) Dr Sanaullah Abbasi speaking during the seminar “Prevention of Cyber-Attacks” at FIA headquarters in Islamabad on November 9, 2021. — FIA

Former Federal Investigation Agency (FIA) chief Dr Sanaullah Abbasi has identified deepfake technology as one of the most pressing and complex challenges in cybersecurity.

Talking to The News on Saturday, he said that in the rapidly evolving landscape of cybersecurity, deepfake technology has emerged as one of the most pressing and complex challenges. Deepfakes refer to synthetic media generated using artificial intelligence (AI), particularly deep learning algorithms such as Generative Adversarial Networks (GANs), to create highly realistic images, videos, or audio recordings of individuals. These synthetic representations are often indistinguishable from authentic media to the naked eye or ear.
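To make the GAN idea concrete, here is a minimal, untrained PyTorch sketch of the two-network setup: a generator that maps random noise to synthetic samples and a discriminator trained to tell real from fake. All dimensions and the toy training data are assumptions for illustration only, not a production deepfake model.

```python
# Minimal GAN sketch (illustrative only): a generator learns to map random
# noise to fake samples while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # e.g. a flattened 28x28 image; sizes assumed

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(100):                      # toy loop on random "real" data
    real = torch.randn(32, DATA_DIM).tanh()  # stand-in for real images
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator: push real toward 1, fake toward 0.
    loss_d = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into outputting 1 for fakes.
    loss_g = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

As the two networks compete, the generator's outputs become progressively harder to distinguish from real data, which is exactly why mature deepfakes fool the naked eye.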


Dr Abbasi said that while deepfake technology has promising applications in entertainment, accessibility and education, its malicious use poses significant threats to individual privacy, organisational security, national integrity and democratic institutions. As these technologies become more accessible, the barrier to entry for cybercriminals lowers, increasing the risk of widespread abuse.

Cybersecurity risks

Dr Abbasi said one of the most widely known threats of deepfakes is their use in manipulating public opinion. Fabricated videos of political leaders, public figures, or events can be created to spread misinformation, influence elections, incite violence, or manipulate geopolitical narratives. For example, a fake video of a politician making controversial statements can go viral, creating unrest before the content is debunked.

Social engineering attacks

The former FIA chief said deepfakes have elevated the risk of social engineering attacks to unprecedented levels. Cybercriminals can impersonate CEOs or managers in video or audio calls to deceive employees into transferring funds or sharing confidential information, an advanced variant of Business Email Compromise (BEC) sometimes referred to as deepfake BEC. In one real-world case, criminals used an AI-generated voice to mimic a CEO and stole $243,000 from a company in the UK (Harwell, 2020).

Reputation damage

Dr Abbasi further said that the creation of fake explicit content using deepfake tools has led to cases of cyber harassment, blackmail, and reputational harm.

This is particularly concerning for individuals in the public eye, as malicious actors can superimpose a person’s face onto inappropriate content. Victims often struggle to prove the inauthenticity of the media, leading to severe psychological and reputational damage.

The “liar’s dividend” is a troubling phenomenon in which real, legitimate evidence can be dismissed as fake by those seeking to evade accountability. As the public becomes more aware of deepfakes, it becomes easier for wrongdoers to claim that authentic footage is fabricated, undermining digital evidence in journalism, law and governance.

National security

Dr Abbasi said deepfakes could be used for propaganda, psychological operations, and cyber warfare by nation-state actors. In conflict zones or during political crises, fake videos could be used to simulate military announcements, surrender statements, or fake news broadcasts, causing mass panic or strategic disruption.

Detection techniques

He added that there is a need for AI-based detection models, and researchers are developing machine learning models trained specifically to detect deepfakes. These models analyse facial movements, inconsistencies in blinking, lip-sync accuracy and head positioning. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are commonly used for frame-by-frame analysis of video inputs (Güera & Delp, 2018). A simplified sketch of such a detector appears below.
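The following PyTorch sketch shows the general CNN-plus-RNN shape of this approach: a CNN encodes each frame, an LSTM aggregates the sequence, and a final layer scores the clip. The layer sizes, clip dimensions and the DeepfakeDetector class itself are illustrative assumptions, not the architecture from the cited paper, and the model is untrained.

```python
# Sketch of a frame-by-frame detector in the spirit of Guera & Delp (2018):
# a CNN encodes each video frame, an LSTM aggregates the sequence, and a
# final layer scores the clip as real or fake. Architecture sizes are assumed.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # 1 = fake, 0 = real

    def forward(self, clips):                # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)          # last hidden state summarises clip
        return torch.sigmoid(self.head(h[-1]))

scores = DeepfakeDetector()(torch.randn(2, 8, 3, 64, 64))  # dummy 8-frame clips
print(scores)  # per-clip probability of being a deepfake
```

The recurrent layer matters because many deepfake artefacts, such as unnatural blinking or drifting lip sync, only show up across a sequence of frames rather than in any single frame.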

Another aspect is biological signal analysis. Human biological signals such as pulse rate, subtle muscle twitches and pupil dilation can be extracted from video and compared against known biological behaviour. Since GANs often struggle to replicate these small but consistent cues, their absence or inconsistency is a useful indicator of a deepfake (McDuff, 2018).
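A rough illustration of the pulse-based idea (one of several such signals) is sketched below in Python with NumPy. It assumes video already cropped to the face region and uses a synthetic clip for the demo; real systems use far more careful signal extraction than a simple green-channel average.

```python
# Sketch of remote pulse estimation (rPPG) of the kind McDuff describes:
# average the green channel over a (hypothetical) face region in each frame,
# then find the dominant frequency in the plausible heart-rate band. A clip
# with no coherent pulse signal is one clue that the face may be synthetic.
import numpy as np

FPS = 30

def estimate_pulse_hz(frames):
    """frames: array (n_frames, H, W, 3) of RGB video of a face region."""
    green = frames[..., 1].mean(axis=(1, 2))        # per-frame green mean
    green = green - green.mean()                    # remove DC component
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # ~42-240 bpm
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a 1.2 Hz (72 bpm) pulse buried in noise.
t = np.arange(300) / FPS
frames = np.random.rand(300, 8, 8, 3) * 0.1
frames[..., 1] += 0.05 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(f"estimated pulse: {estimate_pulse_hz(frames) * 60:.0f} bpm")
```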

Traditional forensic methods, meanwhile, include analysing image or video compression artefacts, frame-level discrepancies and metadata. Techniques such as Error Level Analysis (ELA) can highlight inconsistencies introduced during the manipulation process.
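As a concrete illustration of ELA, the following Python sketch uses the Pillow imaging library to recompress an image and amplify the per-pixel difference. The file names and the quality and scale parameters are arbitrary choices for the example, not fixed forensic settings.

```python
# Minimal Error Level Analysis (ELA) sketch using Pillow: re-save the image
# as JPEG at a fixed quality and amplify the per-pixel difference. Regions
# edited after the original compression tend to stand out with a different
# error level. File names here are placeholders.
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)     # recompress in memory
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved) # per-pixel error level
    return diff.point(lambda px: min(255, px * scale))  # amplify for viewing

error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

The intuition is that an unedited JPEG recompresses fairly uniformly, while regions pasted in or regenerated later show a visibly different error level in the amplified difference image.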

Blockchain technology can be used to track the provenance of digital media by timestamping and securely logging its origin. If widely adopted, this could provide an immutable audit trail for verifying whether content has been altered; a simplified sketch of the idea follows below.

Platforms like Twitter and YouTube have begun implementing crowdsourced fact-checking and AI-based moderation systems to flag or label potential deepfake content. While not perfect, these systems play a role in real-time detection and public awareness.
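Here is a minimal, hypothetical Python sketch of the hash-chained provenance logging mentioned above. It is not a real distributed ledger: the chain lives in memory, and the entry fields and function names are assumptions made for the example.

```python
# Simplified sketch of the provenance idea: hash a media file at publication
# time and append it to a hash-chained log, so any later alteration of the
# file (or of the log itself) becomes detectable. A real deployment would use
# a distributed ledger; this in-memory chain only illustrates the principle.
import hashlib, json, time

chain = [{"prev": "0" * 64, "media_sha256": None, "ts": 0}]  # genesis entry

def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_media(data: bytes):
    entry = {
        "prev": sha256_bytes(json.dumps(chain[-1], sort_keys=True).encode()),
        "media_sha256": sha256_bytes(data),
        "ts": time.time(),
    }
    chain.append(entry)
    return entry

def verify_media(data: bytes) -> bool:
    return any(e["media_sha256"] == sha256_bytes(data) for e in chain)

register_media(b"original video bytes")
print(verify_media(b"original video bytes"))   # True: provenance on record
print(verify_media(b"tampered video bytes"))   # False: no matching record
```

Because each entry commits to the hash of the previous one, tampering with any logged record would break every subsequent link, which is what gives the audit trail its immutability.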

Awareness campaigns

Dr Abbasi said one of the most important defence mechanisms is public education. Media literacy programmes that teach individuals how to identify signs of manipulated content can drastically reduce the spread and impact of deepfakes.

Training employees to verify identities before acting on sensitive instructions can also reduce social engineering risks.

Dr Abbasi was of the view that deepfake technology is a double-edged sword: innovative and dangerous in equal measure. While it holds promise in fields like cinema, gaming and accessibility, its exploitation for misinformation, fraud and manipulation represents a serious cybersecurity threat.

A comprehensive strategy that blends AI-based detection, public policy, education and organisational resilience is vital. The goal must not only be to identify deepfakes but to build a digital ecosystem where authenticity can be reliably verified and trust in digital media restored.
