Inside the fight against online harassment

In the past year, we have observed deeply concerning patterns of targeted harassment against high-risk individuals

By Ayesha Sarwar Nooral | May 05, 2025
Representational image of a person pressing their hands against the glass. — AFP/File

“I cannot even begin to explain how much of a nightmare that time was. It felt hopeless,” said the father of two minor daughters who became victims of technology-facilitated image-based abuse (IBA) despite having no social media presence themselves.

The incident came as a complete shock to the family when they discovered manipulated photos of the girls being shared inappropriately on social media. While narrating the incident, the father said, “I couldn’t figure out how to reassure my daughters that it is going to be okay; they were completely terrified.”

It wasn’t just a violation of their privacy; it was a violation of their safety, dignity and childhood. After exhausting other avenues to no avail, the father reached out to the Cyber Harassment Helpline at the Digital Rights Foundation (DRF). Upon receiving the complaint, the helpline promptly took action and had the offending accounts removed through platform escalation.

Since its creation, the DRF has advocated for safe and accessible online spaces where everyone, especially women and vulnerable individuals, can exercise the right to self-expression without fear. Its Cyber Harassment Helpline aims to break the self-perpetuating cycle of online violence, serving as the first line of defence between those targeted by online harassment and those who attempt to silence them.

In 2024 alone, the Cyber Harassment Helpline received 3,171 cases, demonstrating the critical need for digital safety awareness and direct support. Among these were 124 cases involving minors, 113 involving journalists, 18 from gender minorities, nine from ethnic minorities, and 15 from religious minorities. These numbers illustrate the alarming and ongoing threats to vulnerable communities online. Over the past eight years, the helpline has responded to 20,020 cases, highlighting its role as a critical support system in the digital landscape.

The helpline equips survivors with the necessary resources to fight back and reclaim their agency, whether that involves legal advice to hold perpetrators responsible, digital security assistance to regain control of their online safety, or mental health support to recover from the psychological impact of ongoing abuse.

It fills the gap by stepping in where platforms, policies and law enforcement fall short, so that online harassment does not become a looming threat that makes people feel unsafe online. It strives to empower people to stand against harassment by supporting them even when all they want is someone to listen without judgment.

With each case it takes on, each survivor it supports, and each barrier it helps remove, the helpline is not just offering assistance; it is reclaiming digital spaces and redefining what safety, respect and dignity should mean online, particularly for high-risk individuals and marginalised groups.

In the past year, we have observed deeply concerning patterns of targeted harassment against high-risk individuals. Journalists and human rights defenders who advocate for truth and social justice face intimidation tactics online such as coordinated misogynistic harassment, impersonation and disinformation campaigns.

Individuals from groups marginalised on the basis of gender, religion or ethnicity often experience hate speech and dehumanisation. Their rights are overlooked even where the law protects them. They are vilified for their identity, faith and heritage, and painted as threats to the country by people who actively encourage and participate in online campaigns inciting violence against them. Even students and minors have found themselves at the centre of online abuse, especially through the rising trend of ‘confession pages’ on social media platforms, where personal data and images are circulated without consent.

All of these trends point towards the rise of technology-facilitated gender-based violence (TFGBV), the exploitation of individual and cultural biases against marginalised groups, and the use of online platforms to sustain and escalate cycles of violence against the vulnerable.

These individual behaviours online also reflect larger systemic gaps in how digital safety is understood and enforced. To promote a positive online experience, social media platforms have developed community guidelines that regulate content and promote safe, respectful interactions. These guidelines address hate speech and harassment, nudity and sexual exploitation, violence and threats, misinformation, impersonation and deceptive practices, and violations of privacy. Each platform implements them differently, but the overarching goal is the same: user safety.

Despite this, a number of challenges remain. Technology companies, including social media platforms, often operate with a Western-centric understanding of harm. Content that does not violate their community standards might still be deeply harmful or dangerous in local contexts. Posts in regional languages that target individuals, text embedded within images and videos, and even suggestive songs often go unchecked, and reports about them unresolved, due to a lack of linguistic and cultural competence in content moderation.

Similarly, official responses, such as those from local law enforcement, are inconsistent and often inaccessible. Many cities lack offices of the FIA’s (now NCCIA’s) cybercrime wing. Consequently, victim-survivors, especially women and children, are forced to navigate long, retraumatising legal procedures on top of having to depend on a male family member for mobility, finances, and access to the services in the first place.

A multi-pronged approach is necessary to ensure digital spaces are safe for everyone. Technology companies must adapt their content moderation policies to reflect cultural diversity and regional context. The helpline, as a ‘trusted partner’, regularly engages with social media companies to advocate for victim-survivors of TFGBV and other online harms and to present regional concerns. However, these platforms must invest more in these partnerships and respond with greater accuracy, speed and consistency.

We also work to bridge the gap between survivors and law enforcement through our all-women pro bono legal service. But law-enforcement authorities and policymakers need to play their own part and make complaint processes more accessible.

Law-enforcement officers should be gender-sensitised and able to respond to complaints received through online portals in a timely manner. Police stations should be authorised and trained to facilitate the registration of cybercrime complaints, safeguarding the rights of all citizens, particularly those with mobility issues.

Most importantly, we must foster a broader culture of empathy and accountability in online spaces, across educational and government institutions, social media and traditional media discourse, and within family and community structures. The goal isn’t just to fix problems after they happen, but to create systems and legislation that focus on prevention and reduce the frequency of such problems in the first place.


The writer is a dedicated clinical psychologist with extensive training in multiple domains of psychology. She advocates for mental health rights and accessible support for all.