Are Facebook and Twitter failing us?

February 9, 2020

Social media platforms claim to have community guidelines in place, but selective enforcement remains the real challenge

Qurat-ul-Ain, a 29-year-old accountant, was harassed by a man who edited her images, made a fake profile under her name and blackmailed her into talking to him. While she did not give in to his demands, she was scared to ask others to report the photos. “I reported it to Facebook but the profile was not taken down, and I couldn’t ask anyone else to do it as well because that would mean spreading [these] photos within my circle, and [I] didn’t want anyone to see [them],” she says. The profile was eventually deleted and the man stopped messaging her. But the incident still haunts her.

In instances such as these, the onus for protecting the victim lies with social media platforms as much as it does with law enforcement agencies. Yet these platforms have done very little to take the kind of action needed to counter the abuse that thrives on every mainstream social media app. Their collective failure to deal with these issues reflects priorities that lie in profits rather than in the safety of users, whose data is constantly exploited. These corporations are businesses working for money, independent of the welfare of their consumers. For them, ‘to bring the world closer together’ only means more data from more people that can be stored, compiled, sorted and sold.

Every major social media platform has community guidelines that inform users about what is and is not acceptable on the platform, and tell content moderators what to delete and what to leave up. These guidelines are crucial because they determine whether the platform is safe for its users. The community policies of two major platforms, Facebook and Twitter, specifically address hate speech and spell out the kinds of language that will be removed. Despite this, Facebook has been found to profit from right-wing hateful content that specifically targets women of colour. A Guardian investigation found that Mehreen Faruqi, an Australian senator, fell victim to a right-wing network on Facebook that mobilised over 500,000 followers to attack her for speaking against racism in the Australian parliament. While Facebook took down the posts when The Guardian flagged them, the damage had already been done.

Similarly, Twitter has routinely refused to take down content on the grounds that it is ‘of public interest’. For instance, US President Donald Trump frequently posts threats of violence directed at other nations, media outlets and individuals. However, in January 2018, responding to calls to remove his tweets and suspend his account, Twitter said in a blog post, “Blocking a world leader from Twitter or removing their controversial tweets would hide important information people should be able to see and debate,” essentially announcing selective enforcement of policies that categorically prohibit violence and hate speech on the platform.

Regular users of these platforms constantly bear the brunt of this ineffective enforcement. On Facebook, there are countless closed groups hosted by Pakistanis that gain traffic and followers through sensationalism based on the objectification of women. A blogger recently posted an almost four-minute video drawing attention to how deep-rooted the issue is, and how desperate members of these groups are to gain access to what they call ‘leaked videos’ of women, public figures or otherwise.

There is a need for social media platforms to examine their role in countering such abuse before it does the damage it is intended to do.

Facebook deleted the group mentioned in the video, but only after it had been active for a long time. Compare this with how swiftly the so-called community guidelines are applied to take down content that is truly essential to democratic discourse in public spaces.

Just as Facebook consistently fails to protect its more than two billion users against cyber harassment, hate speech and trolling, Twitter has also done very little to protect its own. While the platform is widely used to express opinions, political and otherwise, and has helped users make connections, it has consistently failed to protect women from gender-based violence. And although it does have what it calls a ‘philosophy’ for implementing its rules, it does not specify how those rules are enforced.

The ambiguity in the process of specifying what is unacceptable has resulted in the accounts of Pakistani women being suspended for responding to their harassers, while the harassers continue to use theirs.

A 2018 Amnesty International report titled Toxic Twitter states, “Amnesty International requested that Twitter share disaggregated data about the company’s reporting process and response rate on three separate occasions but our requests were refused.”

Both Facebook and Twitter agree that context matters when they analyse reported content for suitable action. Yet they frequently fail to take down visibly disturbing content, often because they fail to understand the local context. While the circulation of explicit content is a huge issue on both platforms, a photo of a child who had died in an accident, shown with no pants on, stayed up despite multiple reports against it. On the other hand, a call for job applications was taken down because it encouraged women to apply. A video from October 2019 of a man from Sindh dying by suicide remains available on multiple platforms to this day; because he was speaking in Sindhi, the language barrier rendered content moderation ineffective.

Where social media has become a primary medium of communication, the corporations providing these platforms carry a considerable responsibility to protect users from hate, fascism, discrimination and violence. Selective access, unreasonable laws, violations of rights and the inadequate protection offered through policies put users at risk of violence that could be avoided if the implementation of state and corporate policies were unbiased, transparent and effective.


The writer is a digital rights advocate, and specialises in communications for development. She tweets at @hijakamran