Moderating ‘content’ in democracies

November 24, 2019

This year, Pakistan was at the forefront of seeking content restrictions, accounting for 31 percent of the global share, Facebook's transparency report reveals

This past week the social media behemoth, Facebook, released its transparency report for the period January to June 2019. The report, issued twice a year, contains information on the enforcement of the company's community standards and intellectual property policies, as well as its compliance with legal requests from governments across the world.

The report reflects an overall trend towards increased regulation of online spaces, also charted in the recent Freedom on the Net 2019 report, which records a global decline in internet freedom over the past year. According to Facebook's own figures, requests by states are at an all-time high, with a 16 percent increase in government requests for user data. Furthermore, there were 67 disruptions of Facebook services across fifteen countries, up from 53 disruptions in the previous reporting period. The vast majority of these disruptions were in India, translating into 8 weeks, 2 days and 22 hours of shutdowns, particularly in the context of Jammu and Kashmir.

Interestingly, the volume of content restrictions based on local law decreased globally by 50 percent, from 35,972 to 17,807. This decrease should not be read as a sign of declining internet censorship; it is largely the product of an unusual spike in the second half of 2018, when Facebook restricted some 16,600 items in compliance with an order of the Delhi High Court. Worryingly, the Government of Pakistan was this year at the forefront of content restrictions, accounting for 31 percent of the global share with 5,690 restrictions in the first half of 2019.

The content moderation model employed by Facebook is based on a set of "community standards" developed by the company, which cover a wide range of topics such as safety, hate speech, violent behaviour and child pornography. Facebook uses a combination of artificial intelligence and human moderators to review content against these standards. Additionally, there is another layer of moderation based on the local laws of individual states: where content does not violate the community standards but is believed to contravene local laws, governments can make content restriction requests. If Facebook complies with such a request, access to the content is limited in the specific country where it is deemed to violate local law.

In Pakistan, these requests are made by the Pakistan Telecommunication Authority (PTA) under the criteria laid down in Section 37 of the Prevention of Electronic Crimes Act 2016 (PECA). User data requests, meanwhile, are made to Facebook by law enforcement agencies pursuant to legal proceedings; in Pakistan, only the Federal Investigation Agency (FIA), as the designated investigation body under Section 27 of PECA, can make them. Social media companies are not bound to comply with data requests in the absence of a mutual legal assistance treaty (MLAT) between the requesting state and the country in which the company is located. Nevertheless, Facebook complied with 51 percent of such requests during the reporting period.

The ability of social media companies to decide which content is removed or allowed, and which content is prioritised, translates into enormous political and social power. Aspects of this power are not unprecedented, since privately owned channels and newspapers make publication decisions all the time, but the scale and nature of such regulation have fundamentally changed: decisions are now taken primarily by artificial intelligence that sifts through millions of posts to determine whether community standards have been violated. Nor is content moderation restricted to the simple removal of content; it also happens through the thousands of micro-decisions, based on private user data, that shape our social media timelines and determine which content gets our attention.

Digital rights activists around the world have been critical of this model of content moderation, calling it undemocratic. The community standards are grounded in a model of self-regulation, which raises democratic concerns because the average citizen has very little say in these policies. Facebook invokes an amorphous concept of "community" as the basis for its content moderation decisions, with little consideration of the fact that a community of over two billion users is extremely diverse. This approach has resulted in serious blind spots: for instance, Facebook's real-name policy has a disparate impact on transgender users, whose gender identity is often not reflected in state-issued identification due to structural limitations.

Additionally, there is immense ambiguity regarding the nature of social media platforms. While Facebook actively curates timelines according to its community standards and algorithms, it has taken a hands-off approach to fact-checking political ads. These platforms get to pick and choose when to step in, and at the heart of the matter is the question of whether they are publishers or merely platforms. The answer determines the level of responsibility that can be assigned to them for their decisions, i.e. intermediary liability. Social media companies have been reluctant to subject themselves to the same regulation as broadcasters and publishers, preferring to be seen as neutral "pipes" of information despite the extent of content moderation that takes place.

On the flip side, social media companies have been deferential to country-specific laws and requests by governments. While many saw the internet as a globalising force sounding the death knell of the nation-state, draconian internet laws across the globe demonstrate that the nation-state is clawing back control of the internet landscape. The increase in compliance with government content restriction requests needs to be subjected to a higher level of scrutiny and transparency; Twitter, for instance, often informs users when government requests are made against specific Tweets. This matters because, according to Facebook's transparency report, anti-state content is among the reasons for content removal, a category that has the potential to include protests and the speech of dissidents.

Regulation of speech is by no means simple, and the challenge facing social media companies is formidable. Nevertheless, concerns regarding content moderation by social media companies, grounded in democratic and civic freedoms, remain critical: there is a dire need for the traditional regime of free speech law to catch up with an era of automated decision-making and excessive power in the hands of social media companies.


The writer is a project manager at Digital Rights Foundation
