While the major social media companies are being applauded for banning Trump, the timing of their decisions is curious and raises questions about their capacity to implement meaningful content moderation
It was the ban heard around the world. Donald Trump, soon to be ex-president of the most powerful nation in the world, was banned or suspended across major platforms and services, including his favourite, Twitter, as well as Facebook, Snapchat, Spotify, Twitch, Shopify, and Stripe. Platforms such as YouTube, Reddit, TikTok, and Pinterest have placed restrictions on his activities until the transition of power is completed. These decisions have been applauded by many, coming as they did after Trump’s incitement to violence that precipitated the January 6 attack on the US Capitol. While these rebukes have further solidified his credentials as a fascist and white supremacist, as if there was any doubt before, the series of bans and suspensions has wider implications for online content moderation, free speech and the power of private companies.
First, the timing of these decisions is curious: they came when Trump’s departure from the office of president was a foregone conclusion, spurious challenges to the legitimacy of the election notwithstanding. While we would all like to believe that the likes of Mark Zuckerberg were truly moved by the violence at the Capitol, the outrage rings hollow given that his company was not moved by Trump’s incitement to violence in late May 2020, when he posted “when the looting starts, the shooting starts” against Black Lives Matter protesters. Twitter merely hid that tweet behind a warning for glorifying violence; Facebook did nothing. Throughout his stint in office, Trump blatantly flouted community guidelines and social media platforms did nothing. On the other hand, accounts of activists, human rights defenders and members of marginalised communities have been suspended for far less. Women are routinely banned from digital platforms for ‘aggressive’ speech countering their own harassment, and accounts of minority groups have been suspended as a result of mass reporting. Whenever this discrepancy in treatment was raised, these companies contended that speech by elected officials needed to remain available in the public interest. So what changed?
Social media platforms are being applauded for evenly applying community guidelines to all accounts, whether they belong to someone with two followers to their name or to the most powerful man in the world. The sentiment is noble but merely a smokescreen: Silicon Valley is rarely moved by moral argument. It is driven by the bottom line, and the writing was finally on the wall as the Biden administration prepared to take power. It had at last become politically and economically expedient to ban Trump.
Jillian C. York, activist and a director at the Electronic Frontier Foundation, points out that while the decision has been framed as a win for democracy, it is in fact part of a larger move to privatise decision-making about free speech. While Trump’s incendiary tweets were a threat to democracy and the safety of vulnerable groups, the concentration of power in the hands of profit-making entities is equally worrying. The fundamental question is who gets to decide the parameters of free speech: governments, independent courts, or an insular group of tech companies whose heads are among the richest men in the world?
This is not merely a philosophical question; it speaks to the capacity of these platforms to implement meaningful content moderation. Activists and marginalised groups have argued for years that these digital platforms host violent hate speech, yet progress on this front has been piecemeal. If we zoom out from the US, which tends to monopolise these discussions, Facebook’s failure to curb genocidal hate speech in Myanmar was fatal to the Rohingya population, leading some to argue that it amounted to criminal negligence. These companies are fundamentally ill-equipped to moderate speech effectively and have demonstrated a lack of commitment to investing resources in content moderation, leading to selective and arbitrary decisions.
More worryingly, there is a severe lack of transparency in the decision-making of these platforms. When speech is curbed on other mediums, such as electronic and print media, decisions can ideally be challenged and regulatory bodies provide reasoned judgments on the matter. No comparable process exists for decisions regulating speech on tech platforms. These decisions seem arbitrary because they are: the underlying rationale is power, not free speech rights or the welfare of vulnerable groups.
The fundamental issue is that tech giants have amassed immense power with little to no regulation or accountability. The thorny question of regulation, however, should not surrender to short-termism. Trump has been touting a repeal of Section 230 of the Communications Decency Act of 1996, which shields platforms from intermediary liability, that is, from being held legally responsible for content they host. Many experts consider this section the bedrock of the internet as we know it; however, Trump and many others have argued that repealing it would be a silver bullet for truncating the power of big tech. A version of this approach has been put forward by the Pakistani government as well, in the form of the Removal and Blocking of Unlawful Online Content (Procedure, Oversight and Safeguards) Rules 2020, which create content moderation obligations for social media companies. However, this approach merely transfers power to governments intent on controlling the internet. Companies, on the other hand, have pushed for a self-regulatory model that inevitably lacks transparency and accountability to end-users. While public pressure on platforms has led to coordinated action such as the Trump ban, without meaningful accountability these measures further concentrate power in the hands of tech companies and risk creating what Evelyn Douek calls “content cartels”.
These false regulatory binaries leave users between a rock and a hard place, as neither solution dismantles the governing logic of these platforms: surveillance capitalism, a term popularised by Shoshana Zuboff. Any analysis of online content moderation is incomplete without interrogating the power relations and economic interests that underpin these platforms. A good start would be to break up the monopolies of tech companies through antitrust law. For users in the Global South, only greater cooperation and concerted action can invert the power imbalance between them and big tech companies situated thousands of miles away.
Shmyla Khan is the Director of Policy and Research at Digital Rights Foundation.