
Regulatory challenges

By Nanjala Nyabola
February 16, 2021

The first month of 2021 was marked by two pivotal political moments across two continents, united by a curious feature of modern communication. In the United States, right-wing extremist groups successfully radicalised, organised and militarised on social media, managing to invade the seat of Congress while refusing to accept the result of a recent election and threatening the lives of the country’s senior-most lawmakers.

And in Uganda, a highly contentious election pitted an increasingly cruel septuagenarian president, who refuses to cede power, against a charismatic musician turned politician, young enough to be the former’s grandson. Although the Ugandan incumbent promised a “scientific” election in which tech would be a key factor, instead he flung the entire country back into the pre-digital era for a week using his preferred weapon of late – a national internet shutdown.

The two elections were united by the changing role of technology in our public spheres more broadly, and specifically by the role of social media in hosting and moderating political conversations. In the US, social media has been an enabler of rampant misinformation and hate speech that incubated groups like QAnon, whose adherents infiltrated the US Capitol.

In Uganda, social media was the main platform on which the opposition was able to document the violence of the ruling party, to advocate for social change, and to organise against the excesses of power.

In the US, the social networks de-platformed the now-former president and disabled his accounts. In Uganda, they did the same and the president retaliated by banning social media for removing accounts allied to his party before turning off the internet wholesale.

Within the first two weeks of 2021, the two extremes of what social media represents in the public sphere were on full display, underscoring how the same approach on the same platforms in different social contexts can have wildly disparate outcomes and implications.

Predictably, the aftermath of both elections has seen calls to enhance regulation of the social networking platforms. Unfortunately, now that the harms caused by content moderation failures are finally affecting US national politics, the calls for regulation are also being shaped by US perspectives and interests.

It would be a tremendous mistake for rule-making around social networking sites to take only the US experience into account, particularly because activists and analysts from other parts of the world have not only been flagging these issues for just as long, if not longer, but would also have to live with the consequences of any new regulations without the social or economic capital to make them sensitive to local contexts.

Basically, we are facing a situation where a choice to ban politicians in the US leads to a choice to ban a dictator who then cracks down on the public sphere in Uganda, while a choice to do nothing results in genocide in Myanmar.

Based on their origin myths and their own descriptions of what they do, the founders of these sites certainly did not see them evolving into the big, global political players that they are today.

In my 2018 book, Digital Democracy, Analogue Politics, I analyse how these sites evolved into key pillars of the public sphere in Kenya, where they insert themselves into spaces left open by the retreat of traditional media and by constraints on organising and mobilising in the analogue public sphere. Kenya is the example, but the principle holds across all societies: the internet is an intensifier of whatever energies exist in the analogue public sphere, and you cannot understand the role these platforms will play in a society if you do not understand the society in question first.

Until recently, only a few countries – Germany among them – actively pushed for regulations that went beyond corporate accountability to place guardrails on the kind of speech that would be allowed on these platforms.

The rest perceived the US approach to free speech as the best one and assumed that whatever restrictions were needed would come from users volunteering to flag harmful content.

But the US approach to free speech is a political value grounded in a specific socio-political history: it only makes sense in the US in the context of US history and other provisions in US law more broadly that outline the limits to absolute free speech in order to prevent large-scale harm. Even in the US, there is no such thing as absolute free speech, certainly not without consequences.

When these corporations went global, they took the US approach to free speech but not the guardrails that came with it. What has been evident is their willingness to cosy up to powerful politicians in fraught political contexts in order to secure market access. In India, Facebook’s chief lobbying officer declined to act against Hindu nationalist hate speech for fear of damaging the platform’s business prospects in the country.

This came in the context of rising tensions between the country’s two main religions, which analysts argue are being incubated and disseminated through social media. Social networking sites need to remember that free speech is not a standalone value. It is a value that has to be protected in social context, and understanding and responding to a social context requires more substantial investment than merely relying on community-driven content moderation.

It is not unreasonable to demand a localised, historically sensitive approach to moderating the content these platforms allow. The German approach to regulating Nazi speech on social media has been the most visible example of making these primarily US corporations bend to local realities.

While the prohibition on creating and sharing Nazi-sympathetic material online has not stopped the rise of the far right in Germany, it has certainly slowed it down in a way that would make the US jealous. The point is that German regulators had a clearly articulated red line for the kind of content the social networking sites could permit and, perhaps because of the size of the market, the sites listened.

Still, Germany is an example of a country where communities can trust regulators to act in the public interest. The story is different in countries like Uganda, where the regulator primarily acts in the interest of those in power. In countries where regulation has historically been lax and focused mainly on curbing freedom of expression and criticism of the state, merely bowing to pressure from the government without asking why can consolidate authoritarianism.

Excerpted: ‘How should social media be regulated?’

Aljazeera.com