An invisible digital apartheid

October 29, 2023

Social media platforms often hide behind the opacity of their algorithms

The past three weeks have been extremely heavy, with our social media feeds flooded with content from the genocide unfolding in Gaza. As the mainstream media continues to promote imperialistic narratives justifying occupation and the dehumanisation of the Palestinian people, you might have heard some users complain about being ‘shadow banned’, shorthand for having one’s content demoted and its visibility limited.

This is not the first time that pro-Palestinian content has been shadow banned. In 2021, when there was an escalation of violence in Gaza and the West Bank, Instagram removed posts and hashtags relating to the Al-Aqsa mosque as its moderation systems conflated the name of the mosque with that of a designated terrorist organisation. Meta later apologised for the “enforcement error.” However, it never acknowledged the shadow banning of content.

Later, an independent human rights due diligence audit of the May 2021 events, carried out by Business for Social Responsibility, found an over-enforcement of content removal against pro-Palestinian Arabic content as compared with Hebrew Israeli content, resulting in an adverse impact on Palestinian digital freedom of expression.

Cut to the present, and there is little difference in how social media platforms have handled the crisis. X, formerly known as Twitter, has been rife with disinformation that has contributed to a surge in hate speech. The Palestinian digital rights organisation 7amleh has reported a significant rise in hate speech, disinformation and incitement to violence against Palestinians, with little action from platforms such as X.

Accounts ranging from a few followers to hundreds of thousands have complained of a significant reduction in the reach of their content, including posts, videos, reels and stories, since they started sharing material about the occupation and genocide in Palestine.

Many users have reported that their accounts were suspended or penalised for sharing content depicting the horrifying attack on Al-Ahli Arab Hospital, which resulted in the death of more than 500 Palestinians. Instagram has also had to apologise after its auto-translation feature inserted the word “terrorist” into some Palestinian users’ profile bios.

Shadow banning in particular is an insidious form of censorship because, unlike content removals or account suspensions, it is difficult to document or prove. Social media platforms often hide behind the opacity of their algorithms to claim that what you, the user, are sharing is simply not getting traction. Furthermore, it is not seen as an outright ban, since the content is never removed; only its reach is limited.

However, shadow banning is as effective as any other form of censorship, particularly in the context of a crisis. If no one outside of Gaza can bear witness to its oppression and genocide, that has material consequences for how the world will react.

Algorithmic explanations for shadow banning should not absolve social media companies of accountability for what amounts to viewpoint discrimination, i.e., censorship based on the content of posts and the viewpoints expressed in them. Policies and automated systems are unlikely to be deliberately built to downgrade pro-Palestinian content; however, bias is baked into how these policies and systems are constructed and implemented. Many have rightly pointed out the disparity between social media companies’ response to the crisis in Ukraine and their response to the genocide unfolding in Gaza.

In the face of systemic censorship, users are getting creative in this war of digital narratives. Tried and tested methods of evading shadow bans are being traded on the very platforms silencing them. Understanding content moderation policies and practices is an important step towards getting smarter about sharing and amplifying voices on these imperfect platforms.

You might have seen posts such as selfies, vacation photos and random shots of people’s pets sandwiched between posts about Palestine to “confuse the algorithm.” Post-2016, there has been a concerted shift in the types of content platforms promote on their feeds, with personal content preferred over news. This technique plays on that algorithmic bias, ensuring a steady stream of personal content to sustain your reach while you post about news coming from Gaza.

Another way of avoiding shadow bans is to customise posts while sharing them, adding your own caption to avoid engaging in “bot-like behaviour.” Bot-like behaviour, which includes the indiscriminate and rapid reposting of others’ content, is often downgraded by social media platforms. As dystopian as it might sound, the trick is often to prove to the algorithm that you are “human.” Lastly, it is important to document these bans and raise the alarm about the different ways in which Palestinian and pro-Palestinian voices are being silenced.

At a time when connectivity in Gaza is limited, with electricity and internet services being cut, any content and voices coming out of the region need to be amplified. However, we see the apartheid on the ground being replicated online. While censorship circumvention and smarter sharing techniques are important, ultimately tech companies must be held accountable for their complicity in this genocide and erasure of Palestinian voices.

As Pakistanis experience the discrimination built into technological design through their digital activism on Palestine, the hope is that they start to think critically about the technologies and platforms that they use and have come to rely on.


The writer is a researcher and campaigner on human and digital rights issues.