Can (imperfect) algorithms be left in charge?

March 29, 2020

Corona-related Facebook posts temporarily flagged as ‘spam’, text messages alerting people that they have been around the corona-positive: our lives are moving at an accelerated rate onto AI-regulated systems

The COVID-19 pandemic is fundamentally changing the world as we know it; questions of good governance, service delivery, communications and power will become issues of digital rights as we migrate online to perform everyday tasks during this pandemic. While the full reverberations of this seismic shift will become apparent in the coming days, the use of machine learning and automation has perhaps moved to the forefront of our lives: informing resource allocation, disease tracking and the content on our social media feeds.

As public spaces, workplaces and educational institutions shut down, the reliance on social media platforms as sources of information is greater than ever. Social media companies are thus faced with a unique situation: there is more traffic on their platforms, yet mandated safety measures have left fewer people in offices to moderate that traffic. This regulation nightmare came to a head last week when Facebook sent thousands of its contractual human moderators on leave; given privacy protocols and the mental health toll content moderation takes, a work-from-home model was not feasible. With this transition, the algorithms were left in charge. Facebook had touted its algorithms as effective online speech moderation tools for years; however, their first ride without training wheels laid bare the irreplaceable nature of human intervention and the dangers of full automation in decision-making contexts.

Chances are that if you posted any news or information related to the coronavirus on Facebook last week, your post was temporarily marked as “spam”. This included links from legitimate news websites and agencies, the accounts of news publications, and individual users wishing to share information about the virus. The problem was attributed to a software glitch that was not coronavirus-specific, and the posts marked as spam appear to have been restored; however, the episode points to the larger issue of using artificial intelligence as a substitute for human judgment, and highlights how seemingly innocuous glitches can impede important information flows during a pandemic.

This move towards greater automation is not Facebook-specific. All major social media platforms will be relying heavily on artificial intelligence, algorithms and machine learning to moderate content in the coming weeks. As concerns are raised about misinformation proliferating on social media platforms, we must be mindful that these nebulous determinations of truth and public importance will increasingly be made by algorithms. This not only raises abstract ethical questions but also has real implications, as fake news can have fatal consequences during a pandemic.


While these companies are doing the right thing in sending their content moderators on leave, artificial intelligence is notoriously clumsy when used to police and moderate content, particularly in outlier cases. Artificial intelligence is imagined as ‘more intelligent’ and ‘neutral’ compared to humans, but extensive research has shown that algorithms often end up replicating the social biases and structures of the society that produces them. In a foundational study, ProPublica found that software used in the United States criminal justice system to assess the risk of individuals reoffending was biased against black defendants. The machine learning that informs facial recognition technology (FRT) disproportionately misidentifies people of colour as potential public risks, reproducing “stop and frisk” policies based on racial profiling.

The supposed neutrality of artificial intelligence tools matters because governments around the world are using technology to contain the spread of the coronavirus. Dataveillance, algorithms and machine learning are being used in countries such as China, South Korea and even Pakistan to track potential patients and construct predictive models. In China, thermal scanners have been used to screen people in public places; FRT and “real-name systems” track down individuals at potential risk; and GPS is employed to ensure that quarantined individuals abide by social isolation orders. COVID-19 has accelerated the use of algorithms and brought them close to home: the Digital Pakistan initiative is using mobile location and call history data to develop predictive models of potential coronavirus infections. Working in conjunction with the Ministry of National Health Services and the Pakistan Telecommunication Authority (PTA), the initiative has sent SMS alerts to those who have come into contact with confirmed COVID-19 patients, advising them to take necessary precautions. There is, however, little clarity on the algorithm used to make these determinations; transparency on how “close proximity” is determined, for instance, would help citizens make informed choices about the risk they carry. Algorithms are notoriously only as good as the data fed into them: if the data is incomplete, or if the algorithm looks at only one aspect of the dataset, the determinations it makes can be inaccurate, and in a public policy context that can have serious implications. This is not just conjecture: an error in modelling by advisers to the UK government led it to adopt its now infamous “herd immunity” approach, delaying more drastic measures to flatten the curve.
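To make the transparency concern concrete, consider a minimal, hypothetical sketch of how “close proximity” could be inferred from telecom records. The actual criteria, data sources and thresholds used by the Digital Pakistan initiative have not been made public, so every name and number below is an illustrative assumption rather than a description of the real system.

```python
# Hypothetical sketch: flag any subscriber whose phone connected to the same
# cell tower as a confirmed patient within a fixed time window. The real
# criteria used by the Digital Pakistan initiative are not public; the data
# layout and threshold here are illustrative assumptions only.
from datetime import datetime, timedelta

CONTACT_WINDOW = timedelta(minutes=30)  # assumed threshold, not an official figure

def flag_contacts(patient_records, subscriber_records):
    """Return IDs of subscribers seen on the same tower as a patient
    within CONTACT_WINDOW. Each record is (subscriber_id, tower_id, timestamp)."""
    flagged = set()
    for _, p_tower, p_time in patient_records:
        for s_id, s_tower, s_time in subscriber_records:
            if s_tower == p_tower and abs(s_time - p_time) <= CONTACT_WINDOW:
                flagged.add(s_id)
    return flagged

# Example: one patient ping and two subscribers on the same tower,
# only one of whom falls inside the window.
patients = [("patient-1", "tower-42", datetime(2020, 3, 20, 14, 0))]
subscribers = [
    ("user-a", "tower-42", datetime(2020, 3, 20, 14, 10)),  # flagged
    ("user-b", "tower-42", datetime(2020, 3, 20, 18, 0)),   # not flagged
]
print(flag_contacts(patients, subscribers))  # {'user-a'}
```

Even in this toy version, the outcome hinges entirely on the chosen time window and on how densely populated a cell tower’s coverage area is: a generous window in a crowded urban cell would flag thousands of people who never came near a patient, while sparse data would miss genuine contacts. This is precisely why disclosing such parameters matters.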

There is no denying that we are faced with an unprecedented problem on a global scale, and the use of machine learning and artificial intelligence to develop vaccines and medicines could save countless lives. Algorithms are indispensable for processing large amounts of data and have their place in efforts to fight the public emergency that the coronavirus presents. Nevertheless, given past experience, it is not alarmist to speculate that technology used to track and identify coronavirus patients might reproduce existing prejudices and could have serious social justice implications for citizens on the receiving end. Radical transparency, especially at a time when governments are exercising executive and emergency powers, is key to ensuring that the balance between public health and civil rights is maintained.


The writer is a programme manager at Digital Rights Foundation
