Most regulatory proposals focus on content regulation rather than engaging with the intricacies of governing artificial intelligence
Earlier this year, Minister of Information Technology and Telecommunication Shaza Fatima Khawaja spoke about the need for “ethical AI governance” at an international conference in Riyadh. Her comments came at a time when her government had just passed amendments to the Prevention of Electronic Crimes Act, 2016, that were widely seen as violating human rights, as well as a sweeping Digital Nation Pakistan Act, with the explicit aim of advancing digital public infrastructure but without any safeguards for data privacy or against exclusion. The minister did not elaborate on what an ethical AI governance framework would look like, nor did she allude to the AI bill under consideration at the Senate’s Standing Committee on Information Technology. Buzzwords around AI governance, and digital governance writ large, obscure the motivation behind these calls for regulation and governance – the real goal is often greater control, particularly of the media.
Much has been made of the threat artificial intelligence poses to the media and our information ecosystem. The proliferation of generative AI has accelerated these concerns, bringing them to the forefront of public consciousness, including that of policymakers. Many have already made apocalyptic predictions about the unchecked spread of disinformation and the amplification of harmful content, leading to panicked calls for more curbs on freedom of expression and access to information on the internet. Nevertheless, most proposals to regulate AI aim only to superficially police its outputs rather than advance substantive regulation centred on transparency and accountability.
There is no denying the harm AI poses to the media and freedom of expression. In fact, many journalists have been raising the alarm for a while. AI’s ability to generate inauthentic content at scale, and its role in the amplification of content, is a cause for concern. This is particularly true in the context of Pakistan, where digital and media literacy is low, leaving users susceptible to manipulated information. However, most regulatory proposals focus on content regulation rather than engaging with the intricacies of governing artificial intelligence.
Model regulation of AI places human rights, transparency and accountability at the centre of its discourse. Emerging human rights standards require that AI be developed, deployed and used in a transparent manner, with clear pathways of accountability for harms that result from its use. AI’s potential harms are also more expansive than the current discourse around misinformation suggests. They include concerns around technologies such as facial recognition and predictive policing, and the exacerbation of discriminatory systems, particularly when these are deployed to provide essential and welfare services. Further, much of the conversation about AI regulation fails to take into account the invisible labour that underpins the technology – much of it drawn from the global majority. Investigations have found that the development of most AI tools relies on manual data labelling performed by outsourced workers in countries such as Pakistan, who are often paid low wages and exposed to harmful content, and whose ranks have been found to include children. Most conversations around “ethical AI” ignore these aspects entirely, focusing merely on the technology’s most visible manifestations.
Thus, we have laws and proposed legislation that seek to regulate AI-generated content on vague grounds such as “fake news” or “national security” – mere pretexts to suppress dissent. The intention is rarely to mitigate harm and more often to police speech that inconveniences the powerful. While there is no denying that governments have valid concerns about generative AI, particularly where false accusations are frequent, sweeping regulations passed in the name of AI are little more than convenient excuses to curb freedom of expression. At best, they can be seen as attempts to regulate a technology that lawmakers do not understand and have done little to understand better.
However, there is hope. The last time the AI bill was discussed by the Senate Standing Committee, in November 2024, the committee members and the Ministry “recommended a cautious approach, urging that it may be premature to establish a dedicated AI regulator at this stage, as the ecosystem is still developing.” The call for caution, and the recognition of the complexities involved, are good signs. It is hoped that this wisdom continues to prevail. A regulatory framework that centres equity, transparency and accountability, not censorship, is essential. AI regulation must protect people, not silence them.
The writer is a researcher and campaigner on human and digital rights issues