Global advocacy on AI regulation

December 3, 2023

The government of Pakistan should engage in an open process of developing policies on emerging technologies

Last week, Foreign Secretary Syrus Sajjad Qazi called for a global framework for responsible artificial intelligence governance through inclusive policymaking and equitable access to AI, particularly in developing countries. The statement echoed increasing calls for AI governance across the globe, but provided no details about what such a regulatory framework should look like.

AI regulation has been a subject of fervent debate of late, with many heads of state, researchers, members of civil society and tech executives calling for oversight of these emerging technologies. These calls were at the forefront of the fault lines that became apparent during the ouster and subsequent return of OpenAI’s CEO, Sam Altman, over the past two weeks.

It has emerged that tensions regarding Altman’s leadership of the company, which is behind large language models such as ChatGPT and generative image tools such as DALL-E 3, stemmed from disagreements over slowing down the development of AI given the possible dangers it could pose.

Earlier this year, the Future of Life Institute, a non-profit organisation focused on risks posed by AI, published an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4, so that the necessary understanding and guardrails could be developed. The letter was signed by researchers, academics, technologists and business figures such as Elon Musk.

Among the many global calls for greater AI regulation, the European Union’s AI Act stands as the foremost legislative proposal laying down a possible framework. The Act seeks to create a tiered, risk-based classification of AI technologies, comprising unacceptable risk, high risk, limited risk and low/minimal risk, with different mechanisms to deal with each tier of technology and harm. For instance, the high-risk category includes the deployment of AI in policing contexts (such as facial recognition technology or predictive policing), in screening job applications and in the distribution of public welfare benefits.

High-risk uses of AI are not prohibited outright but will be subject to risk-assessment requirements at every stage of design and use. These systems will also have to be registered in an EU-wide public database. While questions remain regarding the draft Act’s applicability to foundational AI models, particularly large language models and generative AI, it does lay out a blueprint, or at the very least a starting point, for envisioning AI regulation.

It is against this backdrop of deep contestation around AI that Pakistan’s call for a binding framework regulating AI is situated. While the foreign secretary’s statement speaks to an urgent need, the government’s own track record does little to back up these declarations.

The Draft National Artificial Intelligence Policy released in June 2023 leaves a lot to be desired. It lacks sufficient human rights safeguards and relies on vague voluntary systems to ensure the promise of ethical AI. Pakistan’s stance on international fora would be stronger if it engaged in an open and consultative process of developing policies relating to emerging technologies, such as AI.

Also, if Pakistan wishes to bring a Global South perspective to the equitable and ethical use of AI, it would do well to emphasise the centrality of labour issues. While talking about labour in the context of automated systems might appear to be a futile discussion, it is an urgent one for developing countries.

Despite what we are asked to believe about AI, the large datasets fuelling it are built on cheap, outsourced labour that trains and codes data to make it intelligible for machine learning. Mary Gray and Siddharth Suri call this “ghost work”; their seminal research highlights a “global underclass” performing the content moderation, transcription and captioning work that underpins AI systems.

Further, this labour is often exploited, with tech companies taking advantage of lax labour laws and regulations in the Global South. For instance, earlier this year it was revealed that OpenAI had used outsourced Kenyan workers, paid less than $2 per hour, to label toxic content for the models underpinning ChatGPT.

In a struggling economy like Pakistan’s, with high youth unemployment and a rapidly depreciating currency, labour is ripe for exploitation through online gig work and outsourcing.

Any global legislative proposal must also take into account historical and existing power dynamics between nation-states. Global debates are often centred on the concerns of Global North countries, foregrounding experiences far removed from our part of the world. Technologies and AI models are developed by and for Western audiences, often ignoring the distinct needs and applications of AI elsewhere.

Additionally, nation-states rarely centre the experiences of marginalised groups in global policymaking; national interests are prioritised instead. Any global framework regulating AI must focus on those at the margins, particularly given the preponderance of evidence that AI systems perpetuate existing biases and inequalities based on gender, race, class and disability. If Pakistan wants to take on the role of an advocate for ethical and equitable AI on the global stage, it must make non-discrimination and compliance with international human rights frameworks the focus of its advocacy.


The writer is a researcher and campaigner for human rights and digital rights issues