The AI frontier at WGS

I hope and pray that the AI revolution does not become the latest one in which we are relegated to end users of other people’s labour

By Dr Ayesha Razzaque
March 01, 2024
A visitor watches an AI sign on an animated screen at the Mobile World Congress, the telecom industry’s biggest annual gathering, in Barcelona. — AFP/File

While Pakistan was still counting votes and figuring out who had won its elections, Dubai was hosting the three-day World Governments Summit 2024 from February 12 to 14.


Speakers included heads of governments and international organizations, and leaders from business, media, and academia. The sessions covered a variety of areas, including government acceleration and transformation; development and future economies; future societies and education; and sustainability, urbanization and global health. Also among them was artificial intelligence (AI) and the next frontiers.

On the latter theme, invited speakers included some of the most influential people in AI today. Key among them on Day 1 of the summit was Jensen Huang, the founder and CEO of NVIDIA, which has a near monopoly on graphics processing units (GPUs) – the chips that are orders of magnitude more efficient, in terms of speed and power consumption, at accelerating the kinds of computations needed for gaming PCs, cryptocurrency mining rigs and AI applications.

In the current AI gold rush, if big-, medium- and small-tech companies are prospecting for gold, NVIDIA is the one selling all of them shovels. The session was hosted by the UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, HE Omar Sultan Al Olama.

The overarching theme of Huang’s session was the democratization of AI. While AI is receiving increasing attention in Pakistan’s policy circles and is seeing matching excitement on the domestic tech startup scene, this aspect of AI still receives far too little coverage. Huang’s perspective is that most of the foundation models from big- and not-so-big tech companies making headlines for the last year are trained (developed) on ‘Western’ data and mostly English-language text. A country and society’s data codifies its language, culture, history, and societal intelligence, and that will extend to AI models trained on it.

The Industrial Revolution was a transformation of raw materials into finished products, enabled early on by the conversion of energy trapped in fossil fuels to steam and later to electrons moving through copper wires (‘electricity’). The computer/internet revolution was powered by the invention of the microprocessor (CPU), which accelerated the transmission and processing of information. Now the AI revolution, powered by the GPU, which has accelerated computations by a factor of a million, is transforming data into intelligence that can be nearly instantly transmitted across the globe.

No country will produce foundation models for all of the world’s many countries, societies, and languages. Many countries now have data protection laws that make it difficult, if not impossible, for national data to cross national borders, which makes it hard for one country to develop comprehensive AI models for communities other than its own. That is why countries and communities need to become active participants in developing their own AI models.

So, what will that take? Huang’s session at the WGS was also enlightening because he broke it down in quantitative terms. The most elementary prerequisite for building artificial intelligence is converting non-electronic literature, knowledge, and data into electronic form, and collecting data that is already electronic.

This will require building out the internet and traditional data centers. Huang estimated the current installed base of data centers at around $1 trillion and said that this will grow to about $2 trillion in the next few years. When I heard that figure, my immediate thought was, ‘What slice of that spending can a country like ours afford?’ These new data centers will have to be built specifically to support AI workloads – which means they will have a lot more GPUs than previous ones.

NVIDIA’s current-generation top-of-the-line GPU for data centers is the H100, and a single one is priced at around $30,000. Its next-generation GPU, the H200, is expected to be priced between $25,000 and $40,000 – the range of a family sedan in Pakistan.

Meta (Facebook) is using batches of 16,000 GPUs or more to train a single iteration of a generative AI (GenAI) model, which takes about one month. The ballpark figure for just one such data center is around $1 billion today. That is more than many of the IMF tranches Pakistan has received in recent years, which have kept it borderline solvent.

Data centers for AI workloads are also much more power-hungry, requiring two to three times the power of existing data centers. How many such data centers will we be able to afford? They say freedom is not free, and neither is the democratization of AI. Not being left out of the next technical revolution will require investments that are massive by the standards of our GDP.

For decades, the mantra in education has been that all children need to become at least computer/internet literate and know the basics of programming computers. Huang explained that the current generation of GenAI models will do away with the need to know programming to get serious work done on a computer, replace it with a natural language interface, and reset the technological edge that the workforces of some countries had over others. A large part of the world’s population is less than fluent in English. This makes it vital to develop natural language interfaces for GenAI models in languages other than English. In Pakistan, that means Urdu, Punjabi, Pashto, Sindhi, Saraiki, Balochi and others.

Day 2 of the summit prominently featured Dr Yann LeCun, Meta’s (formerly Facebook’s) vice-president and chief AI scientist and winner, along with Geoffrey Hinton and Yoshua Bengio, of the ACM Turing Award (the most prestigious prize in computer science); the three are known together as ‘the three godfathers of AI’. In the recently resurrected debate about AI taking over the world, LeCun is on the side that holds that such fears are overblown, and he takes an optimistic view of the future development of AI systems.

LeCun is also known for his strong stance in favour of keeping AI systems that will be part of the public infrastructure of the future open source. ‘Open source’ means the code an application is written in is published and available for inspection, modification, and improvement. Open-source projects benefit from more eyeballs and more scrutiny, making them safer. AI systems are likely to become a widely used repository of human knowledge. LeCun argued that these systems must be diverse and free, just like the press, so that opinions do not come from a single source. For this reason, LeCun did include a warning in his session: while it will be necessary to develop guardrails and safety systems for AI systems, do not legislate open-source AI systems out of existence.

Speakers on Day 2 also included Sam Altman, CEO of OpenAI, the company that gave the world the DALL-E text-to-image generator, ChatGPT, and, just this week, a first taste of Sora, a text-to-video GenAI model.

He shared LeCun’s optimistic forecast of a future with GenAI tools. Although it is always hard to foretell, Altman sees the most powerful use cases coming from education, where large language models (like ChatGPT) can provide tailored instruction; healthcare; and government services, for tasks as mundane as filling out forms. More broadly, Altman was of the view that, for the betterment of individual lives, to improve productivity and creativity, and to discover new ‘killer’ applications, these and future tools need to be put in the hands of everyone. A significant chunk of his session was dedicated to the issue of regulation, on which Altman’s view was that we ought to give this new technology space for experimentation in regulatory sandboxes before imposing regulations.

The ideas discussed at the summit were ahead of what I usually catch in local policy circles and left me walking away with many questions. Wikipedia’s ‘List of Pakistanis by net worth’ tops out in the low single-digit billion USD range. Who in this country can afford to invest in a single data center equipped with the resources to train a single GenAI model in our local language(s) (and host it)? I do not see the government as capable of it, and I cannot see why a big-tech company would make an investment of that magnitude, because I cannot see them making their money back.

LeCun said that the current generation of GenAI models is being trained on “all the public data” available on the internet and yet the models so produced are still not perfect. Do we even have enough data to train large language models in our local languages? The amount of published content, in electronic and non-electronic forms, in our local languages must pale in comparison to what is available in English. Without the infrastructure and requisite data, our technology sector will likely be confined to fine-tuning existing foundation models (in English), which will have limited impact in a country where most of the population lacks the command of English needed to express themselves well enough to make a GenAI model do their bidding.

A few weeks ago, just before the elections, the PTI produced an AI-generated audio address by Imran Khan delivering key messages to voters, probably using key talking points and voice samples from past media appearances as input, and it seemed everyone temporarily lost their minds.

On the regulatory side, are the powers that be ready to put GenAI tools in the hands of everyone? It has been more than a decade since the Pakistan Telecommunication Authority embarked on an expensive and ongoing quest to develop the perfect internet traffic monitoring and censorship tools. Is a country that is in the habit of shutting down cell phone service, banning virtual private networks, and blocking or throttling Twitter for days on end ready for what widespread access to GenAI platforms might bring?

I hope and pray that the AI revolution does not become the latest one in which we are relegated to end users of other people’s labour.

The writer (she/her) has a PhD in Education.