The AI debate

This indicates that those who have access to and knowledge of ChatGPT are benefitting from it

By Maheen Shafeeq
February 23, 2025
An illustration showing a robotic figure against a backdrop with "AI" written across it, May 4, 2023. — Reuters

Artificial intelligence (AI) is increasingly considered the most transformative human invention. According to Kai-Fu Lee, a Taiwanese AI expert and author of ‘AI Superpowers’, AI rivals the impact of electricity and may even surpass it.

AI’s impact is also evident in the comparison often drawn between AI and nuclear energy, given how similarly the two technologies affect human life. Whether that impact is destructive or constructive can be argued either way. What it underscores, however, is AI’s profound influence on modern society, transforming it into an AI society.

While we depend on AI for daily tasks such as navigation, weather prediction, and even filtering spam emails, what has truly revolutionised the field is the recent introduction of large-scale large language models (LLMs). These are most frequently used in the form of chatbots such as OpenAI’s ChatGPT and Google’s Gemini, digital platforms that have quickly become household names.

The popularity of ChatGPT, for example, is evident from the fact that, in a world of 8.2 billion people, it receives about 4.7 billion visits per month. This indicates that those who have access to and knowledge of ChatGPT are benefitting from it. Another important indicator of this revolution is how rapidly it has gained popularity. Within five days of its launch in November 2022, ChatGPT gained over one million users. In comparison, Facebook took ten months and Netflix 3.5 years to reach one million users.

While it is a learning curve for Gen X, ChatGPT has been readily adopted by millennials and Gen Z, also known as ‘digital natives’. OpenAI estimates that about 80-95 per cent of its users are aged between 20 and 22, suggesting that university students are among its most frequent users. By contrast, 55 per cent of people aged 45 to 64 have neither heard of it nor used it.

This not only indicates a generational gap but also a ‘technology dependence syndrome’ in an AI society. Last November, a 30-minute outage of ChatGPT affected over 19,000 users. More and more users are becoming dependent on chatbots. Again, whether this is desirable can be argued, but it illustrates how AI has become an integral part of our daily lives and how its absence can cause disruption.

While AI offers efficient solutions, it also poses challenges through the misuse of advanced tools like deepfakes. Financial frauds involving deepfakes that impersonate CEOs have become frequent. Deepfakes are also actively used during election seasons, as they can create the illusion that a political candidate is personally calling a potential voter. Again, the ethics of such uses can be debated, but this indicates that, like other fields, AI needs guardrails.

Countries, however, do not wish to limit AI’s potential by imposing strict regulations. This is mostly due to geopolitical rivalries between the major AI players. Recently, the breakthrough of China’s DeepSeek sent a shockwave through Silicon Valley, indicating how diverse the competition in AI has become. This makes striking the right balance of regulation very complex.

Moreover, even if the major AI players develop international regulations, the have-nots – most likely the Global South – could be at the losing end. This is already evident with social media platforms, whose data centres are located in the Global North and are regulated by national laws there. As a result, the Global South has the least control over the algorithms, especially on sensitive subjects. Likewise, chatbot developers would present their own version of history, undermining the Global South’s perspectives.

Another fear of being at the losing end is at the human level. With such significant developments in AI, many fear that, just as lamplighters and telegraph operators were once replaced, AI will replace their jobs. This fear lingers among workers from pilots to software engineers, who are concerned that AI tools such as drones and no-code software will replace them. For now, AI is more likely to introduce new roles, even as some become less human-dependent. Some AI experts estimate that large-scale replacement by AI is a worry for the late twenty-first century, around the 2080s.

This is because AI largely responds based on its training data and still lacks human qualities such as emotional intelligence and empathy. To address this, many AI researchers are working on AI-powered assistants that respond to users’ concerns with a degree of emotional awareness and familiarity. This indicates that AI is learning to mimic human emotional intelligence.

The best approach to living in an AI society is to adopt AI as a co-pilot. It should be about human-and-machine teaming rather than the machine steering the human. This means chatbots like ChatGPT should be used as study partners rather than instructors. Such an approach enhances productivity by multiplying human decision-making and thinking power with AI’s computational power.

As we move forward in an AI society, every AI-related development must be a step towards responsible, collaborative and inclusive practices that augment human abilities.


The writer is a research analyst in emerging technologies and international security. She tweets/posts @MaheenShafeeq