AI: utopia or dystopia?

While these films are fictional, parallels between their narratives and real-world discussions about AI are hard to ignore

By Rubia Shoukat
May 11, 2024
This representational picture shows a metallic figure against a computer. — AFP/File

The eerie scenes from the ‘Terminator’ movie franchise, where ruthless machines rule a post-apocalyptic world, have long captured our imagination. In the realm of science fiction, the franchise has intrigued audiences with its portrayal of a future dominated by AI-driven chaos.


While these films are fictional, parallels between their narratives and real-world discussions about AI are hard to ignore. As AI continues to integrate into our daily lives, from smart devices to industrial applications, a question arises: could this new technology bring about unintended consequences reminiscent of the franchise’s dystopian future?

AI, a testament to human creativity, embodies a duality: it offers immense capacity for development while raising concerns for global stability. Today’s cutting-edge AI systems are powerful in many ways but profoundly fragile in others; they often lack any semblance of common sense and can be easily fooled or corrupted.

“These things could ultimately get more intelligent than us and could decide to take over, and we need to worry now about how we could prevent it from happening,” says Geoffrey Hinton, the ‘Godfather of AI’. Last year, Tesla and SpaceX chief Elon Musk, along with over 1,000 other tech leaders, signed an open letter urging a pause on large AI experiments, warning that the technology can “pose profound risks to society and humanity”.

Sean McGregor, founder of the AI Incident Database, warns that “we expect AI incidents to far more than double in the near future and are preparing for a world where AI incidents are likely to follow some version of Moore’s law.”

The AI Incident Database lists socially unacceptable but non-fatal incidents, such as Google’s Photos software labelling Black people as ‘gorillas’ in 2015, or the recruiting tool Amazon had to shut down in 2018 for being sexist after it marked down women candidates.

In another incident, in 2017, Facebook’s automatic translation software incorrectly rendered an Arabic post saying ‘good morning’ into Hebrew as “hurt them”, leading to the arrest of a Palestinian man in the ‘only democracy’ in the Middle East.

Some incidents, however, have proved fatal, such as when a robot at a Volkswagen plant killed a worker in 2015 by pinning him against a metal plate, or when a Tesla Model S on Autopilot in Los Angeles reportedly ran a red light and crashed into another car, killing two people in 2019.

In 2023, a man in Belgium took his own life after a chatbot allegedly encouraged him to do so. The same year, a Tesla engineer was attacked by a malfunctioning robot at the company’s Giga Texas factory near Austin, and a man in South Korea was crushed to death by an industrial robot.

Yet AI also offers tangible benefits across various sectors. Its ability to process vast volumes of data with advanced algorithms has revolutionized fields like medicine, finance, education, agriculture and transportation, and its capacity for information gathering and analysis already dwarfs human capabilities.

According to a report published in ‘The Guardian’, a longstanding concern is that digital automation will displace a huge number of human jobs. It suggests AI could replace the equivalent of 85 million jobs worldwide by 2025, though it will also create new opportunities.

According to KPMG, spending on AI, machine learning and robotic process automation (RPA) technologies is set to reach $232 billion by 2025. Statista estimates that AI technologies currently account for almost $100 billion of global investment, a figure expected to grow.

In the military realm, the impact of AI stretches far beyond individual applications, influencing global geopolitics and strategic stability. It will compel nations to redefine their security postures, potentially leading to new forms of warfare. As AI capabilities evolve, however, challenges emerge, particularly around command and control.

AI-driven decision-making, autonomous weapons and the absence of human judgement in critical scenarios raise ethical concerns and could lead to unintended escalations and consequences.

In the context of South Asia, a region marked by complex rivalries and security concerns, the integration of AI into military capabilities introduces shifts in the geopolitical balance.

India’s Land Warfare Doctrine 2018 places significant importance on AI and its integration into the military. The Multi-Agent Robotics Framework (MARF) and the 2000 DAKSHA autonomous robots, which are remotely operated vehicles (ROVs), have been inducted into the Indian Army. India is collaborating with the US, the UK, the EU, Canada, Japan and many other states to acquire more sophisticated technology for image interpretation, target recognition and kill-zone assessment of missiles.

The Pakistan Air Force (PAF) has also launched a cognitive electronic warfare (CEW) programme at its Centre for Artificial Intelligence and Computing (CENTAIC). Modern, connected weapon systems generate vast amounts of data requiring artificial intelligence and machine learning software for speedy analysis and rapid decision-making.

The allocation of resources underscores AI’s pivotal role in shaping global dynamics. India’s funding for AI startups in 2022 stood at $3.24 billion. In Pakistan, despite economic challenges, the government allocated Rs723 million for the promotion of AI in 2022, and more than 70 AI startups are currently operating in the country.

According to the Organisation for Economic Co-operation and Development (OECD), there are more than 800 active AI policy initiatives across 69 countries. Pakistan has also developed a policy for the responsible adoption of AI in the country.

The rise of AI necessitates collaborative efforts to establish international norms and regulations. The Amended Protocol II of the Convention on Certain Conventional Weapons (CCW) provides a framework for establishing universal rules on AI, robotics and semi-autonomous and autonomous weapon systems. Pakistan has supported a pre-emptive ban on lethal autonomous weapons systems (LAWS) at the CCW forum since 2017.

These cautionary echoes remind us that the trajectory of AI’s evolution holds both promise and peril. Fictional tales and real-world scenarios converge to underscore the importance of responsible AI development. By harnessing AI’s potential with foresight and ethical consideration, the world can leverage its transformative capabilities while avoiding unintended consequences.

The writer is a researcher at the Centre for International Strategic Studies (CISS), Sindh.
