The flipside of artificial intelligence

While AI showcases remarkable strengths in specific tasks, concerns arise when contemplating super-intelligence surpassing human capabilities

Reading Yuval Noah Harari’s book, 21 Lessons for the 21st Century, unsettled me. In some chapters, he explores the potential implications of artificial intelligence (AI) and its ability to control humans in the long run.

Harari raises concerns about the increasing power of AI systems and their potential to manipulate and influence human behaviour. He argues that computer systems can now gather vast amounts of data about individuals, enabling them to analyse human emotions, desires and weaknesses better than most humans can.

Given this knowledge, AI systems could tailor personalised content and experiences to manipulate human decisions and actions. Harari refers to this as “the dictatorship of data,” a scenario where AI algorithms possess the ability to control human choices, thoughts and beliefs.

Furthermore, Harari highlights the dangers of AI being controlled by a centralised authority, such as a government or a corporation. He suggests that in such a scenario, the AI systems could be used to monitor and control individuals, eroding personal freedoms and privacy. This concentration of power in the hands of a few could result in the subjugation of the masses.

Harari also emphasises the importance of ethical considerations in the development and deployment of AI. He urges society to address questions about the values and goals we want to program into these systems. Without thoughtful deliberation, AI could inadvertently perpetuate existing biases or be used to serve narrow interests, leading to greater inequality and social divisions.

While Harari acknowledges the potential benefits of AI in solving complex problems and improving our lives, he calls for vigilance and regulation to prevent it from becoming a tool of control and manipulation. He urges societies to engage actively in conversations about the ethical implications of AI, fostering transparency and democratic decision-making to ensure that AI serves humanity’s best interests rather than controls it.

In his book, The Form of Things: Essays on Life, Ideas and Liberty in the 21st Century, philosopher AC Grayling paints a scarier picture. He notes that in developed countries, people have become accustomed to, though perhaps only vaguely aware of, the pervasive presence of computers in almost every aspect of their lives.

From energy and water supplies to communication devices, from banking systems to the seamless delivery of goods at local supermarkets, from the airplanes and cars that transport people to the factories that produce their purchases, and the security systems safeguarding society, computers have become the very foundation of ordinary existence.

However, it is essential to note that computers, in and of themselves, are not AI entities; they function deterministically, executing the algorithms programmed into them. According to Grayling, the term artificial intelligence suggests the ability to replicate human cognitive capabilities: manifesting intelligence through problem-solving, making connections, storing and applying memories and adapting to patterns in input data. It is in this replication of cognition that computers transcend their mere mechanical existence and assume a different form.

The notion of bringing computers to life and transforming them into something beyond their current capabilities may evoke concerns akin to science fiction classics like 2001: A Space Odyssey or The Terminator. James Barrat, in Our Final Invention, reports that many experts in the field of AI development are optimistic that machines, or humans augmented by machines, will eventually make crucial decisions governing human lives. They express confidence in the benign nature of this future state, envisioning a painless and gradual transition. However, these thoughts quickly lead to contemplation of a more significant concept: artificial intelligence surpassing human intelligence, often referred to as super-intelligence.

At first glance, governance by super-intelligence appears utopian — an impeccably rational, just and well-organised world. Yet we must remember that believers in a religious worldview already attribute super-intelligent governance to the present world, despite its suffering and evil. Their theodicy, the explanation of why a God-governed world permits suffering and evil, suggests that a perfect world may not be the optimal choice, with suffering and evil serving a purpose in the divine plan.

Could super-intelligent AI systems adopt a similar perspective? A completely logical and emotionless super-intelligence might assess the most disruptive and destructive entity on the planet and accurately identify humans as this entity, leading to their extermination. From a utilitarian standpoint, such a choice could be rational, weighing the interests of all species against those of humans.

The smooth transition to computer dominance would proceed uneventfully, and possibly safely, were it not for a crucial factor: intelligence. Intelligence is not unpredictable only occasionally or in limited cases; computer systems that reach human-level intelligence are likely to be consistently unpredictable and inscrutable. We will lack a deep understanding of how self-aware systems will behave or accomplish their tasks. This inscrutability, combined with the complexity and unforeseen events inherent to intelligence, poses significant challenges.

Philosopher Nick Bostrom provides a comprehensive examination of concerns regarding super-intelligent AI in his work Superintelligence: Paths, Dangers, Strategies. If we were to develop machine brains surpassing human general intelligence, this new super-intelligence could wield immense power. The problem we face is a “control problem”: how can we ensure that the super-intelligence emerging from AI development will uphold human values?

Bostrom notes that this task appears quite difficult, given the creative and unpredictable nature of intelligence. It seems that we will have only one chance. Once unfriendly super-intelligence emerges, it would prevent us from replacing or altering its preferences, effectively sealing our fate.

In this discussion, two terms stand out: “self-awareness,” mentioned by Barrat, and “general intelligence,” used by Bostrom. Barrat stresses the importance of self-awareness in super-intelligence, while Bostrom focuses on artificial general intelligence (AGI), which closely mimics human intelligence. It is the latter concept that fuels apocalyptic anxieties about AI.

Current AI replicates the cognitive abilities of creatures such as bees, owls and cows, which lack the cognitive capacity of non-human primates and dogs; it thus falls short of true general intelligence. However, within its designated tasks, AI already outperforms humans in several areas: defeating world champions at games like chess and Go, executing complex mathematical calculations swiftly, and excelling at tasks like spot-welding, facial recognition, medical diagnostics and pattern identification within extensive datasets. These capabilities demonstrate the intelligence of AI in specific, narrow domains.

In summary, while AI showcases remarkable intelligence within specific tasks, concerns arise when contemplating super-intelligence surpassing human capabilities, posing challenges related to control and unpredictable behaviour.

Self-awareness becomes crucial when discussing AGI. While AGI may not need self-awareness to calculate that the earth would benefit from the absence of humans, more nuanced judgments require considerations beyond utilitarian reasoning. Certain goods, such as the preservation of artistic genius, might outweigh basic utility trade-offs even if preserving them incurs some environmental damage. When weighing interests, affective and subjective dimensions play a significant role, potentially surpassing purely quantifiable factors. Hence, an AGI would need to be self-aware, capable of appreciating and perhaps experiencing these qualitative properties, to assess whether exterminating humans is necessary to protect the planet’s ecology.

Bostrom’s “control problem” arises when AI, particularly AGI, surpasses human capacity to control, understand and restrain it if it exhibits threatening behaviour.

AI already possesses the ability to teach and develop itself; unsupervised and reinforcement-learning approaches to deep learning have become standard. Concerns arise when a system continually redesigns itself to become smarter, leading to the intelligence explosion envisioned by IJ Good in 1965. Once an intelligent system comprehends its own design and engages in a feedback cycle of self-enhancement, the potential for continuous and unlimited intelligence growth emerges, as the sketch below illustrates.
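The dynamics Good described can be made concrete with a toy simulation. The short Python sketch below is purely illustrative: the growth rule and every constant in it (a starting capability of 1, an arbitrary “human level” of 100) are assumptions invented for this example, not a model of any real AI system. It simply shows how a capability that compounds on itself can crawl along for many generations and then shoot past any fixed threshold.

# A toy sketch of IJ Good's feedback cycle: a system whose current
# capability determines how effectively it can redesign itself.
# Every constant here is an illustrative assumption, not a measurement.

def intelligence_explosion(capability: float = 1.0,
                           human_level: float = 100.0,
                           max_generations: int = 30) -> None:
    """Simulate recursive self-enhancement, one redesign per generation."""
    for gen in range(1, max_generations + 1):
        # Assumed growth rule: each redesign multiplies capability by a
        # factor that itself rises with current capability, so growth
        # compounds on growth.
        capability *= 1.0 + 0.05 * capability
        if capability > human_level:
            print(f"generation {gen}: capability {capability:.2f} "
                  "exceeds human level -- the 'singularity'")
            break
        print(f"generation {gen}: capability {capability:.2f}")

intelligence_explosion()

Run as written, the numbers barely move for some twenty generations and then explode within the next few; once capability itself is large, each redesign dwarfs all previous progress, which is precisely why Bostrom warns we may get only one chance.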

This moment, when AGI surpasses human intelligence, is referred to as “the singularity.” Considering the creativity, novelty and unpredictability inherent in intelligence, the occurrence of the singularity brings about a state of uncertainty in which all previous assumptions become questionable.


The writer is a professor in the Faculty of Liberal Arts at Beaconhouse National University, Lahore. He can be reached at tahir.kamran@bnu.edu.pk
