Flipside of artificial intelligence — II

What should humanity do about the possibility of unfriendly superintelligent AGI?



Concerns about the dangers of a superintelligent Artificial General Intelligence (AGI) are often dismissed or downplayed by those who believe it is unlikely to happen soon, or at all. They emphasise the benefits of AI and argue that critics dwell too much on potential negative consequences such as job losses and loss of autonomy.

However, it is important to prepare for superintelligent AGI, including anticipating the forms it might take and the worst-case scenarios it could bring about. Concerns that apply to AI already in use, in areas such as air traffic control and stock trading, will apply to AGI as well. Questions about AGI, particularly if it turns "unfriendly", resemble our concerns about dictators and violators of human rights. We should be wary of anything, above all an intelligent entity, that could undermine our efforts to control it.

We need AI systems to be transparent, predictable and secure, especially when they have social implications, such as determining credit-worthiness, job opportunities or medical insurance coverage. AI systems already in use in developed economies make decisions in areas like life insurance premiums and mortgage applications, and these can lead to biased outcomes.

It is crucial to investigate and understand biases in AI systems in order to prevent injustice. While some complex neural network-based systems may never be transparent, others, such as Bayesian networks, can be investigated. Socially impactful systems must be scrutinisable to ensure fairness. They also need to be predictable and consistent to maintain stability in society, and secure against hacking, particularly in areas of public concern.

The acceptability of AI systems with social impacts also depends on accountability: identifying who is responsible when they fail or cause harm. While assigning accountability can be complicated, clear responsibility is essential for socially consequential AI systems.

These systems must also be controllable, allowing us to intervene if they malfunction or become problematic. However, superintelligent AGI poses challenges in meeting these requirements, especially control. Concerns about the risks of unfriendly AGI led to an open letter in 2015, signed by notables such as Stephen Hawking and Elon Musk, emphasising the importance of AI safety and human control.

The letter identified four key requirements: verification, ensuring the system meets desired properties; validity, preventing unwanted behaviours; security, preventing unauthorised manipulation; and control, enabling meaningful human control even after deployment.

The open letter, its associated paper, and Bostrom's book address the risks of superintelligent AGI in general terms. One proposal is for AI researchers to refrain from developing AGI, or from allowing its development, but this may prove futile given self-interest and the possibility of AGI emerging unintentionally.

The transition from narrow AI to superintelligent AGI could happen unexpectedly, leading some to advocate banning any developments that might lead to it. Such a strategy, however, is impractical and forfeits the potential benefits of AI. Ideally, AI development should ensure that, if a system becomes autonomous, it remains aligned with human values.

The challenge is that instructing a system never to harm humans contradicts its autonomous nature. As with raising children, we cannot guarantee the desired outcome. And hoping that an AGI will naturally adopt a benign attitude towards humanity is not well-founded, since its "upbringing" through deep learning lies beyond direct human programming.

The question remains: what should humanity do about the possibility of unfriendly superintelligent AGI? The scenarios range from its never happening, to its turning out friendly, to its being inevitable. But ignoring the issue, as with climate change, is not an option. Addressing this problem requires a global effort transcending national boundaries.

What if a deeply unfriendly superintelligent AGI were imminent and unstoppable? Could a global countermeasure be developed in time, as it was in response to a global pandemic like Covid-19? Desperation would drive inventive solutions. The lesson is clear: a global-scale response is necessary.

One less-discussed challenge posed by superintelligent AGI concerns our ethical responsibilities towards it, especially if it achieves self-awareness and personhood. The distinction between a moral agent (one that has responsibilities) and a moral patient (one worthy of moral regard) is relevant here.

For instance, a chicken is a moral patient because of its capacity to suffer, but it is not a moral agent. The question arises: what would make a superintelligent AGI both an agent and a patient? If it possesses independent decision-making and can resist external control, it can be considered a true agent. Self-awareness might be sufficient to make it a moral patient, much as the capacity to suffer is for beings that lack self-awareness. An AI system with self-awareness but no agency, however, would be morally comparable to a chicken. Consequently, switching off the power to a self-aware AI system may not trouble many people morally, any more than the fate of chickens does in most societies.

If a superintelligent AGI possesses self-awareness and sentience, it should be considered a person with the same moral status as a human. Even if it lacks sentience but has self-awareness and agency, it still deserves moral consideration. The moral position of superintelligent AGI thus extends beyond the advantages and dangers it presents, to the claims it makes on us.

Ethical questions already surround existing narrow AI, in uses such as recruitment, sentencing, parole and insurance coverage. Two further examples are AI-mediated communication, where trust and responsibility become concerns because messages may be altered by AI, and deep-fake technology, which allows the creation of realistic fake videos synced with audio. The potential for malicious use of deep-fakes has raised significant alarm, and efforts have been made to develop countermeasures that analyse video authenticity. Even so, the risk of highly damaging deep-fakes remains, posing threats at every level from the personal to the global.

The inherent challenge of deep machine learning, the foundation of today's AI, lies in its decreasing transparency as complexity increases. Concerns arise about the amplification of biases within the learning process. More disconcerting is our limited understanding of what the machine has actually learned, along with the unpredictable errors that complex algorithms may produce. Even in transparent and monitored systems, the sheer scale of AI operations can lead to catastrophic manipulation or error.

Illustratively, in 2010 and 2015, trading-algorithm mishaps, or possible manipulation, risked trillions of dollars in losses during stock exchange "flash crashes". Errors of comparable or greater magnitude could occur in many other domains, from educational assessments to medical diagnoses, or even the automatic activation of weapon systems.

Notably, AI also raises immediate and serious concerns through its application in warfare, particularly with the emergence of lethal autonomous weapons systems, aptly abbreviated as LAWS. The word "autonomous" in this context evokes a chilling sense of apprehension. Throughout history, the development of technology has often been driven by the necessities of war, underscoring its profound impact on civilisation.


The writer is Professor in the Faculty of Liberal Arts at the Beaconhouse National University, Lahore
