
Can AI be destructive?

By Maheen Shafeeq
May 24, 2023

On May 17, Sam Altman, the CEO of OpenAI – the company behind ChatGPT – called for artificial intelligence (AI) regulation. Testifying before a US Senate committee on the possible dangers of AI, he argued that the technology could be harnessed to change the world as profoundly as the printing press did.

However, he agreed that the dangerous consequences of AI must be addressed. A US senator echoed this view, noting that even though AI could have a revolutionary impact, it could also be as destructive as an atomic bomb.

Altman’s emphasis on the perils of AI and the necessity of regulating it is concerning, given that ChatGPT has taken the technology world by storm. The chatbot, built on generative AI and launched on November 30, 2022, has penetrated every sphere of life. In a short time, it has attracted over 96 million monthly visitors from business, academia, healthcare, finance and banking, technology and other fields.

ChatGPT is a powerful AI tool, a computer language model that generates human-like text in a conversational setting. With recent upgrades, it can also analyze pictures and videos, read a web design scribbled on paper and generate a website front end on its own. It was trained on about 300 billion words, an indication of the volume of data it learned from to create new content. It handles everyday tasks such as letter writing as well as complex ones such as corporate auditing, and has proven effective, with an accuracy of over 85 per cent. It has become popular quickly because it is as accessible and user-friendly as social media apps.

While inventions are evidence of a progressive society, it is essential to control them to ensure safety and security. This becomes more urgent as AI can turn the human-machine relationship into a master-slave relationship. That uncontrolled dynamic is already evident in the hold mobile phones have over us: from the moment we wake up to the moment we go to bed, we are consumed by our phones, and other aspects of life suffer because there are no rules governing their use. Similarly, if timely rules for AI are not adopted, it could consume us.

Undeniably, AI has benefits; however, unregulated AI has political, economic, social and military consequences. In the political sphere, generative AI and chatbots can help people assess the mandates of political leaders. They can also help leaders design informed campaigns, build their websites, run their social media, create content, interact with the public and answer questions.

Generative AI can also gauge public support and predict the outcome of elections. These are a few of its positives; however, unregulated AI could be used to manipulate both political leaders and the public. When ChatGPT was asked about the impact of AI on political communication and democracy, it answered that AI had the potential to significantly affect strategic and political communication, but that it is crucial for AI to be used ethically and for policymakers to consider its risks.

ChatGPT could generate this answer because it was trained on data, and with more data generated every day, AI will become even more capable. It can create deepfakes of political leaders, as happened with the fabricated video of Ukrainian President Zelensky surrendering to Russia. AI and chatbots could also be used to spread misinformation, disinformation and fake news, and to manipulate social media algorithms so that such content reaches the public. This is deeply concerning for public security and protection.

AI will also be able to enhance the productivity and efficiency of the economy. According to a Goldman Sachs report, generative AI is expected to raise annual global GDP by seven per cent over ten years. Generative AI such as ChatGPT or Google’s Bard – AI models that mimic human-like communication – displays levels of capability that most humans do not possess. This raises the most pressing concern: can super-intelligent AI take over human jobs?

While AI has assisted many in their daily activities, there is a constant fear of it replacing human jobs. For example, Uber and Careem drivers could lose their jobs to driverless cars, just as regular taxi drivers lost theirs to Uber and Careem. While generative AI might not completely replace humans, it can make some jobs redundant while creating new technical and non-technical ones. Those unwilling to adapt to these changes or learn new skills may end up on the losing side.

Similarly, ChatGPT could be helpful in various social sectors, most prominently academia. Students use it extensively to do their assignments or summarize articles. While this has reduced the burden on students, it has increased the burden on professors: an AI-generated assignment is difficult to distinguish from one written by a human, because generative AI reproduces text that closely resembles human writing. According to research by the University of Cambridge, using ChatGPT to generate ideas can be helpful, but total dependence on it would threaten integrity. Regulating ChatGPT and the generative AI bots that might follow it therefore becomes necessary to ensure academic integrity.

While generative AI applications have tremendous potential in the civilian sector, military establishments can reap their benefits as well. An official from the US Department of Defence (DoD) believes generative AI could help create military software, as the department has long struggled to attract coders to defence.

ChatGPT could benefit the military in decision-making, in developing tactics, strategies and courses of action, and in analyzing critical data from various sensors. It could also be used in emerging military technologies such as attack drones, missiles, jets and tanks. Most concerning would be the use of generative AI by developers of lethal autonomous weapon systems (LAWS), weapons that can target and kill people on their own once activated. This raises fears about accountability, responsibility and transparency. It therefore becomes vital to regulate AI, as over time it might become more intelligent and take independent actions, cutting humans out of the loop.

Though Italy has banned ChatGPT, many countries are reluctant to introduce laws regulating generative AI, believing such laws could put them on the losing end of the technology race. With rapid advances in AI, talk of banning the technology is irrelevant and not in the interest of technologically advanced states. It therefore becomes essential to stress regulation rather than a ban, as urged by the CEO of OpenAI and other figures in Silicon Valley.

According to Stanford University’s 2023 AI Index, about 37 AI-related bills were passed into law globally, but these laws regulate only limited aspects of AI at a national level. Progress in AI is accelerating worldwide, increasing competition and weakening the desire to regulate it, a failure that could carry indefensible costs in the future. Only adequate regulation can ensure that AI has a positive impact and does not become as catastrophic as an atom bomb.

The writer is a research analyst in emerging technologies and international security. She tweets @MaheenShafeeq