The AI threat

AI proliferation can eventually lead to a reduction in the development of critical thinking skills in humans

By Eesha Afzal | October 05, 2025

The people who shut their eyes to reality simply invite their own destruction; anyone who insists on remaining in a state of innocence long after that innocence is dead turns himself into a monster.

— James Baldwin


The sudden rise of artificial intelligence, in the form of large language models (LLMs), has over the past few years prompted considerable debate about the usefulness or harm these models may bring. I argue here that the harm is not well understood, particularly by the lay public.

We are already seeing increasing AI-caused harm in various realms, especially to human intellect and self-expression. A research paper titled Your Brain on ChatGPT, posted on Cornell University's arXiv preprint server, concludes that the unregulated use of such tools could restrict the development of human intelligence, and in particular reduce the development and exercise of critical thinking skills among the masses. Large corporations and the political elite can gain substantial benefit from an illiterate crowd that only needs to be fed a narrative serving the major players' agendas. The easiest way to achieve such indoctrination is through the biased programming of chatbots.

The masses are being told that the LLMs they use come at a cost. This cost is not limited to cognitive impairment alone; it includes severe environmental risks, democratic deterioration and the exploitation of public resources.

Karen Hao, the American journalist and author of Empire of AI, identifies these costs in her brilliantly researched work. She documents the depletion of freshwater and land resources resulting from some companies' unchecked consumption in the service of generative AI development. She argues that these companies operate as techno-authoritarians, trampling democratic principles by not consulting the inhabitants of the areas where their data centres are located about the irreparable harm these facilities cause to their surroundings through their enormous use of natural resources and energy.

The Environmental and Energy Study Institute (EESI) has estimated that a single data centre's water consumption can be as high as five million gallons a day, equivalent to the water use of a town of 10,000-50,000 people. The Citizen Action Coalition, an Indiana-based non-profit organisation, has stated that AI companies often use shell companies or secret project code names when siting their new data centres, in order to deny the public knowledge of their plans until local approvals have been secured. The Hoosier Environmental Council says that generative AI data centres are hyper-scale facilities requiring very large quantities of water and energy.

Importantly, these data centres are expanding. It is a fair inference that there will be greater exploitation of public resources and human labour, and that we may witness an exponential increase in carbon emissions. The subject has been explored in detail in Sourabh Mehta's article How Much Energy Do LLMs Consume? Unveiling the Power Behind AI, written for the Association of Data Scientists.

Vercept co-founder Oren Etzioni, in an interview with the HBS Institute for Business in Global Society, addresses some of the 'myths' about AI and frames the question of harm as one arising solely from misinformation rather than from any tangible threat to humanity. His advice to viewers is to learn to use AI more efficiently so that they are not left behind. The fallibility of such statements becomes apparent when CEOs profiting from this technology dress it up as a tool to boost productivity in all fields. The uncomfortable truth is that these tools create dependency, from trivial daily chores to important vocational tasks.

Etzioni's claim that people cannot separate fact from fiction, and that they treat AI as being on the path to sentience, is inherently flawed. Chatbots today are not under fire because they are seen as Harlan Ellison's 1967 antagonist AM (a sentient AI hell-bent on causing human suffering); rather, they are under fire for their proven intellectual unreliability, their impact on users' cognitive abilities, their role in widening income inequality, and their violation of users' privacy and ownership of their data.

Way forward

What, then, must be done to keep up with the times? People must do two things to maintain their autonomy: show individual resistance and collaborate for collective action. Resistance is a simple, personally beneficial step. It consists of using one's own intellect and reasoning instead of asking chatbots to do the 'heavy lifting'.

Do not succumb to dopamine hits from constant mental stimulation on mind-numbing social media platforms. Invest time and energy into reading the classics with the aim of improving focus, cognition and literacy. Let the words of Homer, Goethe, Lermontov, Thucydides, Milton, Stendhal, Cellini and the like transform you.

Let's be clear about this: chatbots are merely predictive algorithms; their abilities depend on the amount of data they absorb. They do not possess the ability to form original thought, a trait unique to human beings. Even generative AI models, as the International Business Machines Corporation (IBM) has explained, are only able to produce 'original' content by training on massive volumes of raw data. Remember that IBM is itself a company whose platforms are built to integrate AI into modern businesses.

Collective action for self-preservation is more difficult. It requires great resolve and resilience. However, it can be highly effective, as it allows the masses to have a say in how these technologies are assimilated and in what capacity, or to what extent, they can operate. This can, hopefully, give people more protections and safeguard their civil liberties and human rights - intellectual as well as labour-related.

We are on the cusp of a new era. Our aim should be to press forward the conversation around the implementation of universal human rights and safeguards in an increasingly AI-operated world.


The writer can be reached at: eeshafzl@gmail.com