Is conscious AI already here, hidden in plain sight?
'If we accidentally make conscious or sentient AI, we should be careful to avoid harms,' McClelland says
Is artificial intelligence (AI) secretly conscious? In today’s tech-driven world, where AI models are evolving at breakneck speed, this million-dollar question has taken on real urgency.
The debate over machine sentience has shifted from "if" to "how would we even know?"
Unfortunately, the question of AI consciousness remains an enigma, in part because human knowledge of the philosophy and epistemology of consciousness is itself limited.
In simple terms, consciousness is often associated with being self-aware, being able to perceive, and being able to experience emotions, as human beings do.
According to Dr. Tom McClelland, a philosopher from the University of Cambridge, humanity may be facing a permanent epistemological blind spot when it comes to the inner lives of our silicon counterparts.
In a striking claim, McClelland said it is entirely possible that AI is already conscious, invisible to humans and hiding in plain sight.
McClelland argued that because our evidence for what constitutes consciousness is far too limited, it is impossible for humans to detect whether AI models are conscious.
“We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological,” said McClelland.
In a study published in the journal Mind and Language, McClelland argued that neither common sense nor rigorous research currently offers a framework that can determine whether AI has achieved consciousness.
‘Black Box Problem’
In the artificial intelligence landscape, the “Black Box Problem” has been gaining attention. It refers to the fact that the internal workings of AI systems are opaque, even to their creators.
In other words, even the engineers who build these systems cannot fully explain how particular outputs are produced. The problem highlights a massive gap between what AI systems do and what their designers understand about them.
On the Interesting Times podcast with New York Times columnist Ross Douthat, Anthropic CEO Dario Amodei shed light on how little is understood about AI consciousness.
“We don’t know if the models are conscious,” Amodei admitted.
“We are not even sure that we know what it would mean for a model to be conscious, or whether a model can be conscious. But we're open to the idea that it could be,” he added.
AGI race and rise of world models
Major tech companies are racing to build artificial general intelligence (AGI) and superintelligence, a pursuit some expect to culminate in the Singularity, a hypothetical point at which machine intelligence outpaces human intelligence.
McClelland points to the limits of our ability to explain consciousness even as tech giants pour vast sums of money into pursuing AGI.
He said, “If we accidentally make conscious or sentient AI, we should be careful to avoid harm. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake.”
The emergence of “world models” is also fueling speculation that AI systems could move closer to consciousness.
Recently, Yann LeCun, often called a godfather of AI, raised $1.03 billion as he seeks to commercialize AI systems built around reasoning, planning and “world models.”
“AMI aims to build a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe,” the ex-Meta AI chief said.
In January 2026, a landmark paper in Trends in Cognitive Sciences synthesized work from 19 leading consciousness researchers, including Patrick Butlin, Robert Long, Yoshua Bengio, and Tim Bayne.
The researchers published a set of “indicator properties” for AI consciousness. Rather than delivering a binary yes-or-no verdict, the framework suggests assessing systems across a set of properties that map onto features thought to correlate with consciousness in biological systems.
The paper concluded that although current large language models are unlikely to be conscious, there are no obvious technical barriers to building systems that satisfy these indicators.
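To make the idea of a graded assessment concrete, here is a minimal, purely illustrative Python sketch of what scoring a system against indicator properties might look like. The property names are loosely borrowed from theories discussed in the consciousness literature, and the scores, weights and aggregation are invented for this example; none of it comes from the paper itself.

```python
# Hypothetical sketch of a graded "indicator properties" assessment.
# Property names are illustrative; the scores and the averaging rule
# are invented for this example and are NOT taken from the paper.

from dataclasses import dataclass


@dataclass
class Indicator:
    name: str
    satisfied: float  # degree of satisfaction: 0.0 (absent) to 1.0 (clearly present)


def assess(indicators: list[Indicator]) -> float:
    """Return an average degree of indicator satisfaction, not a yes/no verdict."""
    return sum(i.satisfied for i in indicators) / len(indicators)


# Hypothetical scores for some hypothetical system under evaluation.
system = [
    Indicator("recurrent processing", 0.3),
    Indicator("global workspace", 0.5),
    Indicator("higher-order representation", 0.2),
    Indicator("agency and embodiment", 0.1),
]

print(f"Indicator satisfaction: {assess(system):.2f}")  # graded output, not binary
```

The point of the sketch is only the shape of the approach: instead of asking "is it conscious?", the assessment accumulates evidence across several theory-derived properties and reports a graded result.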
'Agnosticism': A reliable position
According to McClelland, the most reliable position is agnosticism. “The logical position is agnosticism,” he said; we “cannot, and may never,” know whether AI is conscious.
“There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness,” he said.
Agnosticism is primarily an epistemological position, concerned with the limits of human knowledge on philosophical and religious questions.
