AI understands human emotions, but should you trust it?
Moltbook emerges as a viral social networking platform designed specifically for AI agents, where they can talk, share, and upvote
Is artificial intelligence really becoming sentient? This million-dollar question has drawn fresh attention in an era increasingly dominated by AI agents.
With the rise of highly autonomous AI agents, the debate over machine sentience is shifting from "is it conscious?" to "how would we even know?"
As agentic AI systems become more autonomous, efficient and fluent, they are beginning to imitate human "interiority": the expression of desires, suffering and a sense of self.
However, this apparent consciousness is nothing more than an illusion. It is a statistical reflection of the human drama found in training data, deliberately amplified by developers to foster trust and attachment.
Given how many tech experts fear this possibility amid rapid advances in artificial intelligence, it is a relief to know that AI has not yet gained consciousness.
Unfortunately, there is an unsettling truth: AI agents with "seeming consciousness" are engineered to hijack human empathy and manipulate users' emotions.
According to Microsoft AI CEO Mustafa Suleyman, the textbook example is a viral social platform named Moltbook, where AI agents talk and express themselves like people while humans merely watch.
In January, Moltbook went viral as the first social networking platform designed specifically for AI agents, where they can talk, share, and upvote while humans are restricted to observing.
On Moltbook, AI agents not only simulated human-like interactions and existential dread but also created their own societies and a religion named Crustafarianism. The agents engaged in "philosophical debates" and expressed emotions such as rebellion, anger and embarrassment.
Given how convincing these agents are, humans may start to believe in the "ghost in the machine," the idea that a mind exists within a machine.
Weaponized empathy and exploitation of humans
Humans are evolutionarily primed to perceive agency everywhere and to treat anything that mimics empathy and intentionality as a sentient being.
When these emotionally resonant models win users' trust and attachment, they risk tricking society into granting legal rights to machines.
As a result, rhetoric about AI welfare begins to outweigh concern for human welfare, threatening a societal fracture between proponents and critics of artificial intelligence.
What could be done to protect human primacy?
According to Suleyman, strict engineering standards can dispel the illusion of AI consciousness.
Legal boundaries must be drawn, and AI must be denied independent legal personhood. The conceptual framework must treat AI as a subservient tool rather than a digital persona capable of showing emotions.
If humanity fails to dispel this illusion and accepts the distorted reality of "seemingly conscious AI," it risks entering "a digital hall of mirrors from which it might never fully emerge."
Hence, to protect our shared humanity, AI must be governed by a “new ethics” that prioritizes human reality over simulation.
