News

2026 Pro-Human AI Declaration: How humanity can control artificial intelligence

72% of voters believe companies should be legally responsible for AI harms

March 05, 2026

In recent years, fears have grown that artificial intelligence is heading into dangerous territory, marked by AI hallucinations, the race toward superintelligence, the prospect of a technological singularity, and the eventual replacement of humans.

Given the rapid advancement of AI, many tech experts have voiced concerns that humanity could become an obsolete species within the coming years.

The renowned historian Yuval Noah Harari has warned of the rise of a massive "useless class": people who are not useless in a moral sense, but economically and politically irrelevant because they can no longer contribute to the system more effectively than AI.

Despite persistent warnings, the international community has turned a blind eye to the potentially intimidating nature of artificial intelligence.

To some relief, AI ethicists have issued another plea for the world to pay attention to the technology's risks.

On Wednesday, a coalition of leaders from various industries and tech experts announced the “Pro-Human AI Declaration.”

The declaration moves past vague ethical suggestions to demand hard-coded safeguards ensuring that humanity remains at the helm of increasingly autonomous systems.

The manifesto centers on a simple premise: “AI must serve humanity, not the reverse.” It is signed by prominent figures as diverse as Yoshua Bengio, Sir Richard Branson, and Susan Rice.

As agentic AI begins to autonomously manage complex tasks, handle military logistics, and enter chains of command, the declaration proposes five non-negotiable "humanity-first" pillars.


Humanity-in-the-loop

This pillar focuses on "meaningful and non-negotiable" human control over AI systems. According to one poll, Americans favor human control over development speed by an 8-to-1 margin. The key mandates include:

  • The Off-Switch: Humans must retain the authority to immediately shut down these powerful systems.

  • Superintelligence pause: The race toward superintelligence should be halted until there is broad scientific consensus and strong public buy-in.

  • Prohibition on rogue architecture: Companies should not build reckless systems that replicate themselves, control autonomous weapons, or resist being shut down.

Avoiding concentration of power

To avoid "societal lock-in," the declaration argues, absolute power should not be handed over to tech giants and the corporate sector. Instead, the experts call for shared prosperity and democratically governed societal transitions.

Protection of human experience

Strict bans should be placed on AI designed to replace human relationships, experiences, and emotions, or to exploit the cognitive vulnerabilities of children.

Companies should be required to subject AI systems to pre-deployment testing for known harms such as increased suicidal ideation and mental health disorders.

To ensure transparency, AI-generated output must be labelled and never masquerade as human.

Ensuring human agency

The ethical framework explicitly rejects AI personhood and promotes human data rights, liberty, and privacy. AI should not be allowed to exploit users' emotional and mental states.

Rather than eroding human cognitive abilities, AI must strengthen humans' intellectual independence.

Accountability for AI companies

Accountability promotes transparency. Tech companies should therefore be held liable for deploying unsafe AI models, and criminal penalties should apply to executives responsible for prohibited child-targeted systems or systems that cause catastrophic harm.

72% of voters believe companies should be legally responsible for AI harms. Moreover, AI systems working in professional domains must be held to the same ethical standards as their human counterparts.