The world is deep in the age of AI, a field evolving at breakneck speed. The day may not be far off when humanity witnesses the dawn of "artificial superintelligence."
Given the trajectory of recent developments, leaders across the tech industry believe superintelligence could arrive in the near future. Among them is OpenAI CEO Sam Altman, who, while addressing delegates at the AI Impact Summit in New Delhi, offered his timeline for AI superintelligence.
Altman said, “On the current trajectory, we believe we may be only a couple of years away from the early versions of true superintelligence.”
"If we are right, by the end of 2028 more of the world's electronic capacity could reside inside of data centers rather than outside of them," he added.
The CEO of the ChatGPT maker acknowledged that his timeline could be wrong, but said the possibility bears serious consideration.
According to Altman, superintelligence at some point in its development curve “will be capable of doing better jobs than the CEOs of many companies and doing better research than our best scientists.”
The idea of artificial superintelligence is not new; several tech leaders, including Microsoft AI CEO Mustafa Suleyman, have discussed it as part of the "Singularity."
However, last year Suleyman set out Microsoft AI's goal of “humanist superintelligence.”
According to the Microsoft AI blog post, Humanist Superintelligence (HSI) pairs advanced AI capabilities with a design mandate to work for people and for humanity more generally.
"We think of it as systems that are problem-oriented and tend towards the domain specific. Not an unbounded and unlimited entity with high degrees of autonomy – but AI that is carefully calibrated, contextualized, within limits," the blog post read.
Artificial superintelligence refers to a hypothetical stage in AI development at which a machine's cognitive abilities far surpass those of any human across every conceivable domain, including social wisdom, creative work, and scientific reasoning.
As of 2026, the agentic AI phase is gaining ground, with systems approaching the point where they can plan and execute multi-step tasks autonomously.
If superintelligence arrives, a wave of disruptions would likely follow: widespread job automation, deepening inequality, intensifying AI governance challenges, and the risk that humans lose control over the values such systems pursue.
Humanity is unlikely to be prepared for a technological revolution of this magnitude, as the superintelligence stage could render many human capabilities obsolete.