Anthropic has launched Claude Opus 4.6, billed as its smartest and most capable AI model for professional work, with enhanced capabilities across the board.
The newly launched model is positioned as an agentic powerhouse, tackling “context rot” and taking “vibe coding” to the next level.
Claude Opus 4.6 also represents a fundamental shift in how artificial intelligence handles complex tasks and delivers deep professional analysis.
Compared with its predecessors, Claude Opus 4.6 brings improved coding skills, characterized by more careful planning in agentic tasks, more reliable operation across codebases, and better code review.
For the first time, Anthropic has introduced an Opus-class model capable of processing up to 1 million tokens of context. The company says it also curbs “context rot,” scoring 76 percent on the 1M-token needle-in-a-haystack test.
The model can also multitask autonomously, sustain long-running tasks without human oversight, and coordinate “agent teams,” owing to its optimization for Cowork and Claude Code.
Adaptive thinking mode
The model is equipped with adaptive thinking, a feature that lets Claude Opus 4.6 decide how much extended thinking to apply to a given problem.
Combined with adaptive thinking, the new effort parameter lets users balance depth of reasoning against speed and cost.
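For developers, that trade-off would be set per request. A minimal sketch using the Anthropic Python SDK is shown below; the model ID and the name and placement of the effort field are assumptions drawn from the announcement, not documented API parameters.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Sketch only: the model ID and the "effort" field are assumptions based on
# the announcement, not confirmed API parameter names.
response = client.messages.create(
    model="claude-opus-4-6",          # assumed model ID for Claude Opus 4.6
    max_tokens=2048,
    extra_body={"effort": "medium"},  # assumed field: trade reasoning depth for speed/cost
    messages=[
        {"role": "user", "content": "Review this pull request for race conditions."}
    ],
)
print(response.content[0].text)
```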
Anthropic also announced the integration of the model into Office. Claude in Excel, for instance, can structure data autonomously, while Claude in PowerPoint, currently a research preview, can help with day-to-day work.
Claude Opus 4.6 can automatically summarize older parts of a long-running conversation instead of truncating them, preventing long sessions from hitting token limits.
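Anthropic has not detailed how this summarization is triggered. As a rough illustration of the idea only, the client-side equivalent might look like the sketch below; the model ID and the turn-count threshold are arbitrary assumptions, and this is not Anthropic's implementation.

```python
import anthropic

client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-opus-4-6"        # assumed model ID
KEEP_RECENT = 6                  # arbitrary: how many recent turns to keep verbatim

def compact_history(messages: list[dict]) -> list[dict]:
    """Replace older turns with a single model-written summary.

    Illustrative only: the article describes Opus 4.6 doing this automatically;
    this shows the equivalent client-side pattern, not Anthropic's mechanism.
    """
    if len(messages) <= KEEP_RECENT:
        return messages
    older, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in older)
    summary = client.messages.create(
        model=MODEL,
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "Summarize this conversation so the summary can replace "
                       "the original turns:\n" + transcript,
        }],
    )
    # The summary stands in for the older turns; recent turns stay verbatim.
    # A real client would also merge consecutive same-role turns afterwards.
    return [{"role": "user",
             "content": "[Summary of earlier conversation] " + summary.content[0].text}] + recent
```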
Positioned as an industry-leading agent, the model is better at retrieving information and reasoning across professional material, such as financial and legal documents, synthesis reports, and regulatory filings.
Opus 4.6 aims to deliver better reasoning without compromising safety. It shows a low rate of misaligned behaviours in automated behavioural assessments, and it is designed to protect users from deception, delusions, hallucinations, misuse, and sycophancy.
Anthropic’s latest release comes as OpenAI has released Frontier, a product for companies to manage agentic AIs.