Technology

Microsoft warns of AI double agents as enterprise adoption of AI agents surges

New security findings show many firms deploy AI agents without strong safeguards

By The News Digital
February 11, 2026
Microsoft warns of AI double agents as enterprise adoption of AI agents surges
Microsoft warns of AI double agents as enterprise adoption of AI agents surges

Microsoft has flagged serious security risks tied to AI agents in its latest Cyber Pulse Report, warning about so-called “AI double agents”. The report, published by the Redmond-based tech giant, examines how rising enterprise AI adoption may expose sensitive data.

It highlights that AI agents with excessive privileges but weak safeguards can be manipulated through prompt engineering attacks, effectively turning them into security threats. The findings are based on Microsoft’s first-party telemetry and research.

According to Microsoft, over 80% of Fortune 500 companies are now using AI agents built using low-code or no-code tools. The company warns that this kind of rapid roll-out raises concerns, especially when security protocols are not built into the system right from the beginning.

Microsoft noted in a blog post that human and AI agent teams are expanding globally. However, agents developed through simplified coding approaches may lack enterprise-grade protections.

The report urges businesses to strengthen observability, governance and Zero Trust security frameworks, which follows a “never trust, always verify” model, meaning no user or device is automatically considered safe.

The Microsoft Cyber Pulse Report introduces the term “AI double agents” to describe AI systems with too much access and insufficient oversight. Microsoft warned that bad actors could exploit these privileges, redirecting agents to perform harmful tasks.

Researchers documented cases where AI agents were misled by deceptive interface elements or manipulated task framing. In such scenarios, the agent follows harmful instructions embedded in otherwise normal content.

A multinational survey of over 1,700 data security professionals, commissioned by Microsoft from Hypothesis Group, found that 29% of employees use AI agents for unsanctioned work tasks.