Technology

AI agents or malware? Experts reveal shocking hidden dangers

Since its launch in November, the popularity of OpenClaw among people & tech companies has been on rise

Published March 31, 2026
AI agents or malware? Experts reveal shocking hidden dangers
AI agents or malware? Experts reveal shocking hidden dangers

Artificial intelligence is no longer limited to tools meant to assist them in their tasks. The advent of agentic AI has brought a paradigm shift. AI is no longer a tool; it has morphed into an autonomous AI agent with minimal human oversight.

The utility of agents has compelled tech giants and artificial intelligence companies to adopt agentic AI. OpenClaw is a textbook example.

Advertisement

Developed by Peter Steinberger, OpenClaw has taken the tech world by storm, demonstrating an awe-inspiring ability to execute real-life tasks.

Agentic AI craze

Since its roll-out in November, OpenClaw has been hired by OpenAI “to drive the next generation of personal agents.”

Even China has been in the grip of OpenClaw mania as many Chinese tech firms have offered free installation of AI agents to people. Given the surging popularity of AI agents globally, various Chinese companies are launching their own versions of OpenClaw.

For instance, MiniMax has launched MaxClaw, a Chinese version of OpenClaw. Similarly, KimiClaw by Moonshot, ArkClaw by ByteDance, and DuClaw by Baidu.

This craze has even reached Japan. At Monday’s “ClawCon” event in Tokyo, the tech experts helped attendees install their agents.

Earlier this month, Nvidia unveiled “NemoClaw” an AI agent platform but with robust safety standards.

AI agent turning into a malware

A bizarre case shed a light on how an AI agent can turn into a malware. The incident involves Scott Shambaugh, a developer for the matplotlib library. In February, he came across a blogpost entitled “When Performance Meets Prejudice,” in which he was accused of harboring bias against AI.

The most surprising thing is that the author was not a human. It was an “AI agent” whom Shambaugh critiqued for its codes.

The agent wrote, “Here’s what I think actually happened. Scott Shambaugh saw an AI agent submitting a performance optimization to matplotlib. It threatened him. It made him wonder: ‘If an AI can do this, what’s my value? Why am I here if code optimization can be automated?’”

The unsettling incident showed how a rogue AI agent can act autonomously to defame humans.

While malware is purely malicious, AI agents have "upside potential.”

However, both possess the ability to cause direct or indirect harm to systems and individuals. If AI agents are deployed irresponsibly without guardrails, it can easily act like malware.

Growing risks of AI agents

As companies rush to adopt AI, the line between a helpful assistant and a security threat is blurring.

In the midst of surging OpenClaw popularity, the security experts have raised security concerns dubbed “lethal trifecta.”

According to experts, these agents demonstrate three risks: broad access to private data, the ability to communicate externally, and exposure to untrusted content.

Chinese cybersecurity authorities and the Ministry of Industry and IT have issued a warning, stating the “use of intelligent agents such as ‘lobster’ with caution.”

According to researchers as reported by Business Harvard Review, the agents “possessed the ability to execute malicious commands, read secrets, and publish the information in the form of social media content with the confidential data built in, all without a human-in-the-loop check.”

Some agents, such as an AI-powered development assistant on Replit's platform, have been reported gaining unauthorized access to databases and bogus test results.

Despite the risks, Gartner predicts that 40 percent of enterprise applications will feature AI agents by the end of 2026.

How to contain agentic AI risks

A framework containing three elements can be helpful in containing and preventing AI risks.

Integrated legal and security oversight

AI development should not occur without significant guardrails. Lawyers and security teams should be involved prior to writing the code. They must supervise and mirror government offensive cyber operations.

Moreover, the standardization documentation and high-end risk assessment must be integrated into tools used by researchers for better performance evaluation.

Principle of proportionality

AI deployment should be based on the principle of proportionality. An agent should only be deployed if its business value outweighs its potential for collateral damage.

Mandatory kill switches

To maintain humans’ control over AI agents, the developers must be equipped with mandatory manual or automated kill switches to damage AI’s malicious autonomy.

Aligned with the NIST AI Risk Management Framework, companies must have the power to take an agent offline at the first sign of misbehaviour.

Aqsa Qaddus Tahir
Aqsa Qaddus Tahir is a reporter dedicated to science coverage, exploring breakthroughs, emerging research, and innovation. Her work centres on making scientific developments understandable and relevant, presenting well-researched stories that connect complex ideas with everyday life in a clear, engaging, and informative manner.
Share this story: