China’s cybersecurity authority has issued a warning about OpenClaw, an emerging agentic AI tool, citing serious AI security risks and vulnerabilities that could expose users to cyberattacks.
The alert was issued by the National Computer Network Emergency Response Technical (CERT) team, which cautioned that OpenClaw’s weak default security settings could allow hackers to exploit the system.
The agency said the warning follows a surge in OpenClaw downloads in China, driven by cloud platforms offering quick deployment options.
According to the CERT, OpenClaw’s default configuration is poorly secured, leaving the tool open to exploitation through malicious web pages and plugins.
The agency had earlier warned that vulnerabilities in the AI system could be exploited to steal credentials, potentially paving the way for more serious cyberattacks.
Officials also flagged the risk of user error, noting that OpenClaw users may inadvertently delete or expose critical data. To reduce the risk of exploitation by malicious actors, the CERT advised running the agentic AI tool inside a container and keeping its management ports off the public internet.
The CERT further recommended that OpenClaw users be subject to stringent authentication, that automatic updates be disabled, and that the system block access to any external plugin that could expose it to security threats.
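The CERT’s hardening advice can be sketched as a generic container deployment. This is an illustrative assumption only: the image name, port number, and `AUTO_UPDATE` environment variable below are hypothetical, not OpenClaw’s actual interface.

```shell
# Hypothetical sketch of the CERT's recommendations: run the tool in a
# container with reduced privileges, and bind its management port to the
# loopback address only so it is never reachable from the public internet.
docker run -d \
  --name openclaw \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  -p 127.0.0.1:8080:8080 \
  -e AUTO_UPDATE=disabled \
  openclaw:1.0.0
```

Binding the port mapping to `127.0.0.1` (rather than the default `0.0.0.0`) keeps the management interface local to the host, and pinning a specific image tag stands in for disabling automatic updates.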
The warning comes at a time when some of China’s biggest technology companies are starting to use the technology.
Tencent, for instance, has used it to launch a new product dubbed Work Buddy that lets users link up to five different chat platforms within minutes.
Despite its popularity, however, fears about the cybersecurity of AI are rising. Earlier this month, research firm Gartner described it as an “unacceptable cybersecurity risk” for business users and advised that it be run only in a test environment.