
WormGPT: Hackers get power-up as ChatGPT's evil AI twin hits dark web

Anonymous creator of WormGPT calls chatbot "the biggest enemy of the well-known ChatGPT"

By Web Desk
July 18, 2023
This picture shows the interface of WormGPT. — The Hacker News/File

Researchers have warned that a ChatGPT-style AI tool free of any "ethical" restrictions or limitations is opening new avenues for hackers to carry out attacks at an unprecedented scale.

Cybersecurity firm SlashNext found the generative AI tool WormGPT being promoted on dark web cybercrime forums. It is a "sophisticated AI model" that can generate human-like text for use in hacking operations.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” the company explained in a blog post.

“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

The researchers put WormGPT to the test, instructing it to create an email designed to pressure an unsuspecting account manager into paying a fraudulent invoice, The Independent reported.

Leading AI tools like OpenAI's ChatGPT and Google's Bard have built-in safeguards to stop people from abusing the technology for malicious ends, but WormGPT is allegedly built to facilitate criminal activity.

According to the researchers, the experiment showed that WormGPT was capable of creating emails that were "not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks."

The anonymous developer of WormGPT posted screenshots to a hacking forum demonstrating the various tasks the AI bot can perform, including composing phishing emails and writing malware code.

As it enables users to "do all sorts of illegal stuff," WormGPT is "the biggest enemy of the well-known ChatGPT," according to its creator.

According to a recent report by the law enforcement organisation Europol, large language models (LLMs) like ChatGPT may be used by cybercriminals to carry out fraud, impersonation, or social engineering attacks.

“ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes,” the report noted.

“Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.”

LLMs allow hackers to conduct cyberattacks "faster, much more authentically, and at significantly increased scale," according to a warning from Europol.

The dangerous AI chatbot was discovered after tech experts, government officials, and even the creator of ChatGPT highlighted the risks posed by the latest technology and pressed legislators to introduce regulations ensuring its safe use.