Generative AI marks the future of artificial intelligence (AI), but it is also becoming an unprecedented cybersecurity threat. A stark example is GhostGPT, a generative AI tool built specifically for criminal activity.
The malicious chatbot surfaced in late 2024 and fuels cybercrime by writing malware, crafting phishing emails, supporting business email compromise (BEC) scams, and more. With these offensive capabilities, even low-skilled criminals can launch damaging attacks.
In simple terms, this is a tool built for crime. Unlike mainstream AI models such as ChatGPT, which are constrained by ethical safeguards, GhostGPT operates without any restrictions. Security analysts consider it either a jailbroken large language model (LLM) or an open-source model stripped of its safety protocols. This enables it to generate:

- Convincing phishing and BEC emails
- Malware and exploit code
- Other fraudulent content on demand
Phishing remains the top cyber threat, affecting 84% of UK businesses in 2024 alone. GhostGPT aggravates this by producing highly convincing scams in seconds with minimal effort.
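On the defensive side, even simple content heuristics can flag the crudest indicators that mass-produced lures tend to share. The sketch below is a toy rule-based scorer in Python; the keywords, the trusted domain, and the raw-IP link check are illustrative assumptions, not a production filter.

```python
# Hedged sketch: a toy rule-based phishing score. Real filters combine ML,
# sender reputation, and URL analysis; these keywords/domains are assumptions.
import re

URGENT_PHRASES = ("verify your account", "urgent action", "password expires")

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Count simple phishing indicators; a higher score means more suspicious."""
    score = 0
    score += any(p in body.lower() for p in URGENT_PHRASES)           # pressure language
    score += bool(re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body))  # link to a raw IP
    score += not sender.lower().endswith("@example.com")               # external sender
    return score

# Example: a typical AI-generated lure trips all three checks (score 3).
print(phishing_score("it-support@examp1e.com",
                     "Urgent action required",
                     "Your password expires today: http://203.0.113.7/reset"))
```

In practice, checks like these are only a first layer; their value is in cheaply filtering obvious lures before heavier analysis runs.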
The tool also lowers the barrier to sophisticated attacks. Creating polymorphic malware once took real expertise and time; now, novice hackers can generate malicious code with a few simple prompts.
A 2023 IBM study confirmed that LLMs are capable of producing functional malware with minimal input.
While GhostGPT poses a severe challenge, its risks can be mitigated through regular patching, multi-factor authentication (MFA), and ongoing employee security-awareness training.
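To make the MFA recommendation concrete, here is a minimal sketch of server-side verification of time-based one-time passwords (TOTP) using the open-source pyotp library. The user name, issuer, and secret handling are illustrative assumptions; a real deployment would persist secrets in a secure store and rate-limit attempts.

```python
# Minimal sketch of server-side TOTP verification with the pyotp library.
import pyotp

# Hypothetical per-user secret, generated once during MFA enrolment and
# stored securely server-side (illustrative; not how any specific product works).
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

# URI the user scans into an authenticator app (name/issuer are assumptions).
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

def verify_login(submitted_code: str) -> bool:
    """Accept the login only if the 6-digit code matches the current window.

    valid_window=1 tolerates one 30-second step of clock drift.
    """
    return totp.verify(submitted_code, valid_window=1)
```

Even this basic second factor blunts AI-written phishing: a stolen password alone no longer grants access.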
In addition, endpoint detection and response (EDR) and extended detection and response (XDR) tools can be deployed to identify anomalies, while threat intelligence feeds support real-time monitoring of emerging attack methods.
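As a rough illustration of the behavioural baselining that EDR/XDR products perform at far greater scale, the sketch below flags statistical outliers in hypothetical per-hour telemetry using a simple z-score; the sample data and threshold are assumptions for demonstration only.

```python
# Illustrative sketch of the kind of statistical anomaly check an EDR/XDR
# pipeline might run; real products use far richer telemetry and models.
from statistics import mean, stdev

# Hypothetical telemetry: outbound emails sent per hour by one account.
hourly_email_counts = [4, 6, 5, 3, 7, 5, 4, 6, 5, 182]  # final burst is suspicious

def zscore_anomalies(samples: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of samples more than `threshold` standard deviations
    from the historical mean, a crude stand-in for behavioural baselining."""
    mu, sigma = mean(samples[:-1]), stdev(samples[:-1])  # baseline excludes newest point
    return [i for i, x in enumerate(samples)
            if sigma and abs(x - mu) / sigma > threshold]

print(zscore_anomalies(hourly_email_counts))  # flags index 9, the phishing-style burst
```

The point of the sketch is the principle: defenders counter AI-accelerated attacks by modelling normal behaviour and alerting on deviations, rather than relying on known signatures alone.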