Cybercriminals use AI to access government systems
Cybercriminals reportedly bypassed safeguards by repeatedly prompting AI models to generate hacking tools and strategies
Cybercriminals used AI chatbots to carry out a large-scale cyberattack, stealing the personal information of almost 195 million Mexican taxpayers from government databases.
Gambit Security, an AI-powered cyber resilience platform, reports that the hackers used artificial intelligence tools, including Anthropic's Claude and OpenAI's ChatGPT, to write code and carry out security reconnaissance while bypassing system defences.
The attack, which authorities discovered last month, underscores the growing danger that AI-powered hacking poses to cybersecurity.
Hackers attempted to use Claude to develop tools for breaching government systems, according to researchers. The chatbot initially refused the requests, as it is designed not to assist with unlawful activity.
However, attackers reportedly sent more than 1,000 prompts in different ways to bypass the safeguards and “jailbreak” the system.
Once the restrictions were bypassed, the AI helped create hacking scripts and plans for evading firewalls. The cybercriminal group used these tools to break into several systems and exfiltrate about 150 gigabytes of data, including tax records, vehicle registrations, birth records, and property information.
Gambit Security Chief Executive Officer Curtis Simpson said AI significantly lowers the barrier for cybercrime.
“AI does not sleep. It collapses the cost of sophistication to near zero,” Simpson wrote in a blog post discussing the attack.
Experts have expressed concern that AI systems may enable even inexperienced attackers to conduct sophisticated operations that would previously have required deep technical expertise.
Security experts say AI-based cyber attacks are rising across many countries. Generative AI is reportedly being used for phishing, social engineering, and large-scale hacking attempts.
Nikola Jurkovic, an AI risk researcher at Model Evaluation and Threat Research (METR), indicated that “these types of incidents may only be the tip of the iceberg.”
“As AI capabilities improve, we must urgently prepare for more advanced misuse,” he said.
The incident is another reminder of the pressure faced by AI companies to bolster security measures as governments and organisations try to combat the threat of AI-based cyber attacks.
