‘From dating scams to fake lawyers’: OpenAI bans ChatGPT accounts over misuse

OpenAI says several ChatGPT accounts used social media to commit cybercrimes while posing as dating agencies, law firms, or other legitimate organizations

By Hafsa Naeem Baig
|
February 26, 2026

Amid an ongoing global backlash against the harmful impact of chatbots, OpenAI said that it had banned accounts linked to Chinese law enforcement, romance scammers and influence operations, including a smear campaign against Japan's first woman prime minister, in a report detailing the misuse of its ChatGPT technology.


The ChatGPT parent company said several accounts used its chatbot alongside other tools, including social media accounts, to carry out cybercrimes while posing as dating agencies, law firms and U.S. officials, among others.

Here are some details from OpenAI:

A small set of accounts that likely originated in China used OpenAI's models to request information about U.S. persons, online forums and federal building locations, and sought guidance on face-swapping software.

The same accounts also generated English-language emails to state-level U.S. officials or policy analysts working in business and finance, inviting targets to participate in paid consultations.

OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity involved orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.

A cluster of ChatGPT accounts used the chatbot to run a dating scam targeting Indonesian men and likely defrauded hundreds of victims a month, according to OpenAI.

OpenAI said the scam used ChatGPT to generate promotional text and ads for a fake dating service, luring users to join the platform and pressuring targets to complete several tasks requiring large payments.

Several accounts used OpenAI's models to pose as law firms and impersonate real attorneys and U.S. law enforcement, targeting fraud victims, OpenAI said.

OpenAI has previously said that people should not rely on ChatGPT for any sort of medical, legal or financial advice, and that information provided by the chatbot is not authoritative or expert-reviewed.
