‘From dating scams to fake lawyers’: OpenAI bans ChatGPT accounts over misuse
OpenAI says several ChatGPT accounts used social media to commit cybercrimes while posing as dating agencies, law firms or other legitimate organizations, amid ongoing global backlash against the harmful impact of chatbots.
OpenAI said that it had banned accounts linked to Chinese law enforcement, romance scammers and influence operations, including a smear campaign against Japan's first woman prime minister, in a report detailing the misuse of its ChatGPT technology.
The ChatGPT parent company said several accounts used its chatbot alongside other tools, including social media accounts, to carry out cybercrimes while posing as dating agencies, law firms and U.S. officials, among others.
Here are some details from OpenAI:
A small set of accounts that likely originated in China used OpenAI's models to request information about U.S. persons, online forums and federal building locations, and sought guidance on face-swapping software.
The same accounts also generated English-language emails to state-level U.S. officials or policy analysts working in business and finance, inviting targets to participate in paid consultations.
OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity involved orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.
A cluster of ChatGPT accounts used the chatbot to run a dating scam targeting Indonesian men and likely defrauded hundreds of victims a month, according to OpenAI.
OpenAI said the scam used ChatGPT to generate promotional text and ads for a fake dating service, luring users to join the platform and pressuring targets to complete several tasks requiring large payments.
Several accounts used OpenAI's models to pose as law firms and impersonate real attorneys and U.S. law enforcement, targeting fraud victims, OpenAI said.
OpenAI has previously cautioned that people should not rely on ChatGPT for any sort of medical, legal or financial advice, and that the chatbot's output is not authoritative or expert-reviewed.