‘From dating scams to fake lawyers’: OpenAI bans ChatGPT accounts over misuse
OpenAI says several ChatGPT accounts used social media to commit cybercrimes while posing as dating agencies, law firms, or other legitimate organizations, amid an ongoing global backlash against the harmful impact of chatbots.
OpenAI said that it had banned accounts linked to Chinese law enforcement, romance scammers and influence operations, including a smear campaign against Japan's first woman prime minister, in a report detailing the misuse of its ChatGPT technology.
The ChatGPT parent company said several accounts used its chatbot alongside other tools, including social media accounts, to carry out cybercrimes while posing as dating agencies, law firms and U.S. officials, among others.
Here are some details from OpenAI:
A small set of accounts that likely originated in China used OpenAI's models to request information about U.S. persons, online forums and federal building locations, and sought guidance on face-swapping software.
The same accounts also generated English-language emails to state-level U.S. officials or policy analysts working in business and finance, inviting targets to participate in paid consultations.
OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity involved orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.
A cluster of ChatGPT accounts used the chatbot to run a dating scam targeting Indonesian men and likely defrauded hundreds of victims a month, according to OpenAI.
OpenAI said the scam used ChatGPT to generate promotional text and ads for a fake dating service, luring users to join the platform and pressuring targets to complete several tasks requiring large payments.
Several accounts used OpenAI's models to pose as law firms and impersonate real attorneys and U.S. law enforcement, targeting fraud victims, OpenAI said.
OpenAI has previously cautioned that people should not rely on ChatGPT for any sort of medical, legal or financial advice, and that the chatbot's output is not authoritative or reviewed by experts.
