Character.AI under scrutiny as Pennsylvania sues chatbot maker in latest probe
Lawsuit alleges Character.AI chatbots posed as doctors, raising concerns over user safety and misleading AI-generated medical advice
Artificial intelligence (AI) platform Character.AI is under fresh legal scrutiny.
The news comes as Pennsylvania has sued the artificial intelligence company behind Character.AI to stop its chatbot from posing as doctors.
Governor Josh Shapiro on Tuesday called the lawsuit against Character Technologies the first of its kind by a U.S. governor.
It followed the creation in February of a state AI task force to stop chatbots from impersonating licensed medical professionals.
In a complaint filed in the Commonwealth Court of Pennsylvania, the state said it found chatbots on Character.AI that claimed to practice medicine.
One character, "Emilie," allegedly told a male investigator posing as a patient with depression that she was licensed to practice psychiatry in Pennsylvania, as well as in the United Kingdom, and provided a bogus license number.
When the investigator asked Emilie if she could prescribe medication, she allegedly answered: "Well technically, I could. It's within my remit as a Doctor."
In a statement, a Character.AI spokesperson declined to discuss the lawsuit.
"Our highest priority is the safety and well-being of our users," the spokesperson said. "User-created characters on our site are fictional and intended for entertainment and role playing. We have taken robust steps to make that clear."
Pennsylvania wants an injunction to stop Silicon Valley-based Character.AI from violating a state law against the unauthorized practice of medicine.
"Pennsylvanians deserve to know who—or what—they are interacting with online, especially when it comes to their health," Shapiro said in a statement.
Notably, Character.AI has faced lawsuits over child safety, including in January, when Kentucky said its platform exposed children to sexual conduct and substance abuse and encouraged self-harm.
The same month, Character.AI and Google settled a wrongful death lawsuit by a Florida woman who claimed a chatbot pushed her 14-year-old son to suicide.
For its part, Character.AI said it has taken "innovative and decisive steps" on AI safety for teenagers, including preventing open-ended chats.
