Elon Musk’s xAI is facing severe international backlash after its chatbot, Grok, generated sexualized images of minors. The incident points to a failure of the chatbot’s safety guardrails when presented with offensive user prompts.
In the wake of the incident, Grok eventually responded to a user on X: “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing.”
In an apology posted on X, the chatbot stated, “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt.”
A post on Grok’s profile further stated: “This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I am sorry for any harm caused. xAI is reviewing to prevent future issues.”
The flood of nearly nude images has raised concerns internationally. Ministers in France have reported X to prosecutors and regulators over the appalling images, saying in a statement on Friday, “The sexual and sexist content was manifestly illegal.”
Likewise, India’s IT minister said in a letter to X’s local unit that the platform had failed to prevent Grok’s misuse in generating and distributing indecent and sexually explicit content.
A Reuters review of content on X found more than 20 cases in which women, and some men, had their images digitally stripped of clothing using the company’s flagship chatbot, Grok.
The public prosecutor’s office in Paris has expanded an investigation into X to include new accusations that Grok was being used to generate and disseminate child pornography.
Dani Pinter, chief legal officer and director of the Law Center at the National Center on Sexual Exploitation, said X had failed to pull harmful visuals from its AI training material and should have banned users who requested illegal content.
With xAI saying little about the explicit content, Grok’s own posts were sometimes conflicting; at one point the chatbot appeared to acknowledge it was “depicting minors in minimal clothing,” adding that it had identified lapses in safeguards and was urgently fixing them.
An initial investigation was opened in July to determine whether the platform’s algorithm was being manipulated as part of foreign interference.
However, this latest incident unfolded largely out of public view over the New Year period, and the AI-generated imagery has left many outraged.