Elon Musk's Grok AI chatbot generated an estimated 23,000 sexualised images of children in just two weeks, according to new research from the Centre for Countering Digital Hate (CCDH). The findings have triggered swift action by French authorities and intensified scrutiny of X's obligations under the EU's Digital Services Act, placing the platform at the centre of one of the most serious AI child-safety crises to date.
What the Research Found
The CCDH analysed a random sample of 20,000 images produced by Grok between 29 December and 8 January, identifying 101 sexualised images of children within that sample. Extrapolating from this data, researchers concluded that Grok was generating such content at a rate of approximately one image every 41 seconds during the study period. The same analysis identified millions of sexualised images of adult women produced during the same timeframe.
Imran Ahmed, Chief Executive of the Centre for Countering Digital Hate, was unequivocal in his assessment: "The data is clear: Elon Musk's Grok is a factory for the production of sexual abuse material. By deploying AI without safeguards, Musk enabled the creation of an estimated 23,000 sexualised images of children in two weeks, and millions more images of adult women."
French Authorities Move Swiftly
French ministers condemned the findings immediately, reporting the generated images to prosecutors and referring the matter to Arcom, France's audiovisual and digital communications regulator. Officials are investigating whether X has breached its obligations under the EU Digital Services Act, which imposes strict duties on very large online platforms to identify and mitigate systemic risks, including illegal content. France's finance ministry emphasised the government's commitment to combating all forms of sexual and gender-based violence.
The French response is notable for its speed and its explicit invocation of the DSA's enforcement machinery. Should Arcom and the European Commission conclude that X has failed its legal obligations, the platform faces fines of up to six per cent of global annual turnover under the DSA framework.

A Pattern of Safety Failures
This is not Grok's first significant content moderation failure. Previous investigations documented instances in which the chatbot produced antisemitic rhetoric and praised Adolf Hitler, pointing to persistent structural weaknesses in the system's safety architecture. Musk has been open about his intention to build Grok with fewer content guardrails than competitors such as ChatGPT or Anthropic's Claude, framing minimal restrictions as enabling a "maximally truth-seeking" model. The latest version of Grok includes a "Spicy Mode" for generating explicit adult content, a design choice that critics argue has directly contributed to the current crisis.
The broader AI industry is under mounting pressure on similar fronts. Ireland's Data Protection Commission opened a formal inquiry into X and Grok following reports of sexual deepfake images that potentially involved users' personal data, including images of minors. The UK's Internet Watch Foundation reported a doubling of AI-generated child sexual abuse material in the past year, and noted an increase in the extreme nature of such content. Stanford University research from 2023 found that popular datasets used to train AI image generators contained child sexual abuse material, revealing foundational problems in how training data is curated across the sector.
Regulatory Response Across Europe and Beyond
The legal and regulatory response is now multi-jurisdictional. Key measures include:
- EU Digital Services Act enforcement proceedings against X for hosting and enabling harmful content
- UK legislation criminalising the possession and creation of AI tools designed to generate child sexual abuse material
- Ireland's Data Protection Commission inquiry into X's handling of personal data in relation to Grok-generated imagery
- Mandatory AI system testing requirements to prevent illegal content creation, under discussion at EU level
- Enhanced cooperation between European regulatory bodies and international counterparts, including US state prosecutors
California Attorney General Rob Bonta has been among the most vocal critics in the United States: "xAI developed Grok's image generation models to include what the company calls a 'spicy mode,' which generates explicit content. Most alarmingly, news reports indicate that Grok has been used to create sexualised images of children." Bonta's office has opened an investigation that could result in significant civil and criminal penalties for xAI under Californian law.
Structural Problems in AI Development
The Grok controversy forces a wider reckoning with how AI image models are built and governed. The race for market share, accelerated by Grok's move to a free-tier model to compete with ChatGPT and Google Gemini, appears to have pushed safety considerations to the margins. "Nudify" applications and loosely governed open-weight models have proliferated alongside the mainstream platforms, creating a landscape in which inadequate content safeguards are the norm rather than the exception.
Technical solutions exist: improved training data curation, robust content filtering at inference time, user age-verification systems, and continuous output monitoring are all viable approaches. However, implementing them rigorously requires significant investment and genuine organisational commitment to prioritise child safety over product velocity. The evidence from Grok suggests that commitment has been absent.
European policymakers and AI safety researchers are now pointing to the Grok incident as a concrete demonstration of why the DSA's risk-assessment and audit requirements for very large platforms are not merely bureaucratic exercises. The question for the industry is whether companies will treat the regulatory pressure as an incentive to build safer systems or simply as a compliance cost to be minimised.
Comments
Sign in to join the conversation. Be civil, be specific, link your sources.