Press Release: EDRi calls for swift action as EU probes X’s Grok over AI-generated harm

The European Commission has opened a DSA investigation into Grok, X’s AI chatbot. EDRi welcomes this decision and is calling for a swift resolution to this matter, to ensure that X complies fully with its DSA obligations and protects its users.

By EDRi · January 26, 2026

26 January 2026 – Today, the European Commission (EC) opened a formal investigation under the Digital Services Act (DSA) into Grok, the AI chatbot integrated into X, for allowing users to easily create and disseminate fake sexualised and nude pictures of real people, based on their photographs and without their consent.

The Commission states that X may have failed to assess and mitigate the systemic risks posed by its platform, including the dissemination of illegal content with “negative effects in relation to gender-based violence, and serious negative consequences to physical and mental well-being”.

EDRi very much welcomes the EC's decision, albeit a late one, as a necessary first step to address the serious and systemic harms caused by Grok. X's AI chatbot was put on the market and integrated into the platform without a meaningful risk assessment or adequate safeguards, despite the clear obligations the DSA imposes on X.

In recent weeks, Grok enabled the mass production and circulation of over 3 million non-consensual sexual images of women and minors in the 11 days before the company finally promised to make this impossible. X's chatbot has turned the platform into an infrastructure for mass AI-generated sexual abuse.


“Creating a global infrastructure that can easily produce and disseminate fake sexualised and nude images of real women and minors is simply hideous. Elon Musk and his company seem to treat the resulting harm as a political game and their response is wholly inadequate: it has neither stopped the abuse nor addressed the risks of a system designed to cause harm.”

Jan Penfrat, Senior Policy Advisor, EDRi

This is not an isolated incident. Last summer, the European Commission had already raised concerns when Grok generated antisemitic and extremist content, including praise for Adolf Hitler, which was distributed on X. The recurrence of harmful outputs points to a pattern of non-compliance and a failure – or refusal – to conduct adequate risk assessments before deploying generative AI systems at scale.

“The Commission must now investigate and act decisively: any delay enables further real-world harm. Stopping X’s harmful market behaviour is an exercise of regulatory sovereignty and must protect our fundamental rights.”

Jan Penfrat, Senior Policy Advisor, EDRi

This case underscores the urgent need for a strong and coherent EU digital rulebook that prevents systematic abuse, protects users, and ensures that technology serves society rather than exploiting and harming it. Robust enforcement of the DSA must go hand in hand with effective implementation of the EU’s broader digital rulebook, which must not be weakened by the current deregulatory agenda under the pretext of “simplification.” Laws like the DSA, DMA, AI Act and GDPR are, and must remain, Europe’s core protection against Big Tech-facilitated abuse and digital dependency.

EDRi urges the Commission to put forward binding orders that require X to implement meaningful safeguards in its chatbot and to have them verified by independent bodies before further deployment. The Commission should also impose a dissuasive fine if a violation is confirmed. EDRi also highlights that this is not solely a technology issue. What is happening on Grok reflects a broader misogynistic culture that normalises the sexualisation of women and children. Grok is exacerbating the issue, but unless we tackle the root causes of sexual violence and abuse, we will only be addressing one facet of the problem.

Finally, EDRi calls on EU institutions and politicians to reconsider their presence on X. A platform that trivialises sexualised abuse and amplifies extremism hands our digital spaces to those who profit from harm. X is not the public town square it pretends to be and cannot serve as a neutral space for democratic debate. Safer alternatives exist, and public institutions and political leaders should lead by example by investing in a sovereign, accountable, and rights-respecting Open Social Web.