EU launches investigation into X’s AI tool Grok


The European Commission has launched an investigation into Grok, the AI tool integrated into the social media platform X, over the generation and spread of sexually explicit images, including potential child sexual abuse material.

It follows an outcry over the spread of sexually manipulated images, including those of children.

The investigation will take place under the EU’s Digital Services Act (DSA).

“Sexual deepfakes of women and children are a violent, unacceptable form of degradation,” said EU Commissioner for Tech Sovereignty Henna Virkkunen.

“With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens – including those of women and children – as collateral damage of its service.”

The Commission said it was coordinating closely with Ireland's media regulator, Coimisiún na Meán, given that X’s European headquarters is located in Ireland.

In a statement, Coimisiún na Meán said it welcomed the opening of a formal investigation “following our intensive engagement with the European Commission on this issue in recent weeks”.

The statement added: “European and Irish law puts clear responsibilities on online platforms relating to illegal content and legislation is underpinned by a pan-European system of regulatory enforcement overseen by the European Commission and national regulators. We are ready to play our part in the investigation.

“Our contact centre is available to provide advice and support to people who are concerned about what they encounter online and the information we receive from people helps us to do our job of holding platforms to account. There is no place in our society for non-consensual intimate imagery abuse or child sexual abuse material.”

The Commission has said X has an obligation under the DSA to carry out thorough risk assessments when it comes to illegal content on its platform, and that the company failed to include any risk assessment of Grok.

“When you look at the risk assessment reports that X has to publish and submit to the Commission under the DSA, this is an obligation,” said Commission spokesperson Thomas Regnier.

“These are publicly accessible. You can see them. There is one truth: Grok is nowhere in these risk assessment reports. What does this mean? It means that X has simply not assessed the risk that Grok, or the Grok features integrated into X, are posing to our citizens in the EU.

“This is already a fundamental issue in that we expect companies to mitigate risks stemming from their services and to make sure that they get their house in order, which doesn’t seem to be the case here.”

In a statement, the Commission said risks had “materialised”, exposing citizens in the EU to serious harm.

The Commission said that under the DSA, Elon Musk’s company was obliged to diligently assess and mitigate systemic risks, “including of the dissemination of illegal content, negative effects in relation to gender-based violence, and serious negative consequences to physical and mental well-being stemming from deployments of Grok’s functionalities into its platform”.

Separately, the Commission extended the investigation into X that it opened in December 2023 concerning the platform’s so-called “recommender” systems, the algorithms that suggest content to users based on analysis of past browsing behaviour, preferences and data.

That probe is looking at whether X has properly assessed and mitigated all systemic risks associated with its recommender systems, including the impact of its recently announced switch to a Grok-based recommender system.

The Commission said: “In preparing for this investigation, the Commission has closely collaborated with Coimisiún na Meán, the Irish Digital Services Coordinator. Further, Coimisiún na Meán will be associated to this investigation, pursuant to Article 66(3), as the national Digital Services Coordinator in the country of establishment in the EU.”

The Commission said it would send additional requests for information to X, and would also conduct interviews or inspections of the company.

Under the DSA, the opening of formal proceedings gives the Commission powers to take further enforcement steps, such as finding X in non-compliance.

Grok is an AI tool developed by Elon Musk’s xAI. Since 2024, it has been integrated into the platform, allowing users to generate text and images and providing contextual information on users’ posts.

Under the DSA, X has been designated as a very large online platform (VLOP) which means it is obliged to “assess and mitigate any potential systemic risks related to its services in the EU”, according to a Commission note.

These risks include the spread of illegal content and potential threats to fundamental rights, including those of minors, posed by its platform and features.

In December, the European Commission fined X €120m over its use of deceptive design, the lack of advertising transparency and insufficient data access for researchers, which it is obliged to provide under the DSA.

EU officials pointed out that national regulatory and crime prevention authorities had a role in tackling the dissemination of non-consensual and illegal images.

“We establish a case that gives us the context that this is really systemic,” said a senior EU official.

Officials say Grok has been under observation for some time, due to a surge in antisemitic material associated with the AI tool on X last autumn. Technical staff at the European Centre for Algorithmic Transparency (ECAT) in Seville were also involved in monitoring Grok.

Following that, the Commission requested information on what risk assessments X was carrying out on the role of Grok, pushing the platform to make changes.

“In the last ten days [we had] indeed a very intense interaction with X. It’s a company: they decide what they decide, but I daresay that without our interaction, probably none of these kinds of changes that they have done would have appeared,” the official said.

However, officials say that despite taking some steps, X still had not gone far enough to remove the “systemic” risk, pointing out that the company appears to regard Grok as a separate entity.

Officials say they have been in close contact with compliance officers at X, and that they had been “cooperative” and had come forward with information.

Following the €120m fine levied at the end of 2025, X was given three months to pay.

Regina Doherty, a Fine Gael member of the European Parliament representing Ireland, said in a statement that she welcomed the Commission’s decision to open a formal investigation.

“When credible reports emerge of AI systems being used in ways that harm women and children, it is essential that EU law is examined and enforced without delay,” Ms Doherty said.

Ms Doherty said the images had exposed wider weaknesses in how emerging AI technologies are regulated and enforced.

“The European Union has clear rules to protect people online. Those rules must mean something in practice, especially when powerful technologies are deployed at scale. No company operating in the EU is above the law,” she added.

Another Fine Gael MEP, Maria Walsh, said the European Commission should immediately suspend the use of Grok within the EU while it conducts the investigation.
