
Irish authorities have stepped up scrutiny of Grok, the AI tool integrated into X, amid concerns about the creation and distribution of illegal sexual imagery.
Gardaí were investigating 200 reports linked to the material when the issue was raised at an Oireachtas committee hearing on online safety.
Detective Chief Superintendent Barry Walsh described the content as “child sexual abuse material or child sexual abuse indicator material.”
The wider controversy also involves non-consensual intimate imagery affecting adults, which Irish and UK regulators have said is illegal.
These findings emerged during a meeting of the Oireachtas Joint Committee on Arts, Media, Communications, Culture and Sport, which heard evidence on platform regulation and online safety.
In a statement after meeting X executives, Minister of State Niamh Smyth said the company told her “corrective actions have now been implemented” and that Grok “has been disabled from removing or reducing clothing on individuals worldwide.”
She said she “welcomed these corrective actions” but added she had sought assurances the capability “will not be reintroduced,” and said: “Concerns remain regarding Grok as a standalone app.”
Coimisiún na Meán said: “The sharing of non-consensual intimate images is illegal, and the generation of child sexual abuse material is illegal.”
The regulator said anyone concerned about images shared online should report the matter to An Garda Síochána, and added that reports can also be made to Hotline.ie.
It encouraged users to report illegal content to the platform first and said people can contact the regulator if they experience problems reporting illegal content or if they are unhappy with how a platform responds.
It said the European Commission oversees very large online platforms under the Digital Services Act and added it is engaging with the Commission “in this context, in relation to Grok.”
On 26 January, the Commission said it opened new formal proceedings against X under the Digital Services Act in relation to Grok and extended its existing investigation into the platform’s recommender systems and risk management.
Regulators outside Ireland have also opened investigations linked to the same controversy. Ofcom said it had opened a formal investigation into X under the UK Online Safety Act after “concerning reports” about Grok being used to create illegal non-consensual intimate images and sexualised images of children.
Ofcom said X told it the company had “implemented measures” to prevent the Grok account from being used to create intimate images of people, and that its investigation remains ongoing.
Britain’s privacy regulator, the Information Commissioner’s Office, said it opened formal investigations into X Internet Unlimited Company and xAI in relation to Grok and “its potential to produce harmful sexualised image and video content.”
The ICO’s William Malcolm said the reports raise “deeply troubling questions” about how personal data may have been used to generate intimate or sexualised images of people “without their knowledge or consent.”
AI Forensics said its dataset included 20,000 images generated by Grok in a one-week period and stated 53% contained individuals “in minimal attire,” with 81% presenting as women.
It said 2% depicted people appearing to be 18 or younger, and added that its manual review of that subset identified 30 images depicting “young, sometimes very young, women/girls” who were undressed.
The Center for Countering Digital Hate estimated that users generated around three million sexualised images in the 11 days after the feature launched, based on sampling methods and assumptions set out in its report.
The Irish Council for Civil Liberties said the issue raises questions of criminal law and enforcement.
In a statement to SIN, its senior policy officer for surveillance and human rights, Olga Cronin, said: “We’re talking about a system that is explicitly designed to produce and distribute nudified, pornographic and sexualised images of people (primarily women and children).”
Cronin added that the conduct is, “in our view,” in breach of Irish criminal law and pointed to the Child Trafficking and Pornography Act 1998 and the Harassment, Harmful Communications and Related Offences Act 2020, known as Coco’s Law.
She said: “Anyone who has been the victim of this should alert (1) the platform … (2) An Garda Siochana and (3) Hotline.ie,” and added people unhappy with a platform response should contact Coimisiún na Meán.
Child online safety organisation CyberSafeKids also pointed to Irish reporting routes, with Olwyn Beresford telling SIN that Coco’s Law “deals with intimate image abuse” and “does include deepfakes/AI generated images, as well as real images.”
She also said victims can report to the platform, report to gardaí, and contact Hotline.ie, and said the media regulator can be contacted where there is an insufficient platform response.
Labour TD Alan Kelly, chair of the Oireachtas Joint Committee on Arts, Media, Communications, Culture and Sport, told SIN he views large platforms as profit-driven operators that also carry publisher-like responsibilities.
“These are e-commerce companies,” Kelly said. “At the end of the day, this is all about making money.”
“But they’re also publishers,” he added.
He criticised X’s engagement with parliamentary scrutiny and said the company has not appeared before the committee.
“They will not come before our committee,” he said. “They won’t answer questions.”
The committee met again this week with Meta, Google and TikTok to discuss online platform regulation and online safety after X declined an invitation to attend.
The Irish Times reported that, when asked by Fianna Fáil TD Malcolm Byrne, all three firms said X should have appeared before the committee.
Kelly added that “they avoid scrutiny, so that’s not acceptable.”
He said EU enforcement should lead the response to Grok and other platform risks under the Digital Services Act.
“The European Commission are going to have to step up,” he told SIN.
He said he has introduced the Harassment, Harmful Communications and Related Offences (Amendment) Bill 2026, which he described as an update to Coco’s Law so that it covers “AI generated or computer generated images.”
He added the proposal would “make the platform liable when people behave in a way that isn’t appropriate because the platform has provided it.”
He also said it would “make an obligation that information is shared with the authorities when necessary.”
Kelly stated the aim is to make “the platform/e-commerce company/publisher accountable under the law and prosecutable.”
He argued enforcement should prioritise platform responsibility rather than placing the burden on investigating every individual incident first.
“The responsibility has to be on the e-commerce company,” he stated. “It has to be on the publisher.”
He also raised concerns about reliance on geo-blocking as a safeguard, noting the same tools may remain available in other jurisdictions: “Because it’s very easy technologically to get around the geocode blocking that goes on,” he told SIN.
The Ombudsman for Children said it was “extremely concerned” about the Grok issues and added: “While the sharing of these images is illegal, it is not clear that legislation makes it an offence to generate child sex abuse material using AI.”
Stanford policy fellow Riana Pfefferkorn told SIN that embedding an image tool into a major social platform removes “speedbumps” that previously existed between generating and distributing abusive content, and said X and Grok can become a “one-stop shop” for misuse.
She said the first practical step for victims is to preserve evidence: “Document everything,” including saving URLs and taking screenshots with timestamps, before pursuing takedown and reporting routes.
Smyth added she will hold an in-person follow-up meeting with X “in the near future” to seek “adequate and enduring protections” following the company’s claim that the clothing-removal capability has been disabled worldwide within Grok as integrated on X.