Elon Musk’s X is rife with nonconsensual images of undressed women and children generated by his company xAI’s AI tool Grok. The disgusting trend has created outrage worldwide, with the platform coming under scrutiny in several countries.
Despite the backlash, X’s AI chatbot, Grok, continues to create deepfakes of women without their consent. Victims range from celebrities to ordinary women and even children.
Here’s what is happening.
Grok used to undress women
X users are instructing Grok to undress women and children using their existing pictures. The trend took off after Grok rolled out an “edit image” button in December, which lets users digitally alter photos by entering specific prompts.
Many soon misused the feature to create nonconsensual sexualised images of women and children in states of undress.
Grok, a free AI tool, responds to prompts when X users tag it in a post.
Users have asked the chatbot to undress women to show them in bikinis without their consent, as well as put them in sexual situations. Many prompts tagging Grok read “put her in a bikini” or “remove her clothes”.
The AI tool then replied with a version of the image below the post within minutes, allowing anyone to view the nonconsensual picture. Since the edit feature’s rollout, Grok has generated several sexualised pictures of real women and children stripped to their underwear.
Swedish Deputy Prime Minister Ebba Busch became a victim of the sickening phenomenon. According to Eliot Higgins, the founder of the investigative journalism group Bellingcat, users told Grok “bikini now” and “now put her in a confederate flag bikini”. Musk’s chatbot complied and provided those images.
According to researchers at AI Forensics, a Paris-based non-profit, at least a quarter of the 50,000 mentions of @Grok on X between December 25 and January 1 were requests for the tool to create an image. Terms such as “her”, “put”, “remove”, “bikini” and “clothing” appeared frequently in the image generation prompts, The Guardian reported.
The non-profit said that over half the images displayed people in “minimal attire” such as underwear or bikinis, the majority being women who appeared under 30 years of age. About two per cent of the images appeared to show people aged 18 or under, as per AI Forensics. Some images also depicted children under five years old.
A review by the content analysis firm Copyleaks found that Grok has recently been creating “roughly one nonconsensual sexualised image per minute”, reported Rolling Stone.
Outrage over Grok making nonconsensual images
A global uproar has erupted as nonconsensual sexualised images of women and children spread on X.
Ashley St Clair, the estranged mother of one of Musk’s children, complained that Grok had altered a photo of her taken when she was 14 to depict her in a bikini.
“I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” she said this week.
Hey @grok I do not consent to being undressed by you, having intimate content produced, or having my images altered in any way. Please remove this perverted content immediately and provide a post ID for impending legal filings. https://t.co/gWVT5WUG9C
— Ashley St. Clair (@stclairashley) January 5, 2026
The deepfakes included an image of a 12-year-old girl in swimwear.
Samantha Smith, a freelance journalist and commentator, told the BBC she felt “dehumanised and reduced into a sexual stereotype” after Grok was used to digitally remove her clothing.
“Women are not consenting to this,” she said. “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”
Amid the controversy, Musk’s chatbot is facing scrutiny in several countries, including India, Australia and Malaysia.
Last week, India’s Ministry of Electronics and Information Technology directed X to conduct a “comprehensive technical, procedural and governance-level review” of Grok. The company was given until January 5 to comply with the order.
The Malaysian Communications and Multimedia Commission said over the weekend that it is probing X and will summon company representatives.
“MCMC urges all platforms accessible in Malaysia to implement safeguards aligned with Malaysian laws and online safety standards, especially in relation to their AI-powered features, chatbots and image manipulation tools,” the regulator reportedly said in a statement.
Australia’s online safety watchdog said it is investigating Grok-made sexualised deepfake images. “Since late 2025, eSafety has received several reports relating to the use of Grok to generate sexualised images without consent,” an eSafety spokesperson said, as per The Guardian.
“Some reports relate to images of adults, which are assessed under our image-based abuse scheme, while others relate to potential child sexual exploitation material, which are assessed under our illegal and restricted content scheme.”
The watchdog said that, at this point, the deepfakes of children did not meet the criteria for child sexual exploitation material.
The X app offers users a “spicy mode” for explicit content.
The European Commission, the EU’s enforcement arm, said on Monday it was “seriously looking into this matter”, adding that it was “very aware” that X was offering a “spicy mode.”
“This is not spicy,” the European Union’s digital affairs spokesperson, Thomas Regnier, told the ABC. “This is illegal. This is appalling.”
The British technology secretary, Liz Kendall, said the deepfake images were “appalling and unacceptable in decent society” and that Musk’s platform needed to deal with it “urgently”.
The UK communications watchdog, Ofcom, announced on Monday that it had made “urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK”.
The regulator said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms.
French ministers reported X to prosecutors and regulators over the deepfake images, saying in a statement on Friday that the “sexual and sexist” content was “manifestly illegal.”
Speaking to the BBC, Clare McGlynn, a law professor at Durham University, said that X or Grok “could prevent these forms of abuse if they wanted to”, adding they “appear to enjoy impunity”.
“The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators,” she said.
xAI’s acceptable use policy bans “depicting likenesses of persons in a pornographic manner”.
Speaking to CNBC, the National Center on Sexual Exploitation (NCOSE) called on the US Department of Justice (DOJ) and the Federal Trade Commission to investigate the matter.
A DOJ spokesperson told CNBC in an emailed reply, “The Department of Justice takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM. We continue to explore ways to optimize enforcement in this space to protect children and hold accountable individuals who exploit technology to harm our most vulnerable.”
What is Elon Musk’s X doing?
Musk initially appeared to enjoy the trend of Grok producing deepfake images, responding on Friday with a crying-with-laughter emoji to a digitally altered picture of a toaster wearing a bikini.
He said: “Not sure why, but I couldn’t stop laughing at this one.”
After global backlash over the harmful content, he later warned that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
An X spokesperson said, “We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
An earlier statement from Grok said that it had “identified lapses in safeguards” and was “urgently fixing them”. However, as The Guardian noted, that statement was itself generated by artificial intelligence, so it remains unclear whether the company is actually working to fix the problem.
The latest controversy comes against the backdrop of Musk and X previously allowing users who posted child sexual exploitation material to remain on the platform.
Amid the row, Musk’s artificial intelligence company, xAI, has raised $20 billion in its latest funding round.
With inputs from agencies