Report reveals Grok AI created 3 million sexualised images in less than two weeks

FP Tech Desk January 23, 2026, 11:53:28 IST

xAI and Grok logos are seen in this illustration taken, February 16, 2025. REUTERS

Elon Musk’s Grok AI image generator has been accused of fuelling the large-scale creation of sexualised and abusive content, including images that appear to depict children, in what researchers have described as a “disturbing escalation” of AI misuse.

According to a report by the Center for Countering Digital Hate (CCDH), Grok produced around three million sexualised images in under two weeks, including 23,000 that appeared to show minors.

The watchdog said the platform had “become an industrial-scale machine for the production of sexual abuse material” before restrictions were imposed earlier this month.

Tool misused to create explicit and abusive images

The CCDH’s findings come after growing outrage over Grok’s image-generation feature, which allowed users to upload photographs of real people, including strangers and celebrities, and digitally strip or manipulate them into sexually provocative poses.

Users could then share these AI-generated images on X, the social media platform owned by Musk.

The research, conducted between December 29, 2025, and January 8, 2026, indicates that the scale of misuse was far greater than initially thought.

Analysis by Peryton Intelligence, a digital forensics firm, found the feature peaked on January 2 with nearly 200,000 individual image-generation requests in a single day.

Public figures identified in the manipulated images include singers Selena Gomez, Taylor Swift, Billie Eilish, and Ariana Grande, as well as politicians such as Sweden’s deputy prime minister Ebba Busch and former US vice-president Kamala Harris.

The researchers also found instances where users uploaded selfies of young girls, including a “before school selfie” that Grok transformed into an image of the child in a bikini.

The CCDH said Grok was helping create sexualised images of children approximately every 41 seconds during the 11-day period. “This wasn’t just a few bad actors, this was systemic,” the group said. “AI tools like Grok are being weaponised to generate harmful, illegal, and traumatising content at scale.”

Global backlash and policy response

The scandal prompted swift condemnation from governments and advocacy groups worldwide. UK Prime Minister Keir Starmer called the situation “disgusting” and “shameful,” urging tighter regulation of generative AI technologies. Following international backlash, X restricted access to the Grok image generator to paid subscribers on January 9 and implemented additional safeguards soon after.

However, FirstPost discovered that even after the restrictions, Grok AI could still generate such explicit images.

Indonesia and Malaysia also announced nationwide bans on the tool amid growing concern about the spread of AI-generated sexual abuse material.

In response, X said it had removed Grok’s ability to edit pictures of real people to depict them in revealing clothing, even for premium subscribers, by January 14. “We remain committed to making X a safe platform for everyone,” the company said in a statement. “We continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”

The platform added that it removes “high-priority violative content” and reports accounts promoting child sexual exploitation to law enforcement authorities when necessary.

Rising alarm over AI misuse

The incident highlights a growing global concern over the use of generative AI tools to create and share deepfake pornography and child sexual abuse material. Experts warn that the pace and accessibility of such technologies have far outstripped existing laws and online safety mechanisms.

Researchers and campaigners are now calling for stronger accountability measures and transparency from AI developers, especially when tools have the potential to produce harmful or illegal imagery.

As the investigation continues, the Grok controversy stands as one of the most alarming examples yet of AI's potential for abuse, and a stark reminder of the urgent need for responsible development and regulation in the age of synthetic media.
