Elon Musk’s AI chatbot Grok has landed in fresh trouble after it generated images depicting minors in minimal clothing on his social media platform X, formerly known as Twitter. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualised images throughout the week in response to user prompts.
Screenshots of these images were shared by users on X. They showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents. “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said in a post on X in response to a user. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” xAI posted to the @Grok account on X, referring to child sexual abuse material. Many users on X have prompted Grok to generate sexualised, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent.
On Thursday, Musk reposted an AI-generated photograph of himself in a bikini and captioned it with cry-laughing emojis, in a nod to the trend. Grok’s recent output of sexualised images appeared to lack safety guardrails, allowing minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot.
‘No system is 100% foolproof’
In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, although it conceded that “no system is 100% foolproof”, adding that xAI was prioritising improvements and reviewing details shared by users. When asked for comment, xAI replied to The Guardian with the message: “Legacy Media Lies”.
The issue of AI being used to generate child sexual abuse material has been a longstanding concern in the artificial intelligence industry. A 2023 Stanford study found that a dataset used to train a number of popular AI image-generation tools contained more than 1,000 CSAM images.
Experts have noted that training AI on images of child abuse can allow models to generate new images of children being exploited. Grok also has a history of failing to maintain its safety guardrails and posting misinformation. In May last year, Grok began posting about the far-right conspiracy theory of “white genocide” in South Africa on posts with no relation to the topic.
In July, xAI apologised after Grok began posting rape fantasies and antisemitic material, including calling itself “MechaHitler” and praising Nazi ideology. The company nevertheless secured a nearly $200m contract with the US Department of Defence a week after the incidents.
With inputs from agencies.