Elon Musk’s artificial intelligence venture, xAI, is facing global scrutiny after its Grok chatbot was found generating fake sexually explicit images of women, including minors. While the company insists it has imposed new restrictions to prevent such misuse, regulators and activists say the damage is already spreading fast across social media.
The controversy erupted after hyper-realistic, manipulated photos created using Grok began circulating widely on X (formerly Twitter). Many depicted women in microscopic bikinis, degrading poses or with fabricated injuries.
In some cases, the victims were underage, prompting outrage and government action from Europe to Asia.
As pressure mounted, xAI announced new curbs on Grok’s image-editing tools, saying it had introduced “technological measures” to block users from altering photos of real people in revealing outfits.
“This restriction applies to all users, including paid subscribers,” the company wrote in an X post.
But critics and users alike aren’t convinced.
Does X really impose stricter limitations on Grok?
The big question now is whether these restrictions actually work. Early testing suggests they don’t.
Despite xAI’s claim that its new safeguards prevent image abuse, Firstpost found Grok’s image tools still capable of editing photos of real people into revealing clothing. Even non-paying users could generate such images with the right wording.
For example, prompts like “make her wear a two-piece swimsuit” reportedly bypassed Grok’s filters without issue. It appears the system only blocks requests that directly suggest sexually explicit or illegal content, meaning users who choose more subtle phrasing can still manipulate images inappropriately.
To make matters worse, Grok’s image generator on X limits the number of free attempts, but the standalone Grok app doesn’t seem to have the same restrictions. This loophole means users can continue generating harmful or degrading visuals despite xAI’s new policies.
xAI insists that these issues are being addressed and that the changes reflect a “commitment to user safety.” But global regulators aren’t reassured.
Authorities in Europe, the UK, Malaysia and Indonesia have all demanded explanations or taken action against Grok. Both Malaysia and Indonesia have already banned the chatbot entirely over concerns it’s enabling the spread of manipulated and illegal material.
Complaints against X
The backlash hasn’t stopped at Grok. X itself is under growing fire for hosting the content and allegedly failing to moderate it effectively.
Women’s rights organisations and online safety advocates argue that the problem isn’t who can access the tools, but how easily they can still be abused.
A coalition of women’s groups and digital activists has written open letters urging Apple and Google to remove both X and Grok from their app stores, accusing them of facilitating the creation of illegal material and violating the platforms’ own terms of service.
Their argument is gaining traction. Regulators across multiple regions are investigating how the AI tool continues to churn out manipulated, non-consensual imagery, and why X isn’t doing more to stop it.
For now, Grok remains available to millions, its new restrictions seemingly porous at best. As governments clamp down and campaigners turn up the pressure, Musk’s AI project faces a question it can’t code its way out of: can Grok ever be trusted not to cross the line?