Warning: This story contains references to image-based abuse.
X (formerly Twitter) has become a site for the rapid spread of artificial intelligence-generated non-consensual sexual images (also known as deepfakes).
Using the platform’s own built-in generative AI chatbot, Grok, users can edit images they upload through simple voice or text prompts.
Various media outlets have reported that users are using Grok to create sexualised images of identifiable individuals.
These have been primarily of women, but also of children. The images are openly visible to users on X.
Users are modifying existing photos to depict individuals as unclothed or in degrading sexual scenarios, often in direct response to their posts on the platform.
Reports say the platform is currently generating one non-consensual sexualised deepfake image a minute.
These images are being shared in an attempt to harass, demean or silence individuals.
A former partner of X owner Elon Musk, Ashley St Clair, said she felt “horrified and violated” after Grok was used to create fake sexualised images of her, including images depicting her as a child.
Here’s where the law stands on the creation and sharing of these images – and what needs to be done.
Image-based abuse and the law
Creating or sharing non-consensual, AI-generated sexualised images is a form of image-based sexual abuse.
In Australia, sharing (or threatening to share) non-consensual sexualised images of adults, including AI-generated images, is a criminal offence under federal law and under most state and territory laws.
But outside of Victoria and New South Wales, it is not a criminal offence to create AI-generated, non-consensual sexual images of adults, or to use the tools to do so.
It is a criminal offence to create, share, access, possess, and solicit sexual images of children and adolescents. This includes fictional, cartoon, or AI-generated images.
The Australian government has plans underway to ban “nudify” apps, with the United Kingdom following suit.

However, Grok is a general-purpose tool rather than a purpose-built nudification app.
This places it outside the scope of current proposals targeting tools designed primarily for sexualisation.
Holding platforms accountable
Tech companies should be made responsible for detecting, preventing, and responding to image-based sexual abuse on their platforms.
They can ensure safer spaces by implementing effective safeguards to prevent the creation and circulation of abusive content, responding promptly to reports of abuse, and removing harmful content quickly when made aware of it.
X’s acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner” as well as “the sexualisation or exploitation of children”.
The platform’s adult content policy stipulates content must be “consensually produced and distributed”.
X has said it will suspend users who create non-consensual AI-generated sexual images. But post-hoc enforcement alone is not sufficient.
Platforms should prioritise safety-by-design approaches. This would include disabling system features that enable the creation of these images, rather than relying primarily on sanctions after harm has occurred.
In Australia, platforms can face takedown notices for image-based abuse and child sexual abuse material, as well as hefty civil penalties for failure to remove the content within specified timeframes.
However, it may be difficult to get platforms to comply.
What next?
Multiple countries have called on X to act, including by implementing mandatory safeguards and strengthening platform accountability.
Australia’s eSafety Commissioner Julie Inman Grant is reportedly investigating complaints about Grok’s sexualised deepfakes.
In Australia, AI chatbots and companions have been flagged for further regulation. They are included in the impending industry codes designed to protect users and regulate the tech industry.
Individuals who intentionally create non-consensual sexual deepfakes play a direct role in causing harm, and should be held accountable too.
Several jurisdictions in Australia and internationally are moving in this direction, criminalising not only the distribution but also the creation of these images.
This recognises that harm can occur even in the absence of widespread dissemination.
Individual-level criminalisation must be accompanied by proportionate enforcement, clear intent thresholds and safeguards against overreach, particularly in cases involving minors or lack of malicious intent.
Effective responses require a dual approach. There must be deterrence and accountability for deliberate creators of non-consensual sexual AI-generated images.
There must also be platform-level prevention that limits opportunities for abuse before harm occurs.
Some X users are suggesting individuals should not upload images of themselves to X.
This amounts to victim blaming and mirrors harmful rape culture narratives.
Anyone should be able to upload their content without being at risk of having their images doctored to create pornographic material.
It is hugely concerning how rapidly this behaviour has become widespread and normalised.
Such actions indicate a sense of entitlement, disrespect, and lack of regard for women and their bodies.
The technology is being used to further humiliate certain populations, for example by sexualising images of Muslim women wearing the hijab, headscarves or tudungs.
The scale of the Grok sexualised deepfakes incident also reveals a broader lack of empathy and understanding of, and disregard for, consent.
Prevention work is also needed.
If you or someone you know has been impacted
If you have been impacted by non-consensual images, there are services you can contact and resources available.
The Australian eSafety Commissioner currently provides advice on Grok and how to report harm. X also provides advice on how to report to X and how to remove your data.
If you need someone to talk to, you can contact:
- Lifeline — 13 11 14
- Beyond Blue — 1300 22 46 36
- Headspace — 1800 650 890
- 1800RESPECT — 1800 737 732
- Kids Helpline — 1800 551 800
- MensLine Australia — 1300 789 978
- QLife (for LGBTIQA+ people) — 1800 184 527
- 13YARN (for Aboriginal and Torres Strait Islander people) — 13 92 76
- Suicide Call Back Service — 1300 659 467
Giselle Woodley is a lecturer and research fellow in communications at Edith Cowan University.
Nicola Henry is professor, Australian Research Council future fellow, and deputy director, Social Equity Research Centre at RMIT University.
This article is republished from The Conversation under a Creative Commons license. It may have been edited for length or clarity. You can read the original here.