What happens when you merge the world’s most toxic social media cesspool with the world’s most unhinged, uninhibited, and intentionally “spicy” AI chatbot?
It looks a lot like what we’re seeing on X right now. Users have been feeding images into xAI’s Grok chatbot, which features a powerful and largely uncensored image and video generator, to create explicit content, even of ordinary people. The proliferation of deepfake porn on the platform has become so extreme that in recent days, Grok has spit out roughly one non-consensual sexual image every minute. In recent weeks, thousands of users have joined the grotesque trend of using Grok to digitally strip mostly women and children (yes, children) without their consent, via a fairly obvious workaround.
To be clear, you can’t ask Grok, or most mainstream AIs, for nudes outright. But you can ask Grok to “undress” a picture someone posted on X, or, if that doesn’t work, ask it to put the person in a tiny, practically invisible bikini. The United States has laws against this type of abuse, and yet the xAI team has been almost… indifferent to it.
Questions from several journalists to the company about the matter were met with automated replies of “legacy media lies.” xAI CEO Elon Musk, who just successfully raised $20 billion in funding for the company, was sharing fake bikini photos of himself (content warning) until recently. On Friday morning, following widespread condemnation and threats from regulators, xAI moved Grok’s image-editing capabilities behind its paywall.
And while Musk warned on January 4 that users will “suffer consequences” if they use Grok to create “illegal images,” xAI has given no indication that it will remove or address the core features, paywalled at $8 a month or not, that allow users to create such explicit content, although some of the more incriminating posts have been removed. xAI had not responded to Vox’s request for comment as of Friday morning.
No one should be surprised here. It was only a matter of time before the toxic sludge that the website formerly known as Twitter has become combined with xAI’s Grok, which has been explicitly marketed for its NSFW capabilities, to create a new form of sexual violence. Musk’s company has essentially created a deepfake porn machine that makes producing realistic, abusive images of anyone as simple as typing a reply on X.
You may be wondering, as I think we all now find ourselves asking several times a day: How is all this legal? To be clear, it is not. But lawyers and legal experts say current laws still fall far short of the protections victims need, and the sheer volume of deepfakes being created on platforms like X makes the protections that exist very difficult to enforce.
“The prompts that are allowed or not allowed” using a chatbot like Grok “are the result of deliberate and intentional choices by the technology companies that are implementing the models,” said Sandi Johnson, senior legislative policy counsel for the Rape, Abuse & Incest National Network (RAINN).
“In any other context, when someone turns a blind eye to harm they are actively contributing to, they are held responsible,” she said. “Tech companies should not be held to a different standard.”
First, let’s talk about how we got here.
“Abusers using technology to commit sexual abuse is nothing new,” Johnson said. “They’ve been doing that forever.”
But AI has enabled a new type of sexual violence through the rise of deepfakes.
Deepfake pornography of female celebrities, created in their likeness but without their consent using more primitive artificial intelligence tools, has been circulating on the internet for years, long before ChatGPT became a household name.
But more recently, so-called “nudify” apps and websites have made it extremely easy for users, some of them teenagers, to turn innocuous photos of friends, classmates, and teachers into explicit deepfake content without the subject’s consent.
The situation has become so dire that last year, advocates like Johnson convinced Congress to pass the Take It Down Act, which criminalizes non-consensual deepfake pornography and requires companies to remove such materials from their platforms within 48 hours of being reported or potentially face fines and injunctions. The provision will go into effect this May.
For many victims, though, even if companies like X eventually take the content down, the damage has already been done.
“For these tech companies, it was always ‘break things and fix them later,’” Johnson said. “They need to take into account that as soon as a single [deepfake] image is generated, that is irreparable harm.”
X made deepfakes a feature
Most social media and major AI platforms have complied to the extent possible with emerging state and federal regulations around deepfake pornography and, in particular, child sexual abuse material.
Not only because such material is “flagrantly and radioactively illegal,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, “but also because it’s disgusting and most companies don’t want their brand associated with being a one-stop shop for it.”
But Musk’s xAI seems to be the exception.
Since the company introduced its “spicy mode” video generation capabilities, creating explicit deepfakes has never been easier.
Most nudification apps require users to first download a photo, perhaps from Instagram or Facebook, and then upload it to whatever platform they’re using. If they want to share the deepfake, they must download it from the app and send it through another messaging platform, such as Snapchat.
These multiple points of friction gave regulators some crucial opportunities to intercept non-consensual content, a sort of Swiss cheese-style defense system. Maybe they couldn’t stop everything, but they could ban some “nudification” apps from the app stores, and they’ve gotten Meta to crack down on ads promoting the apps to teens.
But on X, even with the new restrictions for non-premium users implemented on Friday morning, users can create deepfake content almost seamlessly, without ever having to leave the app.
“That would matter less if it were a social media community for nuns, but it’s a social media community for Nazis,” Pfefferkorn said, referring to X’s far-right turn in recent years. The result is a non-consensual deepfake crisis that appears to be spiraling out of control.
In recent days, users have created 84 times more sexualized deepfakes on X per hour than on the other top five deepfake sites combined, according to independent deepfake and social media researcher Genevieve Oh. And those images can be shared much more quickly and widely than anywhere else. “The emotional and reputational damage to the person depicted is now exponentially greater” than on other deepfake sites, said Wayne Unger, an assistant law professor specializing in emerging technology at Quinnipiac University, “because X has hundreds of millions of users who can view the image.”
It would be practically impossible for victims, or regulators, to track down and remove every image at that scale.
Will X be held responsible for any of this?
If the same type of criminal imagery appeared in a magazine or an online publication, the company behind it could be held liable, subject to hefty fines and possible criminal charges. But social media platforms like X are largely shielded by Section 230 of the Communications Decency Act, which protects them from liability for content their users post. That clause has been a pillar of free speech on the internet (a world in which platforms were held responsible for everything they host would be much more restricted), but Johnson says the clause has also become a “financial shield” for companies unwilling to moderate their platforms.
However, with the rise of AI, that shield may finally be starting to crack, Unger said. He believes that companies like xAI should not be covered by Section 230 because they are no longer mere hosts of illegal or hateful content but, through their own chatbots, essentially creators of it.
“X has made a design decision to allow Grok to generate sexually explicit images of adults and children,” he said. “The user may have asked Grok to generate it,” but the company “made the decision to release a product that can produce it in the first place.”
Unger doesn’t expect xAI, or industry groups like NetChoice, to back down without a legal fight against any attempts to further legislate content moderation or regulate easy-to-abuse tools like Grok. “Maybe they’ll give some ground,” since the laws governing [child sexual abuse material] are so strong, he said, but “at the very least, they’re going to argue that Grok should be able to do this with adults.”
In any case, the public outrage in response to the deepfake porn Grokpocalypse may finally force a reckoning on an issue that has long lurked in the shadows. Around the world, countries such as India, France, and Malaysia have launched investigations into the sexualized images flooding X. And Musk himself has posted on X that those who generate illegal content will face consequences. But the responsibility goes beyond the users themselves.
“This is not a computer doing this,” Johnson said. “These are deliberate decisions made by the people who run these companies, and they must be held accountable.”
Update, January 9, 12 pm ET: This article, originally published on January 9, has been updated to reflect the news that xAI has moved Grok’s image-generation capabilities behind its paywall.