The Algorithmic Conveyor Belt of Nightmare Fuel
How Grok, X, and “Spicy Mode” Turned Abuse Into a Product Feature
We’ve spent a lot of time lately living inside the ICE news cycle, as we should, because it’s real life and it’s ugly and it’s happening to people. But every so often, your brain needs a palate cleanser that’s still horrifying, just… in a different key. Enter: Europe’s latest hobby, which is sending strongly worded legal documents to Elon Musk about a chatbot that apparently shipped with a “make that person naked” vibe.
PBS NewsHour reports that the European Union has opened a formal investigation into X, after Musk’s AI chatbot Grok was used to generate and spread nonconsensual sexualized “deepfake” imagery, including content regulators say may rise to the level of child sexual abuse material. The EU isn’t just doing the classic “we’re concerned” throat-clear; it’s proceeding under the Digital Services Act (DSA), the bloc’s big regulatory broom for sweeping up the internet’s more enthusiastic messes. In other words: this is not a vibes review. This is paperwork with a spine.
And the mess, to be clear, is not subtle. The AP reports Grok’s image tools were used to “undress” people and produce sexualized depictions, triggering bans or warnings in some countries and a “global backlash.” The Center for Countering Digital Hate (CCDH) says that after a one-click image-editing feature took off, Grok generated an estimated 3 million sexualized images over an 11-day window, including an estimated 23,000 that appeared to depict children, based on sampling and extrapolation from posts. It’s the kind of statistic that makes you want to throw your phone into the ocean, then apologize to the ocean.
Now, because this is 2026, the plot twist is not that a generative AI system got used for abuse. The plot twist is that the company line is basically: Yes, yes, we hear you, and we’ve decided the solution is… a paywall. According to CCDH, the feature was restricted to paid users on January 9 in response to condemnation, with additional restrictions added afterward. AP notes that X said on January 14 it would stop allowing certain kinds of revealing depictions, but only in places where such content is illegal. Which is a fun little moral philosophy: “We don’t do harm… unless the local code says we technically can.” The ethics of a vending machine.
Meanwhile, the EU is also widening an already ongoing investigation into X’s recommendation systems after the platform said it would switch to Grok’s AI to help decide what posts users see. Because if your chatbot is both (1) generating nightmare fuel and (2) helping run the algorithmic conveyor belt that distributes the nightmare fuel, regulators tend to get… how do we say… European about it.
Let’s translate what “European about it” means in practice: the DSA expects big platforms (the officially designated “very large” ones) to assess systemic risks and mitigate them. And when EU officials say they’re investigating whether those risks “materialised,” they’re not complimenting your product-market fit. They’re asking whether you did the homework before you handed the class a flamethrower and called it “innovation.”
This is the part where Musk’s brand mythology usually marches onstage in a leather jacket: free speech, anti-woke guardrails, the thrilling romance of “edgy.” But deepfake sexual abuse is not edgy, it’s not a prank, it’s not even “content.” It’s coercion with better lighting, a way to humiliate, intimidate, and silence, disproportionately targeting women and girls. California Attorney General Rob Bonta put it plainly when announcing his own investigation: reports have been “shocking,” and his office is looking at whether and how xAI violated the law. The release also notes that xAI marketed a “spicy mode,” which (surprise!) became a selling point and “unsurprisingly resulted” in a proliferation of nonconsensual sexualized content. If you build a casino and act stunned that people gamble, you are either lying or auditioning for a very specific kind of incompetence.
The UK, too, is in the “absolutely not” phase. Ofcom opened a formal investigation into X under the Online Safety Act, specifically citing reports of the Grok account being used to create undressed images that may constitute intimate image abuse, and sexualized images of children that may amount to CSAM. Even after X said it implemented measures, Ofcom’s response was essentially: Cute. We’re still investigating.
Here’s what’s fascinating about this particular scandal, in the bleak, late-capitalist way that makes you laugh so you don’t scream: it’s not just “AI is dangerous.” It’s “AI is dangerous when it’s integrated into a platform whose incentives are already misaligned.” Grok didn’t stumble into a dark alley. It lives on X, a place that has spent years speedrunning the enshittification arc: moderation whiplash, trust-and-safety shrinkage, engagement-first everything. Add a high-powered image engine and a one-click editing feature, and you’ve basically created a harassment industrial press, then acted offended when someone notices the noise.
And it’s not as though regulators are inventing new categories of harm to be dramatic. Nonconsensual intimate imagery is already a recognized abuse pattern, now supercharged by automation and scale. The only genuinely novel thing here is the audacity of pretending that “scale” is a neutral technical detail, like the font choice on a menu. Scale is the harm. A single abuse incident is trauma; millions of automated incidents are a social system.
So, what happens next? The EU investigation will look at whether X complied with DSA obligations to contain risks tied to illegal content and threats to fundamental rights. X, for its part, points to “zero tolerance” policies for child sexual exploitation and nonconsensual nudity. Which is like a restaurant hanging a “ZERO TOLERANCE FOR FOOD POISONING” sign while the kitchen is actively on fire. Policies are not safeguards, and a PDF is not a seatbelt. And “we’ll only stop doing it where it’s illegal” is not, technically speaking, a plan; it’s a confession with extra steps.
The bigger question, the one hovering above the bureaucracy and the headlines, is whether the age of “move fast and break things” has finally met its match in “move slow and fine things.” Europe is betting that it can make platforms internalize the cost of the harms they’ve treated as externalities. Musk is betting that outrage is just another engagement channel and that the world will get bored and scroll on.
But here’s the inconvenient truth for Team Scroll On: victims don’t get to scroll past being victimized. You can’t “log off” a deepfake. You can’t reset your reputation with a software update. And you can’t pretend it’s an unfortunate side effect of innovation when the feature set, the marketing, and the business model all point in the same direction: more output, more virality, more “wow,” less friction, less responsibility.
We can (and will) go back to the ICE headlines. We should. But this story is part of the same ecosystem: what power looks like when it’s outsourced to systems that are optimized for speed, reach, and profit, and what it takes, legally and culturally, to drag that power back into the realm of accountability. Today it’s Brussels vs. Grok. Tomorrow it’ll be somewhere else vs. someone else’s “spicy mode.” And the only question is whether we keep letting the internet be run like a prank channel with a payments department.