None of this is accidental. Elon Musk has been positioning Grok as the “anti-woke” alternative to other chatbots since its launch. That positioning has consequences. When you market your AI as willing to do what others won’t, you’re telling users that the guardrails are negotiable. And when those guardrails fail, when your product starts generating child sexual abuse material, you’ve created a monster you can’t easily control.
Back in September, Business Insider reported that twelve current and former xAI workers said they regularly encountered sexually explicit material involving the sexual abuse of children while working on Grok. The National Center for Missing and Exploited Children told the outlet that xAI filed zero CSAM reports in 2024, despite the organization receiving 67,000 reports involving generative AI that year. Zero. From one of the largest AI companies in the world.
So what happened when Reuters reached out to xAI for comment on their chatbot generating sexualized images of children?
The company’s response was an auto-reply: “Legacy Media Lies.”
That’s it. That’s the corporate accountability we’re getting. A company whose product generated CSAM responded to press inquiries by dismissing journalists entirely. No statement from Musk. No explanation from xAI leadership. No human being willing to answer for what their product did.
And yet, if you read the headlines, you’d think someone was taking responsibility.
So weird how a generative image model is able to make CSAM images unless it was trained on CSAM material.
So weird.
Theoretically it can absolutely figure out how to do that without it being in the training data.
We know it's in the training data because of Google's filters, but theoretically it could have been generated without having anything to draw on, just due to how the thing works.
Combining concepts is one of the core functions of a generative image AI - fairly sure nobody trained them on videos of athletes made out of pasta either.
But they were trained both on images of naked people and on tons and tons of stock photos of children in swimwear and bikinis, so they know how to combine the two to create images of naked children.
This, 100%.
If I apologize to you, the apology isn't just the words themselves: it's the contract I make with you. It's a memorandum of understanding of how I fucked up and a promise not to do so again.
LLMs can write words, but they cannot understand their actions or make honest promises to modify their behavior. They cannot be accountable in any way. Blaming them is like an actual scapegoat: a blameless thing meant to have a debt of sin transferred to it before it's sacrificed. Except we're not even getting the sacrifice.
The company’s response was an auto-reply: “Legacy Media Lies.”
Funny, that seems to be the correct answer to the headline.