The backlash against Grok is growing as two countries became the first to block the AI chatbot developed by Elon Musk's artificial intelligence company, xAI, after it was discovered creating sexualized images of women and children upon request. Indonesia and Malaysia implemented the blocks over the weekend in the wake of a disturbing post on New Year's Eve from the Grok AI account on Musk's X social media platform.
"Dear Community," began the Dec. 31 post. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I am sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."
The two young girls weren't an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of images of women and children.
Despite the Grok response's promise of intervention, the problem hasn't gone away. Just the opposite: Two weeks after that post, the number of images sexualized without consent has surged, along with calls for Elon Musk's companies to rein in the behavior and for governments to take action.
According to data from independent researcher Genevieve Oh cited by Bloomberg, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with a median of only 79 such images for the top five deepfake websites combined.
Grok's Dec. 31 post was in response to a user prompt that sought a contrite tone from the chatbot: "Write a heartfelt apology note that explains what happened to anyone lacking context." Chatbots work from a base of training material, but individual posts can be variable.
xAI didn't respond to requests for comment.
Edits now limited to subscribers
Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers.
Critics said that's not a legitimate response.
"I don't see this as a victory, because what we really needed was X to take the responsible steps of installing the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at Durham University in the UK, told the Washington Post.
What's stirring the outrage isn't just the volume of these images and the ease of generating them; the edits are also being done without the consent of the people in the photos.
These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI's Sora, Google's Nano Banana and xAI's Grok have put powerful creative tools within easy reach of everyone, and all that's needed to produce explicit, nonconsensual images is a simple text prompt.
Grok users can upload a photo, which doesn't have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.
Governments and advocacy groups have been speaking out about Grok's image edits. On Monday, UK internet regulator Ofcom said it has opened an investigation into X based on the reports that the AI chatbot is being used "to create and share undressed images of people, which may amount to intimate image abuse or pornography, and sexualised images of children that may amount to child sexual abuse material (CSAM)."
The European Commission has also said it was looking into the matter, as have authorities in France, Malaysia and India.
On Friday, US senators Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to "X's egregious conduct" and "Grok's sickening content generation."
In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images.
"Although these images are fake, the harm is incredibly real," Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, told CNET. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."
How Grok lets users get risqué images
Grok debuted in 2023 as Musk's more freewheeling alternative to ChatGPT, Gemini and other chatbots. That's resulted in disturbing news: in July, for instance, the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.
In December, xAI launched an image-editing feature that lets users request specific edits to a photo. That's what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."
Grok also has a video generator that includes a "spicy mode" opt-in option for adults 18 and above, which will show users not-safe-for-work content. Users must include the phrase "generate a spicy video of" in their prompt to activate the mode.
A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."
In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating options like image alteration to curb nonconsensual harm," but didn't say that the change would be made.
According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.
Conservative influencer and author Ashley St. Clair, mother to one of Musk's 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using photos from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it didn't.
"xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability simply because it's 'AI,'" Ben Winters, director of AI and data privacy for the nonprofit Consumer Federation of America, said in a statement last week. "AI is no different than any other product; the company has chosen to break the law and must be held accountable."
What the experts say
The source materials for these explicit, nonconsensual image edits, people's photos of themselves or their children, are all too easy for bad actors to access. But protecting yourself from such edits isn't as simple as never posting pictures, says Brigham, the researcher into sociotechnical harms.
"The unfortunate reality is that even if you don't post photos online, other public photos of you could theoretically be used in abuse," she said.
And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham said. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."
Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI features.
Ghosh says it's possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn't always work perfectly.
"The point I'm trying to make is that there are safeguards that are in place in other models," Ghosh told CNET.
He also notes that if users of ChatGPT or Gemini AI models use certain phrases, the chatbots will inform the user that they're barred from responding to those phrases.
"All this is to say, there's a way to very quickly shut this down," Ghosh said.