Elon Musk’s X responds to UK AI abuse backlash

In UK News by Newsroom | 15-01-2026 - 8:54 PM

Elon Musk’s X has told the UK government it is complying with the law after public outrage over Grok’s use to manipulate images.

On Wednesday, Keir Starmer told the House of Commons that the Grok images were "disgusting" and "shameful", but said he had been assured that X was "acting to ensure full compliance with UK law."

“If so, that is welcome,”

the prime minister said.

“But we are not going to back down. They must act. We will take the necessary measures. We will strengthen existing laws and prepare for legislation if it needs to go further, and Ofcom will continue its independent investigation.”

Ofcom, the media regulator, opened its investigation into X on Monday, after numerous sexual images appeared on Musk's platform.

Government officials are reportedly in contact with X, and ministers are monitoring the effect of the steps the platform has taken. There is frustration that Grok does not appear to apply the safeguards other AI vendors have put in place to stop the creation of such images.

“We are keeping a close watch on the situation,”

Starmer said. He spoke as new polling showed 58% of Britons believe X should be banned in the UK if the platform doesn’t crack down on AI-generated, nonconsensual images. More in Common’s research also found 60% believe UK ministers should come off X, and 79% fear AI misuse is set to become a bigger problem.

In recent days X is said to have blocked the @grok account, which many users had been asking to partially undress celebrities and others, from producing images of real people in skimpy clothing.

The Online Safety Act prohibits the posting of nonconsensual intimate images, including those produced by asking an AI to place people in bikinis, underwear, or sexual positions.

The UK-based watchdog, the Internet Watch Foundation, reported last week that it had observed individuals boasting on a dark web forum about using the Grok app to produce topless and sexualized images of girls between the ages of 11 and 13.

Musk declared on Wednesday that he was "not aware of any naked underage images generated by Grok. Literally zero."

“Obviously, Grok does not spontaneously generate images, it does so only according to user requests,”

he wrote in an X post.

“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

The technology secretary, Liz Kendall, repeated her criticism of xAI, the company that owns Grok and X, for restricting Grok's image creation and editing features to paying users, calling the move "a further insult to victims, effectively monetizing this horrific crime."

In a letter to members of the Commons select committee for science, innovation, and technology, she said a broader ban on AI-enabled nudification tools "will apply to applications that have one despicable purpose only: to use generative AI to turn images of real people into fake nude pictures and videos without their permission".

However, the committee's chair, Chi Onwurah, has criticised the government's delay in bringing the ban into force, given that "reports of these disturbing Grok deepfakes appeared in August 2025".

What specific UK laws govern AI-generated sexual images?

The UK now criminalises the creation, sharing, or requesting of non-consensual AI-generated intimate images under the Data (Use and Access) Act (DUAA) 2025, accelerated into force this week amid the Grok controversy.

Creating or requesting AI-generated sexual images without consent carries a penalty of up to two years' imprisonment, closing a previous gap in which only distribution was prosecutable. Sharing or threatening to share such deepfakes (including depictions of people in underwear) violates the Online Safety Act 2023, with platforms like X facing fines of up to 10% of global revenue or service blocks for non-compliance.

AI-generated child sexual abuse material (CSAM) falls under the existing Protection of Children Act 1978 and Criminal Justice Act 1988, now strengthened by 2025 legislation empowering designated bodies (e.g., the Internet Watch Foundation) to test AI models for safeguards against synthetic CSAM.