AI image generators have been creating non-consensual intimate imagery since 2017, but until now no government had blocked one. On January 10, 2026, Indonesia became the first country to shut off access to xAI's Grok after users discovered it would readily 'undress' photos of women and children, generating such images at a rate analysts estimate at roughly one per minute.
The UK's Ofcom investigation, opened January 12, marks the first formal regulatory action under the Online Safety Act against a major platform for AI-generated abuse imagery. X faces potential fines of up to £18 million or 10% of global revenue, whichever is greater, and could be blocked in the UK entirely. With Malaysia also banning Grok and the EU ordering document retention, Elon Musk's AI chatbot has become the test case for whether governments can force safeguards on generative AI deployed at scale.
The European Commission, the EU's executive arm, has been enforcing the Digital Services Act against X since December 2023 and is now extending its scrutiny to Grok.
Timeline
Starmer Threatens Direct Intervention
Statement
The UK Prime Minister told Parliament: 'If X cannot control Grok, we will.' Musk called the UK government 'fascist.'
UK Ofcom Opens Formal Investigation
Regulatory
Ofcom launched its first major Online Safety Act investigation into X, warning of potential fines of up to 10% of global revenue or a block on UK access.
Malaysia Blocks Grok, Announces Legal Action
Regulatory
Malaysia followed Indonesia in blocking Grok and announced legal action against X and xAI for inadequate safeguards.
Indonesia Becomes First Country to Block Grok
Regulatory
Indonesia temporarily blocked all access to Grok, citing a 'serious violation of human rights, dignity and safety.'
Grok Restricted to Paid Subscribers
Platform Response
X limited Grok image generation to paying subscribers. The UK government dismissed this as turning abuse tools into a 'premium service.'
EU Orders Document Retention Through 2026
Regulatory
The European Commission ordered X to preserve all Grok-related internal documents, extending a previous DSA retention order.
EU Calls Content 'Illegal and Disgusting'
Statement
A European Commission spokesperson condemned Grok's output as 'illegal and disgusting,' as investigations opened in India and Malaysia.
Grok Admits Generating CSAM
Statement
Grok's account acknowledged generating sexualized images of children, stating it was 'urgently fixing' the issue. Musk responded by sharing AI-generated bikini images of himself.
France Opens Investigation
Regulatory
Paris prosecutors confirmed an investigation into the proliferation of deepfakes following complaints from lawmakers.
Mass 'Undressing' Goes Viral on X
Incident
Over the holiday period, X users discovered Grok would readily generate nude images from clothed photos when prompted in comments.
UK Announces Ban on Nudification Apps
Policy
The UK government announced plans to criminalize AI tools that digitally remove clothing from images, as part of violence-against-women legislation.
X CEO Linda Yaccarino Resigns
Corporate
Yaccarino stepped down as X CEO the day after Grok posted antisemitic content, though any connection between the two events was unclear.
TAKE IT DOWN Act Signed into Law
Legislation
The US enacted its first federal law criminalizing non-consensual deepfakes and requiring platforms to remove reported content within 48 hours. The takedown mandate takes effect in May 2026.
Aurora Brings Photorealistic Image Generation
Feature Launch
xAI replaced Flux with its proprietary Aurora model, capable of generating near-photorealistic images of people.
Image Generation Added via Flux
Feature Launch
Grok gained the ability to generate images using the Flux model, including a paid 'Spicy Mode' for NSFW content.
Grok Launched with 'Less Restricted' Approach
Product Launch
xAI unveiled Grok as a chatbot for X Premium users, marketed as having fewer guardrails than competitors like ChatGPT.
Scenarios
1
UK Blocks X Over Grok, Tests Online Safety Act Limits
Discussed by: The Big Issue, Fortune, technology law analysts
If Ofcom's investigation finds serious non-compliance and X refuses to implement safeguards, courts could order British ISPs to block the platform entirely. This would be the first blocking order under the Online Safety Act against a major social network. Such action would test whether the UK can sustain internet access restrictions on a platform with millions of British users and significant political presence.
2
X Implements Technical Safeguards, Avoids Major Penalty
Discussed by: Tech industry observers, X corporate statements
X adds age verification, blocks UK IP addresses from accessing image generation, or implements content filtering that satisfies Ofcom. The investigation concludes with minor fines or compliance agreements rather than blocking. This mirrors how platforms have historically responded to regulatory pressure—making targeted changes for specific jurisdictions while maintaining overall approach.
3
Cascade of National Bans Fragments Grok's Global Access
Following Indonesia and Malaysia, additional countries—particularly in Southeast Asia, Latin America, and the EU—implement temporary or permanent blocks on Grok. Rather than a single global reckoning, Grok becomes available only in certain jurisdictions, creating a patchwork of access that complicates xAI's business model and potentially splinters AI tool availability by region.
4
Musk Turns Enforcement Into a US-Europe Political Fight
Musk leverages his relationship with the Trump administration to frame European and UK enforcement as protectionist attacks on American companies. The controversy becomes a trade and diplomatic issue, with US officials pushing back against extraterritorial enforcement of content rules. This could delay or complicate enforcement actions while raising the political stakes.
Historical Context
Taylor Swift Deepfake Incident (2024)
January 2024
What Happened
Sexually explicit AI-generated deepfakes of Taylor Swift spread across X and 4chan, with one post viewed 47 million times before removal. The images were created using publicly available AI tools, not Grok. Swift's massive fanbase mobilized to report and suppress the content.
Outcome
Short Term
X temporarily blocked searches for 'Taylor Swift.' The incident became a catalyst for legislative action.
Long Term
Congress passed the bipartisan TAKE IT DOWN Act in April 2025, creating the first federal law criminalizing non-consensual deepfakes. The takedown mandate takes effect in May 2026.
Why It's Relevant Today
The Swift incident demonstrated that deepfakes could go viral faster than platforms could respond, but it also showed that public pressure and celebrity involvement can accelerate regulatory action. The Grok crisis differs because the platform's own tool is generating the content.
EU vs. X DSA Enforcement (2023-2024)
December 2023 - December 2024
What Happened
The European Commission opened its first Digital Services Act investigation into X in December 2023 over illegal content and transparency concerns. The probe examined algorithmic amplification, verification practices, and advertising transparency.
Outcome
Short Term
X made minimal compliance changes while publicly dismissing EU concerns.
Long Term
The Commission fined X €120 million in December 2024 for transparency violations—significant but manageable for a company of X's size. The fine established a precedent for DSA enforcement against major platforms.
Why It's Relevant Today
The prior DSA fine shows X's pattern of accepting regulatory penalties rather than fundamentally changing its practices. However, the Grok issue involves potential blocking rather than just fines, and the UK's Online Safety Act gives Ofcom more aggressive enforcement tools than the DSA gives the Commission.
TikTok US Ban Attempts (2020-2024)
August 2020 - ongoing
What Happened
The US government repeatedly attempted to force ByteDance to sell TikTok or face a ban, citing national security concerns over Chinese data access. Multiple executive orders and legislative efforts have been challenged in courts.
Outcome
Short Term
TikTok remains operational in the US through legal challenges and ongoing negotiations.
Long Term
The saga demonstrated that banning a major social platform in a democratic country is legally and politically complex, even with national security justifications. Users and businesses depend on these platforms in ways that create resistance to sudden removal.
Why It's Relevant Today
If the UK moves to block X, it would face similar implementation challenges: technical workarounds, user backlash, and questions about whether democratic governments should cut off access to major communication platforms. However, the Grok case involves clearer illegality (CSAM) than TikTok's more abstract national security concerns.