
On 12 January 2026, the UK regulator Ofcom launched a formal enforcement investigation into X over the outputs of its AI chatbot, Grok. The move represents the most aggressive enforcement posture to date under the Online Safety Act 2023 (OSA) against a generative AI system.
Prime Minister Keir Starmer publicly described the outputs at issue as “disgusting and unlawful,” signalling that AI safety enforcement has shifted from regulatory guidance to a declared political priority.
For non-lawyer CEOs and board-adjacent decision-makers, this is not a technical dispute about content moderation. It is a binary commercial risk event.
Where an AI system integrated into an organisation’s workflow generates prohibited content, the OSA’s duty-of-care framework now leaves little room for proportionality arguments. In practice, the enforcement posture functions as near-strict liability, particularly where safeguards are found to be ineffective or inadequately governed.
This risk profile has been further sharpened by the government’s accelerated implementation of the Data (Use and Access) Act 2025. The statute introduces explicit criminal exposure for the creation of AI-generated intimate images and, in certain contexts, for their solicitation.
What was previously characterised as a software failure or model misalignment now carries the potential to establish a direct criminal nexus for the enterprise — and, critically, for those responsible for oversight.
The exposure extends well beyond consumer social media platforms. Any organisation using generative AI for customer interaction, internal decision-making, or content production now falls within the operational reach of Ofcom’s Business Disruption Measures.
These powers allow the regulator to bypass traditional fine-led enforcement and seek court-ordered interventions compelling payment processors, advertisers, and infrastructure partners to withdraw services from non-compliant platforms.
Where an AI system lacks demonstrable “safety-by-design,” regulatory scrutiny is no longer hypothetical. Under the current enforcement posture, the absence of such controls is itself a signal that prioritises the organisation for investigation.
The central liability shift exposed by the Grok investigation is structural. Responsibility has moved decisively away from end-users and toward infrastructure and model providers. Under Section 121 of the OSA, Ofcom may issue Technology Notices requiring the deployment of accredited tools to detect and remove illegal content generated by AI systems.
Failure to comply does not merely invite discretionary penalties. It gives the regulator a statutory basis to escalate enforcement rapidly. Financial exposure is no longer capped at manageable civil fines but can reach 10% of qualifying worldwide revenue. For most technology companies, a penalty on that scale exceeds a full year’s operating profit, converting compliance from a technical function into a core fiduciary obligation.
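To make the scale concrete, the sketch below uses purely hypothetical figures (they are not drawn from the investigation or from any real company) to show how a penalty ceiling tied to worldwide revenue can exceed a full year’s operating profit for a business running single-digit margins.

```python
# Illustrative only: hypothetical revenue and margin figures, not taken from any real company.
annual_revenue = 500_000_000        # £500m qualifying worldwide revenue (assumed)
operating_margin = 0.08             # 8% operating margin (assumed)

max_penalty = 0.10 * annual_revenue              # OSA ceiling: up to 10% of qualifying worldwide revenue
operating_profit = operating_margin * annual_revenue

print(f"Maximum penalty:   £{max_penalty:,.0f}")         # £50,000,000
print(f"Operating profit:  £{operating_profit:,.0f}")    # £40,000,000
print(f"Penalty exceeds a year's profit: {max_penalty > operating_profit}")  # True
```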
In effect, AI platforms are increasingly treated as the legal creators of the outputs their systems generate. This framing collapses the traditional defence that harmful content is merely user-initiated or incidental to platform function.
The accountability shift also reaches up the supply chain. The Crime and Policing Bill 2026, as currently drafted, introduces offences targeting companies that supply AI tools designed or readily adaptable for intimate image abuse. The burden is placed squarely on developers and deployers to demonstrate that safeguards are not only present, but operationally effective.
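What “operationally effective” looks like will vary by deployment, but the minimal sketch below illustrates the general shape regulators are pointing at: every output is screened before release, and every screening decision leaves an auditable trace. The names used here (`generate`, `looks_prohibited`) and the keyword check are placeholders for illustration; a real deployment would rely on an accredited detection tool rather than anything this crude.

```python
# A minimal sketch of an "operationally effective" safeguard: every output is screened
# before release and every decision is logged. All names here are placeholders; a real
# deployment would call an accredited detection tool, not a keyword check.
import datetime
import json
import uuid

def looks_prohibited(text: str) -> bool:
    """Placeholder detector, standing in for an accredited classification tool."""
    markers = ["<illustrative prohibited marker>"]   # hypothetical
    return any(marker in text.lower() for marker in markers)

def release_with_safeguards(prompt: str, generate) -> str:
    """Generate a response, screen it, and append an auditable record of the decision."""
    output = generate(prompt)                        # `generate` is any callable model interface
    blocked = looks_prohibited(output)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "blocked": blocked,
    }
    with open("safety_audit.jsonl", "a") as log:     # append-only decision trail
        log.write(json.dumps(record) + "\n")
    return "[withheld by safety filter]" if blocked else output
```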
The financial consequences of non-compliance are existential rather than punitive. Ofcom’s enforcement toolkit now includes powers to disrupt revenue flows directly by targeting the commercial dependencies of a platform.
| Former Status Quo | Trigger Event | Immediate Reality |
|---|---|---|
| Safe-harbour assumptions for user-generated content | Formal Ofcom probe into AI-generated intimate imagery | AI providers treated as legally responsible for system outputs |
| Predictable civil penalties | Explicit political mandate to assert regulatory control | Revenue exposure scaled to global turnover |
| “Best-effort” safety filters | Criminal exposure under the Data Act framework | Mandatory, auditable safeguards or market exclusion |
A defining feature of the 2026 enforcement landscape is the personalisation of corporate failure. Section 103 of the OSA requires regulated services to designate a named Senior Manager responsible for safety compliance. Where offences occur and oversight failures are identified, Section 109 provides a basis for individual criminal exposure.
For General Counsel, Chief Risk Officers, and compliance leaders, this represents a fundamental shift. Ofcom’s investigations now routinely examine the internal governance structures and sign-off processes surrounding AI deployment. In the Grok matter, scrutiny has extended to the specific safety oversight protocols approved by senior leadership.
The practical effect is the erosion of the corporate veil in the AI context. Personal indemnity and D&O coverage increasingly hinge on the demonstrable adequacy of AI governance frameworks.
Ofcom’s Information Notice powers under Section 102 of the OSA introduce a further layer of exposure. These notices compel disclosure of training data provenance, model behaviour logs, and prompt-response records. Failure to comply — or the provision of misleading information — constitutes a criminal offence.
For many organisations, this creates a transparency paradox. Compliance may require the disclosure of proprietary IP or trade secrets, yet claims of commercial confidentiality carry limited weight where Priority Illegal Content is alleged. The 2026 enforcement posture makes clear that regulatory access will override most internal secrecy assumptions.
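For organisations preparing for that reality, the hedged sketch below shows one way the compelled material (prompt-response records tied to a model version and a provenance reference) might be retained in a structured, reviewable form from the outset. The record fields and helper names are assumptions for illustration, not a statutory schema or an Ofcom requirement.

```python
# A hedged sketch of structured record-keeping for the material an Information Notice
# can reach. Field names are assumptions for illustration, not a statutory schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class PromptResponseRecord:
    record_id: str
    timestamp_utc: str
    model_version: str            # ties the output to a specific deployed model build
    training_data_manifest: str   # reference to the provenance manifest in force at the time
    prompt: str
    response: str
    safety_decision: str          # e.g. "released", "withheld", "escalated"

def export_records(records: list[PromptResponseRecord], path: str) -> None:
    """Serialise retained records in a line-delimited form a reviewer can work through."""
    with open(path, "w") as out:
        for record in records:
            out.write(json.dumps(asdict(record)) + "\n")
```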
The Grok investigation marks the end of public-facing AI experimentation under a “beta” mindset. Guidance has been replaced with enforcement, and discretion with compulsion. For CEOs, the strategic priority must shift from AI efficiency to AI sovereignty. Deploying third-party models without verified safety logs now imports regulatory risk you do not control.
For General Counsel and boards, the precedent confirms that Section 121 of the OSA will be used to test the financial viability of platforms that fail safety audits. The obligation to designate a Senior Manager is no longer administrative. Failure to do so — or failure to ensure that role is meaningfully empowered — constitutes a governance failure with criminal implications under the Data Act framework.
In 2026, speed is no longer an advantage. Every automated interaction is a potential liability event. Boards that have not revisited AI indemnity provisions, insurance exclusions, and oversight protocols within the last 30 days are operating without a safety net.
The era of AI self-regulation is over. What replaces it is not ethical aspiration, but enforceable accountability — and the costs of misjudging that shift will be borne personally, institutionally, and immediately.





