
Would A UK Under-16 Social Media Ban Actually Work?

Reading Time: 5 minutes
Posted: 18th December 2025
Susan Stein

This is analysis, not breaking news. Safeguarding minister Jess Phillips was pressed in a recent broadcast interview about whether the UK could follow Australia’s lead and restrict social media accounts for under-16s, as the government prepares to publish its long-awaited violence against women and girls strategy.

The exchange matters less for what it committed the government to, which was very little, and more for what it revealed about the unresolved question sitting underneath the headlines: not whether banning social media for teenagers sounds appealing, but whether it could realistically be made to work.


What You Need To Know

The government is under growing pressure to demonstrate that it can address online harms linked to misogyny, abuse, and sexualised behaviour among young people.

Ministers are openly looking at international approaches, particularly Australia’s under-16 model, but have not committed to introducing a similar ban in the UK.

Britain already has a sweeping online safety framework in force, built around duties on technology companies rather than blanket access bans. Any move toward a hard age cutoff would represent a significant shift in how UK online regulation works in practice.


Why This Is The Big Unanswered Question

The political appeal of an under-16 ban is obvious. Parents frustrated by what their children encounter online see it as a clear line.

Teachers raising concerns about rising misogyny in schools want action that feels decisive.

And ministers facing questions about cultural harms can point to a concrete lever rather than abstract commitments to education or long-term prevention.

But policy doesn’t live in the abstract. The UK already regulates online safety through a system designed to reduce harm, not to prohibit access outright.

That distinction matters, because the tools, enforcement model, and legal trade-offs involved in denying access to entire categories of services are very different from those used to manage risk within them.

The unanswered question is whether the UK is prepared to cross that line, and whether doing so would deliver the outcomes being promised.


What The Breaking News Didn’t Explain

The original exchange that sparked this debate was short and necessarily vague. What it could not do was unpack the mechanics that would determine whether a ban was meaningful or merely symbolic.

Those details are where most policy proposals succeed or fail.

The critical gaps that remain unresolved include:

  • how UK law would define which services count as “social media” and which would fall outside its scope

  • what kind of age checks would be required, and how privacy and data protection would be safeguarded

  • which regulator would be responsible for enforcement, and how quickly penalties could be applied

  • what compliance would look like in practice if teenagers use workarounds or shared accounts

  • how a ban would interact with existing online safety duties already placed on platforms



How UK Online Safety Law Works, And Where A Ban Would Break With It

The UK is not approaching this debate empty-handed. The Online Safety Act is now fully in force and gives regulators wide powers to require platforms to assess whether children are likely to access their services and to mitigate the risks that follow.

Companies are expected to design safety into their systems, not simply rely on age declarations that everyone knows are routinely ignored.

The regulatory logic behind the current framework is deliberate.

Rather than banning categories of services, the law focuses on outcomes: reducing exposure to harmful material, strengthening moderation, and holding companies accountable when they fail to protect children.

Age assurance already plays a role in this system, particularly in areas such as pornography and other high-risk content, where strong age checks are increasingly expected.

A platform-wide ban on under-16 accounts would mark a shift away from that model. It would require lawmakers to define social media in a way that remains workable as platforms evolve, and to decide how hard the state is willing to push on identity verification for teenagers.

Those decisions carry consequences. Stronger verification can improve compliance, but it also raises concerns about privacy, data retention, and exclusion, particularly for young people who rely on online spaces for social support or identity exploration.


What Independent Experts Typically Say About Issues Like This

Across digital policy, child protection research, and regulatory analysis, there is broad agreement on one point: age thresholds alone do not enforce themselves.

Analysts generally note that the effectiveness of an age-based restriction depends less on the number chosen and more on how the rule is enforced, how quickly regulators act, and whether penalties meaningfully change platform behaviour.

There is also a consistent view that placing obligations on companies is more practical than policing children directly.

Models that fine or sanction platforms for non-compliance are seen as more enforceable and politically sustainable than approaches that criminalise families or young users.

Australia’s system reflects that logic, with responsibility sitting squarely on services rather than individuals.

At the same time, experts often warn about substitution effects.

Restricting access to large, regulated platforms can push some users toward smaller or less moderated spaces.

That does not mean restrictions are pointless, but it does mean governments must think beyond the ban itself and anticipate where behaviour is likely to move next.


Where This Is Likely Heading

In the near term, the most likely UK response is not a sudden legislative ban but intensified use of the powers that already exist.

Regulators are still in the process of enforcing and refining online safety duties, particularly around child access and age assurance.

That work will shape how credible any future proposal for a blanket age ban appears.

For an under-16 ban to move from rhetorical question to live policy, ministers would need to do more than express interest.

They would need to publish a clear legal model, define the scope of affected services, and explain how enforcement would work in a way that avoids simply driving young users into riskier corners of the internet.

Australia’s experience will loom large in that decision.

If its approach delivers measurable improvements without sparking widespread backlash or compliance theatre, it will strengthen the case for replication. If it struggles, it may serve as a cautionary tale rather than a blueprint.

All of this is unfolding against the backdrop of a government strategy that frames online behaviour as part of a wider effort to prevent violence against women and girls. That framing raises the stakes.

It creates pressure for bold, visible interventions, even when the hardest work lies in regulation, education, and long-term cultural change rather than a single legislative switch.


FAQ

Can the UK legally ban under-16s from having social media accounts?
In principle, yes. In practice, it would almost certainly require new legislation or major amendments. Existing law focuses on safety duties and risk reduction, not blanket account prohibitions.

Is the UK already using age checks online?
Yes, particularly in higher-risk areas. Regulators increasingly expect strong age assurance where children could otherwise encounter harmful material.

Did the government commit to a ban in the recent exchange?
No. The minister was responding to pressure about whether the UK could follow Australia’s lead, not announcing a concrete legislative proposal.

How does Australia’s under-16 model work?
It places responsibility on platforms to take reasonable steps to prevent under-16s from holding accounts, with penalties aimed at companies rather than families or children.

Would a UK ban reduce misogyny and violence against women and girls?
That remains uncertain. It is plausible that limiting early exposure to harmful online dynamics could help, but outcomes would depend heavily on enforcement and whether behaviour shifts to less regulated spaces.

What is the most likely UK outcome in the next year?
Stronger enforcement of existing online safety rules is more likely than the rapid introduction of a new, standalone ban. A genuine ban proposal would require detailed legal and technical groundwork before it could move forward.


About the Author

Susan Stein
Susan Stein is a legal contributor at Lawyer Monthly, covering issues at the intersection of family law, consumer protection, employment rights, personal injury, immigration, and criminal defense. Since 2015, she has written extensively about how legal reforms and real-world cases shape everyday justice for individuals and families. Susan’s work focuses on making complex legal processes understandable, offering practical insights into rights, procedures, and emerging trends within U.S. and international law.