What Does the UK’s Online Safety Crackdown Mean for Wikipedia?

By Shantanu Mukherjee | Published: Sept. 17, 2025 | Last Updated: Sept. 17, 2025

The U.K.'s Online Safety Act (OSA) has faced significant criticism for its broad categorisation framework that potentially captures platforms beyond its intended scope.

On 11 August 2025, the first legal challenge to the OSA's categorisation regulations was decided in Wikimedia Foundation v Secretary of State for Science, Innovation and Technology.

Wikimedia, the American non-profit behind Wikipedia, challenged its potential classification as a Category 1 service, arguing this would compromise contributor privacy and fundamentally disrupt its collaborative model through mandatory identity verification requirements.

The U.K. High Court of Justice, presided over by Mr Justice Johnson, largely dismissed the challenge. However, the court granted permission on one narrow ground: that the Secretary of State failed to properly consider user numbers and platform functionalities when drafting the regulations.

BACKGROUND

Wikimedia announced its challenge to the OSA's lawfulness on 8 May 2025, targeting the categorisation regulations that outline platform duties.

The Act creates three categories (Category 1, 2A and 2B) based on perceived risk, determined by user numbers and specific functionalities such as content resharing. Each category carries different compliance requirements.

The primary concern with Category 1 classification is the mandatory identity verification requirement, which could exclude users unwilling to verify their identities and restrict access and participation for UK users.

WIKIMEDIA'S CHALLENGE

Wikimedia's primary contention was that categorising the platform as a Category 1 service would be logically flawed. The foundation argued that the law was intended to regulate "large, profitable social media companies where anonymous content can go viral," and that including platforms such as Wikipedia would fundamentally disrupt their operations.

The organisation further argued that the regulation is incompatible with Articles 8, 10, and 11 of the European Convention on Human Rights (ECHR), as well as Article 14, because it fails to differentiate between distinct types of online providers.

According to lead counsel Phil Bradley-Schmieg, the user verification obligation, which would allow users to see only content from contributors whose identities have been verified, is incompatible with the platform's model: it would be impossible to isolate content created by verified users from that of anonymous contributors without rendering articles incoherent or unusable.

The foundation also pointed to Ofcom's research showing that among users who experienced online harm, 56% reported harm from social media, 9% from sites hosting user-posted videos, and 8% from webmail. Only 2% reported harm from the general "other" category, which includes their platform. Highlighting its global reach, the organisation noted that it offers content in over 300 languages, with millions of articles viewed an estimated 15 billion times per month worldwide. Compliance with such regulations would divert significant resources away from what it describes as "digital public goods."

The mandatory identity verification requirement could also jeopardise the privacy and safety of volunteer contributors, exposing them to risks such as data breaches, stalking, lawsuits, penalties, and imprisonment. The foundation stressed that such measures would threaten knowledge-sharing and undermine fundamental rights to privacy and free speech. This case demonstrates the types of matters a technology law firm often advises on, balancing innovation, compliance, and contributor rights.

THE DEFENDANT'S RESPONSE

The defendant submitted that the research Ofcom conducted to inform the categorisation regulations under the OSA was structured around four key themes:

1. Consistency, where Ofcom applied the same data sources and research for Category 1 conditions as for other category conditions;

2. Objectivity, wherein research was conducted in a "service-agnostic" manner, which means that data was analysed without linking it to any specific identified service, to maintain impartiality and neutrality in regulatory assessment;

3. Scope, aimed to use comprehensive and reliable data sources to cover a broad range of relevant factors;

4. Transparency, by publishing the data and additional research sources.

The defence argued that the regulations were designed to apply consistently and fairly across services based on objective criteria such as user numbers and functionality, aiming to protect users, especially children and vulnerable groups.

It stated that the claimants had not shown they were victims of a breach of the ECHR and therefore lacked sufficient standing to bring the claim, rendering it hypothetical. It also noted that it is not yet known whether the claimant will fall within Category 1 or be treated in the same way as social media companies.

THE COURT'S RULING

The court refused permission to pursue claims on the grounds of compatibility with Articles 8, 10, and 11 of the ECHR and breach of Article 14, as these grounds were considered to lack arguable merit.

However, it granted permission on the ground that the Secretary of State had failed to take into account, as required by paragraph 1(5) of Schedule 11 to the OSA, the likely impact of the number of users of the user-to-user part of the service, and of its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.

The court also rejected the argument that the classification framework was irrational. However, it emphasised that its decision does not authorise a regime that would significantly hinder Wikipedia's operations.

Should such a regime be introduced, it would need to be justified as proportionate to avoid breaching the right to freedom of expression. The court also confirmed that the categorisation decision itself falls within Ofcom's authority.

WHAT LIES AHEAD

The court’s partial dismissal leaves key issues unresolved. While it denied permission on human rights grounds, it allowed a challenge over whether the Secretary of State properly considered user numbers and platform functionalities when drafting the categorisation regulations.

Importantly, Ofcom has yet to decide which services will be Category 1, but such decisions will be subject to judicial review, including on human rights grounds. Mr Justice Johnson also noted that any decision substantially hindering Wikipedia’s operation would be unlawful under section 6 of the Human Rights Act 1998 read with Article 10 of the ECHR.

If Wikipedia is classified as Category 1 and burdened with strict requirements, those measures must pass a proportionality test, or Wikimedia could seek exemptions or challenge the regulations. This case also highlights how data protection law intersects with online platform regulation and user privacy, particularly when mandatory verification or personal data collection is involved.
