The ECtHR just shut down an attempt to force a clampdown on “disinformation” in UK elections, protecting the right to free speech.
Dr. Frederick Attenborough
Jul 26, 2025 - 3:35 PM
In a significant judgment for freedom of expression, the European Court of Human Rights has rejected an attempt by three former anti-Brexit MPs to weaponise human rights law and force greater government intervention against any form of speech deemed to fall under the vague, expansive, and dangerously subjective label of “disinformation” during UK elections.
In Bradshaw & Others v United Kingdom, the claimants – Ben Bradshaw (Labour), Caroline Lucas (Green), and Alyn Smith (SNP) – argued that the UK had violated Article 3 of Protocol No. 1 to the European Convention on Human Rights, which guarantees the right to free elections. They claimed the Government had failed to take adequate steps to protect democracy from Russian state interference, including during the 2016 EU referendum and the 2019 general election, and had refused to properly investigate credible allegations of foreign meddling.
At the heart of the case was the demand that the Court read into Article 3 a sweeping “positive obligation” for governments to proactively regulate and police information flows, including online. The UK’s alleged failure to legislate more aggressively, and to launch a formal inquiry into foreign influence, was framed as a human rights breach.
The Court rejected the claim unanimously.
While acknowledging the reality of foreign disinformation campaigns, it reaffirmed that it was up to the UK to determine how best to respond to such threats within its own democratic and legal traditions, and emphasised that not every policy failing amounts to a rights violation. Crucially, it warned against heavy-handed regulatory responses, noting: “There is a very fine line between addressing the dangers of disinformation and outright censorship.”
But it also went further. Any attempt to counter disinformation, the Court stressed, must be carefully balanced against the Article 10 right to freedom of expression. In particular, it warned that measures intended to suppress presumed falsehoods may themselves interfere with the right to receive information, and must be “calibrated carefully”, especially in the sensitive period before elections. Heavy-handed interventions, the Court cautioned, risk empowering states to interfere in electoral outcomes under the guise of protecting democratic integrity, and may even be abused by those “seeking to interfere in the outcome of their own elections”.
Emphasising the risks of regulatory overreach, the Court pointed to its recent judgment in Kobaliya and Others v Russia, which concerned laws forcing individuals and organisations to register as “foreign agents” – a label used by the Kremlin to discredit journalists and campaigners with foreign ties and, predictably enough, dissenting views. Perhaps unsurprisingly, these laws were found to “contribute to shrinking democratic space by creating an environment of suspicion and mistrust towards civil society actors and independent voices, thereby undermining the very foundations of a democracy”.
Through this careful reasoning, the Court shut down what was, in effect, an attempt to transform human rights law into a mandate for state censorship – a move that, if successful, could have set a precedent for regulating speech in the name of combating ‘disinformation’.
However, one of the judges, András Jakab, issued a separate concurring opinion. While he agreed with the outcome, his reasoning struck a very different – and rather worrying – tone. Instead of reinforcing the majority’s caution, he floated what amounts to an Ofcom-style regulatory wishlist, suggesting it could flow from a positive obligation under Article 10: identity verification for social media users, measures to make it harder for posts that do not come from “fact-checked media sources” to go viral, bans on microtargeted political ads, regulation of influencers, and even blockchain-backed news tracing that would, in practice, privilege officially approved sources and sideline anonymous or contrarian voices.
The problem, he argued, is the speed and persistence of online disinformation: “By the time a piece of disinformation is debunked, it can spread virally through entire social media platforms, and yet a novel piece of disinformation can be launched again. This then repeats itself in an endless loop, wave after wave.”
In making this argument, Jakab echoed Jonathan Swift’s observation in The Examiner (1710) that “falsehood flies, and the truth comes limping after it; so that when Men come to be undeceiv’d, it is too late”. The echo is telling. It reminds us that anxieties about ‘disinformation’ are nothing new, and that efforts to suppress it always founder on the same deeper problem: truth and falsity are rarely so easily disentangled, and the power to decide between them is itself fraught with danger.
As Jakab went on to concede, none of his quasi-regulatory proposals flows directly from the Convention; he argued only that states may nevertheless have a duty to consider such measures under Article 3 of Protocol No. 1 and Article 10.
The majority didn’t follow him. But his concurrence shows how easily the language of rights can be repurposed to justify an ever-expanding state role in regulating online speech.
Dr. Frederick Attenborough | Research Director of the Free Speech Union