16.1 What freedom of speech actually is
Legal protection, not entitlement
Freedom of speech is a legal protection against certain forms of interference, not a promise that anyone must provide you with an audience or a platform. In the UK, the most commonly cited protection is Article 10 of the European Convention on Human Rights, incorporated into UK law through the Human Rights Act 1998. It protects the right to hold opinions and to receive and impart information and ideas. But it also allows restrictions that are “necessary in a democratic society”, such as laws against incitement to violence, harassment, or the disclosure of certain confidential information.
This framing matters in everyday life. If a local council tries to cancel a peaceful public meeting because it dislikes the organiser’s views, that can be challenged as a legal issue. If a newspaper refuses to publish a letter to the editor, there is no legal entitlement to force them to do so. The right limits what the state can do; it does not compel private actors to host or amplify your speech.
A common misunderstanding is that freedom of speech means freedom from consequences. It does not. If you publish false claims that damage a neighbour’s reputation, you may face a defamation claim. If you disclose an employer’s confidential client list, you may be sued for breach of contract or breach of confidence. These are not “exceptions” that break the principle; they are the normal boundaries of how the law balances speech with other rights such as privacy, reputation, and safety.
Negative rights and positive rights
Legal rights are often divided into negative and positive rights. A negative right requires others to refrain from interfering. A positive right requires others to take action. Freedom of speech is primarily a negative right: the state should not censor or punish lawful expression. It does not, by default, oblige the state to provide you with a printing press, a broadcast licence, or a guaranteed place in a public debate.
There are areas where positive obligations exist, but they are narrower and often policy-driven rather than absolute. For example, public bodies might be required to act fairly and to consider a range of views in certain consultations. A publicly funded broadcaster may have duties around impartiality and access. These obligations create opportunities for speech, but they are not the same as an open-ended entitlement to be heard everywhere.
Understanding the difference helps make sense of disputes around access. If a university hosts a public lecture, it may have obligations around free expression on campus, but it can still set rules about safety, security, and scheduling. If a private conference rejects a speaker, the issue is usually contractual and reputational, not constitutional.
Speech versus reach
Being able to speak and being able to reach people are not the same thing. Speech refers to the act of expressing or publishing. Reach is about distribution: how far a message travels, who sees it, and how it is ranked or surfaced. In a digitally monitored world, reach is often shaped by technology as much as by law.
Consider two practical examples. A community group can publish a statement on its own website. That is speech. Whether it appears in search results, is shared on social platforms, or is recommended by an algorithm is reach. Similarly, a journalist can post a story on a personal blog without interference. The reach of that story may still be limited by factors like platform policies, advertising rules, or the design of recommendation systems.
It is easy to assume that limited reach is a form of censorship. Sometimes it is. Sometimes it is the by-product of unrelated design choices, such as filters that prioritise trusted sources during a public health emergency, or ranking systems that downrank spam and misleading content. The risk is that opaque or poorly designed systems can suppress legitimate speech without anyone intending it. This is not a purely technical problem; it is a governance problem.
Mitigations are mostly practical rather than legal. Use multiple channels rather than relying on a single platform. Keep copies of your work and publish on sites you control. Where possible, learn how moderation and ranking systems work so you can avoid unintentional triggers, such as repetitive posting patterns that look like spam. None of this guarantees reach, but it reduces the chance that your speech disappears due to avoidable technical or policy quirks.
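To see how an unintentional trigger might work, here is a minimal sketch of a hypothetical spam heuristic of the kind described above. The scoring rule and threshold are invented for illustration; real moderation systems are far more complex, but the failure mode is the same: a campaigner reposting one lawful message can look identical to a spam bot.

```python
from collections import Counter

def spam_score(posts):
    """Score a list of post texts: higher means more 'spam-like'.

    A naive heuristic, assumed for illustration: the fraction of
    posts that are exact duplicates of an earlier post.
    """
    if not posts:
        return 0.0
    counts = Counter(posts)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(posts)

# A campaigner sharing the same petition link ten times scores
# exactly like repetitive spam, even though the speech is lawful.
campaign = ["Sign our petition: example.org/petition"] * 10
varied = ["Meeting notes for Monday", "Photos from the march", "Thanks to volunteers"]

print(spam_score(campaign))  # 0.9 -- likely downranked
print(spam_score(varied))    # 0.0 -- unaffected
```

Varying the wording of repeated posts, or spacing them out, is the practical upshot: it changes nothing about the message’s legality, only how it appears to a pattern-matching filter.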
Courts versus platforms
Courts decide whether speech is lawful. Platforms decide whether it is allowed on their services. The two are related but not the same. A court might decide that a statement is legal, while a platform can still remove it for breaching its community rules. Conversely, a platform might allow content that is lawful but controversial, while users and advertisers pressure it to change course.
This distinction becomes clear in ordinary situations. A charity might share graphic images to raise awareness of abuse. A court would likely protect that as lawful expression, but a social media platform may restrict it because of its rules on violent or disturbing content. Or a local activist may be cleared of wrongdoing after a police investigation, yet find that a platform keeps a post removed because it was reported in large numbers and flagged as “misleading” during a heated news cycle.
The risk here is twofold. First, platform decisions can be inconsistent or poorly explained. Appeals processes often exist but may be slow or opaque, especially for smaller accounts. Second, legal protections are often hard to enforce against private companies, particularly when decisions are made automatically and at scale. You can complain, you can use formal reporting tools, and in some cases you can pursue legal remedies, but the process is rarely quick.
Practical mitigations focus on resilience. Keep an off-platform archive of your work. If you rely on a platform for income or community, diversify your presence and build direct contacts such as mailing lists or RSS feeds. When you publish sensitive material, document sources and context so that you can contest removals with evidence rather than argument. None of this removes the power imbalance, but it reduces the cost of a bad decision.
In the UK context, cultural expectations around civility and harm are often reflected in platform policies, even when the law itself is less restrictive. That means context matters: a comment that is lawful may still be removed if it is seen as harassing or as encouraging hostility, especially in local community spaces. Understanding those cultural norms is not about self-censorship; it is about knowing where friction is likely to appear and choosing the trade-offs that make sense for you.