16.3 Platforms as speech gatekeepers
Much of contemporary public speech is mediated by a handful of large platforms. They are not courts, yet their decisions determine which ideas are visible, which communities can organise, and which livelihoods continue. The gatekeeping is rarely framed in constitutional terms; it arrives through terms of service, automated ranking systems, and uneven paths for appeal. Understanding how these mechanisms work does not require a legal background, but it does require careful attention to how the rules are written and applied in practice.
Terms of service as law
When you sign up for a social network, video site, marketplace, or app store, you agree to a contract: the terms of service and related policies. In the UK this is not a statute, but it functions like a private rulebook with real-world consequences. Platforms can remove posts, delete accounts, restrict features, or demonetise content based on those terms. This is not theoretical; a journalist can have years of work removed from a platform archive, a charity can lose its fundraising tools, and a small business can see its shop listing disappear overnight because a policy changed.
Terms often mix broad values with specific prohibitions. “Hate speech”, “misinformation”, and “harm” may be defined, yet the definitions are deliberately flexible so that they can cover new harms and close obvious loopholes. That flexibility is why the rules can be enforced quickly, but it is also why outcomes can feel inconsistent. Two posts with similar content can be treated differently because the platform weighs internal context that you cannot see. For example, a UK nurse discussing vaccine side effects in a professional group might be flagged under a misinformation rule if the system cannot distinguish her post from conspiracy content elsewhere.
A common misunderstanding is that a platform’s rules are identical to the law. They are not. Something may be lawful in the UK but still prohibited by a platform, and something that would be illegal in one country may remain visible in another because the platform applies local filtering. The result is a patchwork where the practical boundaries of speech are set by private policy, not by Parliament or the courts. This is not always sinister; platforms are often trying to keep services usable and safe. But it does mean that the ordinary expectations many people have about free expression do not transfer neatly to a privately run digital space.
Mitigations in this area tend to be about predictability and resilience rather than perfect protection. Reading the policies you rely on most is tedious but practical, especially for professionals whose income depends on a platform. Keeping a copy of your work offline, maintaining a presence on more than one platform, and using your own website or mailing list reduce the impact of sudden enforcement. None of these changes the power imbalance, but they reduce the damage when the rules move or are applied unexpectedly.
Algorithmic suppression
Gatekeeping is not only about what stays up. On most platforms, visibility is governed by ranking algorithms that decide what people see first, if at all. The term “algorithm” here simply means a set of rules and models for sorting content. Suppressing speech does not require deliberate censorship: a system tuned for engagement may quietly demote content that is less likely to keep people scrolling, even if it is accurate, lawful, and socially valuable.
Consider a teacher posting a detailed thread about GCSE revision methods. It is informative, but it does not provoke quick reactions, so it sinks below a flood of short clips. Or take a local councillor sharing a sober update about roadworks: it may receive few comments and therefore be shown to fewer people. In both cases, the platform has not banned the speech, but it has effectively reduced its reach. That is a form of gatekeeping, and it happens constantly.
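A toy sketch may make the dynamic concrete. The weights and scores below are invented and no real platform is this simple, but the shape of the problem is the same: if the only thing the ranker rewards is predicted engagement, accuracy and public value carry no weight at all.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float   # model's guess at quick reactions (likes, replies)
    predicted_dwell: float    # seconds a viewer is expected to stay on the post

def engagement_score(post: Post) -> float:
    # A purely engagement-tuned ranker: nothing here measures accuracy,
    # lawfulness, or public value, so those qualities cannot earn reach.
    return 0.9 * post.predicted_clicks + 0.1 * post.predicted_dwell

feed = [
    Post("Detailed GCSE revision thread", predicted_clicks=2.0, predicted_dwell=40.0),
    Post("Short reaction clip", predicted_clicks=9.0, predicted_dwell=12.0),
    Post("Sober roadworks update from the council", predicted_clicks=1.0, predicted_dwell=8.0),
]

# Highest score is shown first; the informative posts sink without being "banned".
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):5.1f}  {post.title}")
```

Run the sketch and the reaction clip tops the feed while the revision thread and the roadworks update trail behind; no rule was broken and nothing was removed, yet the reach of the useful posts has already been decided.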
Suppression can also be intentional but opaque. Platforms may reduce the distribution of content that comes close to breaching a policy without removing it outright. This is sometimes called “visibility filtering” or “downranking”. It is used for a range of reasons: to slow the spread of coordinated manipulation, to limit the amplification of graphic content while keeping it available for documentation, or to reduce the reach of harassment. The risk is that it is difficult to detect from the outside. Users may infer a “shadow ban” when their posts perform badly, even when the more likely cause is algorithmic indifference, timing, or competition for attention. The myth of universal, deliberate suppression distracts from the more mundane reality: most content is simply not prioritised.
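Downranking can be pictured as one extra step in the same kind of toy ranker: a post flagged as borderline keeps its place in the system but has its score scaled down before sorting. The penalty value and labels below are invented for illustration; the point is that nothing in this process produces a notice the author would ever see.

```python
# Toy illustration of visibility filtering: borderline content is kept,
# but its ranking score is scaled down, so reach shrinks without a removal.
BORDERLINE_PENALTY = 0.2  # invented value; real systems tune this per policy

posts = [
    {"title": "Graphic but newsworthy footage", "base_score": 8.0, "borderline": True},
    {"title": "Everyday holiday photos", "base_score": 5.0, "borderline": False},
]

def effective_score(post: dict) -> float:
    score = post["base_score"]
    if post["borderline"]:
        score *= BORDERLINE_PENALTY  # downranked, not deleted
    return score

# The borderline post still exists and is reachable by direct link,
# but it is shown to far fewer people in ranked feeds.
for post in sorted(posts, key=effective_score, reverse=True):
    print(f"{effective_score(post):4.1f}  {post['title']}")
```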
Mitigations here are limited because ranking systems are proprietary and constantly adjusted. You can, however, reduce dependence on a single feed by encouraging direct subscriptions, newsletters, or RSS where possible. When sharing sensitive or important information, redundancy helps: mirror the content elsewhere, or provide a stable, linkable source that can be shared even if the original post is downranked. For organisations, designing communications so that essential details sit in the first few lines or in the linked source improves resilience against partial visibility.
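As one concrete form of redundancy, a minimal RSS feed can be generated with the Python standard library alone, giving subscribers a route to your updates that no ranking system sits in front of. The feed title and the example.org URLs below are placeholders, and a real site would add more fields, but the structure is the whole of what RSS requires.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from email.utils import format_datetime

def build_rss(posts: list[dict]) -> str:
    """Build a minimal RSS 2.0 feed from a list of post dictionaries."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example organisation updates"
    ET.SubElement(channel, "link").text = "https://example.org/updates"
    ET.SubElement(channel, "description").text = "Updates mirrored off-platform"
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["link"]
        ET.SubElement(item, "pubDate").text = format_datetime(post["published"])
    return ET.tostring(rss, encoding="unicode")

print(build_rss([{
    "title": "Roadworks on the high street from Monday",
    "link": "https://example.org/updates/roadworks",
    "published": datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc),
}]))
```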
Appeal asymmetry
When enforcement goes wrong, the ability to appeal matters. In practice, appeal processes are often asymmetrical: a platform can take action instantly, but users face delays, automated replies, or no practical route to a human review. This is not always the result of malice; at the scale of millions of reports per day, manual review is expensive and slow. But the asymmetry shapes behaviour. Knowing that a takedown might be difficult to reverse, people self-censor or avoid controversial but lawful topics.
Appeal asymmetry is most visible in cases where accounts are tied to income or public service. A UK musician who relies on video revenue may lose months of earnings after a mistaken copyright claim. A community organiser can lose access to event tools during a local emergency because their account was automatically flagged. Even when appeals succeed, the damage is often already done. The risk is not only loss of speech but loss of time, attention, and trust.
There are partial mitigations. Maintaining a clear archive of your original content, timestamps, and licences makes appeals easier when intellectual property disputes arise. Using platform-provided verification or professional accounts can give access to better support channels, though this often favours larger organisations. For individuals, building relationships across platforms and keeping off-platform contact methods reduces the impact of sudden restrictions. None of these steps guarantees a fair outcome, and that is important to acknowledge. The trade-off is that platforms prioritise speed and scale over due process, and users must decide how much risk they are willing to accept when choosing where to speak and build a community.
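A minimal sketch of such an archive, assuming a simple local JSON Lines file: each entry records the text, a UTC timestamp, a licence note, and a content hash, which is the kind of material that makes a copyright or policy appeal easier to evidence. The filename and field names are illustrative, not any platform's format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("my_post_archive.jsonl")  # illustrative filename

def archive_post(platform: str, url: str, text: str,
                 licence: str = "All rights reserved") -> dict:
    """Append one post to a local, append-only archive with a hash and timestamp."""
    record = {
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "platform": platform,
        "url": url,
        "licence": licence,
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "text": text,
    }
    with ARCHIVE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

# Example: record a post before or immediately after publishing it.
archive_post(
    platform="example-video-site",
    url="https://example.com/watch/abc123",
    text="Original track 'Morning in Leeds', written and recorded by me in 2023.",
)
```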
In the UK, the Online Safety regime has increased attention on platform responsibilities, but the practical experience for users is still shaped by company policy and platform design. The limits are not only legal; they are technical and organisational. Understanding that distinction helps set realistic expectations and supports better choices about where and how to communicate.