7.1 Messaging applications
Encryption is not invisibility
Modern messaging apps are full of promises: “end-to-end encryption”, “private by design”, “disappearing messages”. These are useful protections, but they are not the same as invisibility. A message can be encrypted and still reveal who spoke to whom, when, on what device, and for how long. That information can be enough to build a detailed picture of a person’s life, even when the actual words remain unreadable. Understanding what encryption does and does not hide is the difference between reasonable confidence and false comfort.
End-to-end encryption, explained without mystique
End-to-end encryption (often shortened to E2EE) means only the people in the conversation can read the message contents. The app provider, the mobile network, and anyone watching the connection should see only encrypted data. In practice, this usually works by each device creating long-term identity keys and short-lived session keys. When you send a message, your app uses the session key to encrypt it; the recipient’s app uses its matching key to decrypt it. Even if a company is compelled to hand over stored messages, an end-to-end encrypted service should only be able to provide unreadable ciphertext.
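The key idea above can be shown in a deliberately simplified sketch. This is a toy Diffie-Hellman exchange with a toy XOR cipher, written to illustrate the shape of the process only: the parameters are far too small for real use, and real apps use vetted constructions such as X25519 and AES. The point is that each side derives the same session key without ever sending it, and only that key can decrypt the message.

```python
import hashlib
import secrets

# Toy parameters for illustration only. Real protocols use vetted
# elliptic curves (e.g. X25519), not small integer groups like this.
P = 2**127 - 1   # a prime modulus
G = 3            # a public base

def dh_keypair():
    """Each device creates a private key and a shareable public key."""
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

def session_key(own_private, peer_public):
    """Both sides compute the same shared secret, then hash it into a key."""
    shared = pow(peer_public, own_private, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

def keystream_xor(key, data):
    """Toy stream cipher: XOR with a hash-derived keystream. NOT secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Alice and Bob exchange only their public keys over the network.
alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()

k_alice = session_key(alice_priv, bob_pub)
k_bob = session_key(bob_priv, alice_pub)
assert k_alice == k_bob   # same session key, never sent over the wire

ciphertext = keystream_xor(k_alice, b"See you at 7pm")
plaintext = keystream_xor(k_bob, ciphertext)   # decrypt with the matching key
```

Anyone watching the connection sees the two public keys and the ciphertext, but without a private key they cannot reconstruct the session key, which is why a compelled provider can only hand over unreadable data.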
In real life, this protects ordinary conversations from routine collection or accidental exposure. A journalist arranging an interview, a doctor following up with a patient, or a parent sharing sensitive details with a school can do so without the service provider reading the contents. That is a genuine benefit. The common misunderstanding is that E2EE hides everything else. It does not. It is a strong lock on the message contents, not a cloak around the existence of the conversation.
End-to-end encryption also depends on endpoint security. If one person’s phone is compromised, the attacker can read messages after they are decrypted on the device. This is not a failure of the encryption itself; it is a reality of where decryption must happen. The practical mitigation is boring but effective: keep devices updated, use device passcodes, avoid sideloaded apps you do not trust, and lock down backups.
Metadata exposure: the trail around the message
Metadata is information about communication rather than the content itself. It includes who you messaged, when, how often, the size of messages, the device used, and sometimes location indicators such as IP addresses. Even if a service cannot read your messages, it may still store and share metadata under lawful requests or internal policy. In the UK, telecoms and service providers can be required to retain certain communications data under specific legal regimes, notably the Investigatory Powers Act 2016. That does not automatically mean access is constant or casual, but it does mean metadata can be a significant exposure.
Consider a simple example. A worker in a small company messages a union representative in the evenings for a few weeks. The content is encrypted, but the metadata shows a pattern of contact. That pattern can be enough to infer the topic of discussion. Another example is domestic abuse: a survivor may move house and use a private chat app to arrange help. Even without message content, metadata can reveal ongoing contact with support services if the pattern is consistent.
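How little it takes to surface such a pattern can be shown with a short sketch. The log below is entirely hypothetical and contains only metadata, a timestamp and a contact name per message, with no content at all; a few lines of counting are enough to make the evening pattern stand out.

```python
from collections import Counter
from datetime import datetime

# Hypothetical retained metadata: (timestamp, contact) pairs only.
# No message content appears anywhere in this record.
log = [
    ("2024-03-04 19:12", "union_rep"),
    ("2024-03-05 20:03", "union_rep"),
    ("2024-03-06 19:47", "union_rep"),
    ("2024-03-07 21:15", "union_rep"),
    ("2024-03-08 09:30", "dentist"),
    ("2024-03-11 19:58", "union_rep"),
]

# Bucket each message by contact and whether it was sent in the evening.
pattern = Counter()
for stamp, contact in log:
    hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
    evening = 18 <= hour <= 23
    pattern[(contact, evening)] += 1

# The repeated evening contact dominates, even though not a single
# word of any message was ever read.
top = pattern.most_common(1)[0]
# top is (("union_rep", True), 5)
```

This is the core of traffic analysis: patterns of who, when, and how often can carry meaning on their own.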
Mitigations exist, but they involve trade-offs. Some services minimise stored metadata by design, and some offer settings that limit contact discovery or reduce logging. Where possible, prefer apps with published, audited policies around metadata retention. Using the same app on multiple devices can increase metadata spread, as each device creates its own connection trail. If you need to limit exposure, keep sensitive conversations on a single device and turn off optional features such as location sharing or read receipts. These do not make you invisible, but they reduce the amount of information produced.
Group chat risks: the weakest link is real
Group chats are practical for family updates, project work, and community organising. They are also riskier than one-to-one conversations because every participant becomes a potential point of failure. Even with end-to-end encryption, any member can copy text, forward screenshots, or leave their phone unlocked. The moment a message is decrypted, it becomes ordinary text on a device. There is no technical control that forces a human to keep it private.
Group membership itself is sensitive information. Many apps keep a record of who belongs to a group and when they joined. That can be revealing in a professional setting, a support group, or a local campaign. If a group grows, the likelihood of accidental exposure rises. A new member might not understand the context, or might be using an old handset with poor security. In the workplace, an employee might be subject to a device management policy that archives messages or backs them up to a company system.
Practical mitigation here is mostly social and procedural. Keep groups small where sensitivity is high, agree clear boundaries about forwarding or screenshots, and avoid posting details that would harm someone if shared. Many apps allow “admins only” settings for group membership changes; using them can prevent surprise additions. If you must share a file that is sensitive, consider whether it should be sent at all, or whether a more controlled channel is appropriate.
Disappearing messages and the lure of false safety
Disappearing messages can be helpful for reducing long-term storage. They limit what remains on a device if it is lost or accessed later. They do not stop someone taking a screenshot, photographing the screen, or copying the text elsewhere. They also do not necessarily erase server-side logs or backups. A message can vanish from the app while still existing on a recipient’s phone backup or a linked device that has not come online to receive the deletion command.
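The linked-device gap can be made concrete with a toy model. The `Device` class below is hypothetical, not any real app's design, but it captures the mechanism: deletion is an event delivered to each device, so a device that is offline when the timer fires simply keeps its copy until it reconnects.

```python
class Device:
    """Toy model of one linked device's local message store."""
    def __init__(self, name, online=True):
        self.name = name
        self.online = online
        self.store = {}      # message_id -> text
        self.pending = []    # deletion events not yet delivered

    def receive(self, msg_id, text):
        if self.online:
            self.store[msg_id] = text

    def receive_delete(self, msg_id):
        if self.online:
            self.store.pop(msg_id, None)
        else:
            self.pending.append(msg_id)  # applied only when back online

phone = Device("phone")
tablet = Device("tablet")

# Message delivered to both linked devices while they are online.
for d in (phone, tablet):
    d.receive("m1", "door code is 4321")

tablet.online = False   # tablet switched off before the timer fires

# Timer expires: the deletion event fans out to all linked devices.
for d in (phone, tablet):
    d.receive_delete("m1")

assert "m1" not in phone.store                     # gone from the live device
assert tablet.store["m1"] == "door code is 4321"   # still readable offline
```

The message has "disappeared" from the active app while remaining fully readable on the offline device, which is exactly the gap between the feature's name and its guarantee.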
In everyday use, disappearing messages are best seen as a convenience feature, not a security guarantee. For example, a landlord arranging access to a property may choose a short message timer so door codes do not sit in a chat history for months. That reduces casual exposure if a phone is borrowed or sold. It does not protect against intentional misuse. If the recipient wants to keep the code, they can.
Mitigation here is about matching the tool to the risk. Use disappearing messages to limit clutter and reduce accidental retention, but do not rely on them to protect you from a determined recipient or from device compromise. If a conversation is sensitive enough that retention is unsafe, the safest option may be not to send it at all, or to use a different channel with tighter operational control.
Putting the pieces together in daily life
For most people in the UK, the practical goal is not secrecy from everyone, but a sensible reduction of risk. End-to-end encryption is a strong baseline; it prevents mass reading of messages by intermediaries. The remaining risks sit in metadata, group dynamics, device security, and human behaviour. These are manageable when you acknowledge them.
A realistic approach might look like this: choose a reputable app with E2EE enabled by default, keep your phone updated, limit how many devices are logged in, and be cautious about what goes into large groups. Turn on disappearing messages for conversations that do not need to be archived, while accepting they are not magic. If your context changes — for example, a new job with stricter device monitoring, or a situation where someone else controls your phone plan — reconsider the level of exposure you can accept. Privacy in messaging is not a switch; it is a set of decisions made in context.