12. Living under surveillance states

Image: a city skyline at night, illustrating life under pervasive surveillance.

Living in a persistent monitoring environment changes the texture of ordinary life. It does not always feel dramatic. The more common experience is a steady hum of observation: cameras at transport hubs, mobile networks logging connections, and routine checks at workplaces or housing blocks. The impact depends on context. A person’s profession, community standing, travel patterns, and even their social circle can all affect how visible they are to authorities and how likely they are to be scrutinised.

This section looks at the practical realities of surveillance states: the difference between passive and active monitoring, how informants and honeypots operate, the kinds of social engineering used by authorities, and ways people reduce risks in everyday life without falling into paranoia. The aim is to be accurate about what is possible and what is common, while staying grounded in how people actually live.

Persistent monitoring environments

Diagram: from sensors to action.

Persistent monitoring means observation that does not switch off when a specific investigation ends. It is embedded in infrastructure: CCTV coverage, automatic number plate recognition (ANPR), bulk collection of communications metadata, travel databases, and workplace access logs. These systems can be run by state agencies, or by private organisations required to share data on request. In the UK, for example, data retention and lawful access powers shape how communications and location records may be stored and obtained, even when there is no immediate suspicion against an individual.

One of the key realities is that visibility is uneven. A community organiser who regularly meets people in public spaces may be recorded more often than someone working from home. A person who travels across borders frequently will create more official records than someone who stays local. A routine stop at a train station can become part of a pattern that matters later, long after the day itself has been forgotten.

Passive versus active surveillance

Passive surveillance is data collection that happens by default. It includes the fact that mobile networks know which mast your phone is connected to, or that building entry systems record who used a keycard and when. It also includes large-scale data sets such as licence plate records and video archives. Passive systems do not, by themselves, imply a person is being targeted. They exist because they are efficient for administration, security, or commercial reasons, and because the data can be useful if a later inquiry arises.

Active surveillance begins when someone chooses to look closely: pulling a person’s records, tailing them in person, or tuning facial recognition systems to match a specific target. Active surveillance is more resource-intensive. It is used when authorities believe a case warrants it, or when automated systems flag something as unusual. A low-key example is when a border officer quietly reviews a traveller’s history because their itinerary resembles known patterns of fraud. A higher‑intensity example is the use of physical observation teams and device searches during an investigation.

A common misunderstanding is that passive surveillance means “no one is watching”. In practice, data can be queried later, sometimes long after the event. Another misconception is that active surveillance is always sophisticated. In reality, a lot of active monitoring is manual and basic: repeated police visits, pressure on employers for information, or routine checks on neighbours. The technical and the social often combine.
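The point that passive data "can be queried later" is worth making concrete. The sketch below uses a hypothetical keycard access log (all names and records are invented for illustration): nothing is watched at the moment of collection, but a simple query months later reconstructs a person's movements.

```python
from datetime import datetime

# Hypothetical keycard access log: collected passively, never reviewed
# at the time, but queryable long after the event.
access_log = [
    {"person": "A", "door": "main",    "time": datetime(2023, 3, 1, 8, 55)},
    {"person": "B", "door": "main",    "time": datetime(2023, 3, 1, 9, 2)},
    {"person": "A", "door": "archive", "time": datetime(2023, 3, 1, 17, 40)},
    {"person": "C", "door": "main",    "time": datetime(2023, 3, 2, 8, 47)},
]

def entries_for(log, person, start, end):
    """Return every record for one person inside a time window."""
    return [r for r in log
            if r["person"] == person and start <= r["time"] <= end]

# Months later, an inquiry asks: where was person A on 1 March?
hits = entries_for(access_log, "A",
                   datetime(2023, 3, 1), datetime(2023, 3, 2))
for r in hits:
    print(r["door"], r["time"].isoformat())
```

The data was never "surveillance" in the active sense; the query turns it into evidence of a pattern after the fact.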

Informants and honeypots

Informants are people who provide information to authorities, sometimes for payment, sometimes under pressure, and sometimes because they genuinely believe it is the right thing to do. This is not confined to criminal contexts. In a workplace or community group, an informant might simply report who attends meetings, who asks certain questions, or who appears to be organising others. The reliability of informants varies, and authorities are aware of this. As a result, reports are often treated as prompts for further inquiry rather than as definitive evidence.

Honeypots are environments or opportunities designed to attract people for monitoring. In digital security, the term often refers to systems set up to observe attacks. In social and political contexts, a honeypot can be a website, a community group, or a seemingly “safe” channel that is in fact monitored. These do not need to be highly technical. A local meeting advertised with vague promises of protection might be enough to draw people who are then identified and assessed.
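In the digital-security sense mentioned above, a honeypot can be genuinely simple. The following is a minimal sketch (not a production tool): a TCP listener that offers no real service and exists only to record who connects. The local self-connection at the end stands in for an outside probe.

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Minimal connection-logging honeypot: accept connections,
    record the source address, and provide no actual service."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr)        # record source IP and port
            conn.close()            # drop the connection immediately
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, log, t

# Demonstrate with a local connection standing in for a probe.
port, log, t = run_honeypot()
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
t.join(timeout=5)
print("logged attempts:", log)
```

The social analogue works the same way: the "service" on offer is bait, and the real product is the list of who showed up.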

Recognising these dynamics is not about treating everyone with suspicion. It is about understanding the incentives. A person facing legal trouble might inform to reduce their own risk. A new contact who quickly pushes for sensitive information or asks for attendance lists is a warning sign. A group that discourages basic safety practices — such as meeting in public or keeping records minimal — might be seeking to isolate participants for easier monitoring.

Social engineering by authorities

Social engineering is the use of human interaction to extract information or influence behaviour. Authorities use it because it works. It can be as simple as a friendly conversation with a neighbour to build a picture of someone’s habits, or a casual question at work framed as a routine check. In some cases, it involves impersonation or the strategic use of official language to gain compliance.

Examples in everyday life include a landlord being asked to confirm a tenant’s visitors, or a university administrator being approached for records under a legal request they do not fully understand. Another common tactic is to ask a person to provide information “just to clarify a few points”. Even when the request is lawful, the framing can make it sound informal and harmless, encouraging people to offer more than they need to.

A practical mitigation is to have a clear personal boundary about what information you share and with whom. That does not mean refusing all requests; it means understanding the difference between what you know, what you are required to say, and what you choose to disclose. In the UK, certain authorities have powers to request data, but they still operate within legal boundaries and often need specific conditions to be met. When unsure, it is reasonable to say you will respond later, or to ask for the request in writing.

Plausible deniability in daily life

Plausible deniability is often misunderstood. It does not mean doing something and hoping you can talk your way out of it. In practice, it means structuring your activities so that normal, legitimate explanations exist and so that sensitive intent is not obvious. This is less about deception and more about careful separation of roles and data.

For example, a person who does volunteer advocacy might keep that activity on a separate device from their paid work, and avoid mixing accounts or calendars. If asked about a meeting, they can state the ordinary purpose without needing to reveal private communications. In a more everyday setting, someone who values privacy might use cash for small purchases or avoid linking a loyalty card to every transaction. These are normal choices that reduce how much of a personal profile is created.

The limitation is that plausible deniability does not help if evidence is direct and specific. It also does not protect against broad inference: if you are regularly present at a location linked to a group, that pattern may be meaningful regardless of your stated reasons. The practical aim is to reduce unnecessary linkage, not to believe that you can erase all traces of your life.
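The "broad inference" limit can be illustrated with a few lines of code. Using invented sighting records (the kind of data ANPR or CCTV review might yield), repeated presence alone is enough to associate a person with a location, whatever explanation they give for any single visit.

```python
from collections import Counter

# Hypothetical sighting records (person, location), e.g. from plate
# reads or CCTV review. All names are invented for illustration.
sightings = [
    ("A", "community_hall"), ("B", "station"), ("A", "community_hall"),
    ("C", "community_hall"), ("A", "community_hall"), ("B", "market"),
]

def presence_counts(records, location):
    """Count how often each person appears at one location."""
    return Counter(person for person, loc in records if loc == location)

counts = presence_counts(sightings, "community_hall")
# Anyone seen repeatedly at the hall is linkable to it regardless of
# the stated reason for any individual visit.
frequent = [person for person, n in counts.items() if n >= 2]
print(frequent)
```

No single record proves anything; the pattern is what carries weight, which is exactly why plausible deniability cannot erase it.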

Low‑tech safety

Low‑tech safety measures are often underestimated. They are also usually the most sustainable, because they rely on habit rather than complex tools. Simple behaviours can reduce exposure without isolating you from normal life. Examples include choosing public locations for meetings where you are comfortable being seen, or arranging sensitive conversations on walks rather than in a room with obvious recording devices.

In a monitored environment, control over context matters more than exotic technology. If you must discuss something private, choose a place with normal background noise rather than a silent corner that draws attention. If you are concerned about being followed, vary your routine slightly, not dramatically; huge changes attract attention, while modest variation blends in. If you are worried about being asked questions by people who do not need the answers, keep a simple, consistent explanation of your activities that you can repeat without stress.

Low‑tech does not mean reckless. Paper records can be safer than digital ones in some circumstances, but they can also be lost or seized. Face‑to‑face communication avoids many digital traces, but it creates opportunities for observation in physical space. The trade‑off is always between convenience, risk, and the likelihood of being noticed. People who live under sustained monitoring often find that routine, modest precautions are more effective than dramatic gestures.

Everyday trade‑offs and limits

It is useful to be clear about what can and cannot be controlled. You cannot avoid appearing on CCTV in most urban areas. You cannot prevent your phone from creating network records if it is switched on. You can, however, choose which device you carry, when you carry it, and how much personal information you tie to it. You can choose when to meet people and where, how much you say by text, and how much you keep in person.

Another limitation is that reducing visibility can itself be a signal. Constantly switching devices, refusing to use standard services, or travelling in elaborate patterns can look unusual. A balanced approach usually blends the ordinary with the cautious: enough normality to avoid standing out, and enough care to avoid making your life transparent.

For those with limited technical confidence, it is still possible to make meaningful choices. Use straightforward devices with minimal apps. Keep personal and public identities distinct where possible, such as separate email accounts for different roles. Ask simple questions before sharing: who needs this, how long will they keep it, and what else could it reveal? These habits are not foolproof, but they often reduce risk in the ways that matter most.