Why We Accept Facial Scanning Like It's Normal

11 min read
surveillance · privacy · normalization · consent

Airport gates now scan your face before you board. Retail stores run behavioral analytics on how long you linger in an aisle. Concert venues cross-reference attendees against law enforcement databases. Smart cities log your movement through intersections.

Nobody signed up for any of this. Almost nobody is opting out.

That's not apathy. It's something more deliberate — a slow, engineered normalization that took decades to build and now runs so deep most people can't feel it anymore.

The Normalization Gradient

The surveillance state didn't arrive all at once. It crept in through a gradient — each step small enough to feel reasonable, each one making the next step easier to accept.

It started at the airport. TSA PreCheck made the pitch explicit: give us your biometrics, get through security faster. Millions enrolled voluntarily. As of spring 2026, facial recognition is active at 65 U.S. airports. The TSA Administrator has said the quiet part out loud: "Eventually we will require biometrics across the board." The voluntary phase was just the setup.

From airports it moved to retail. Stores aren't just watching you on camera — they're running those feeds through analytics platforms that track where you look, how long you pause, whether you match a shoplifter profile from another location across the country. No sign tells you this is happening. No opt-out mechanism exists.

Then workplaces. Remote monitoring software became normalized during the pandemic and never retreated — keystroke logging, periodic webcam captures, "productivity scores" generated by AI watching your face for signs of distraction. Employees signed consent forms buried in onboarding packets. The consent was real. The information about what they were consenting to was not.

Now smart cities. London has over 600,000 CCTV cameras. New York's Domain Awareness System links cameras, license plate readers, and facial recognition into a single monitoring infrastructure. These systems weren't put to a public vote. They accumulated, piece by piece, until opting out would mean opting out of the city itself.

This is how normalization works. Not through a single decision, but through a gradient of small ones — each normalized by the last.

The Consent Theater

Technically, you consented.

The sign at the concert venue entrance says "by entering these premises you consent to biometric surveillance." It's printed in fourteen-point font at the bottom of a sign that also lists the bag policy and tells you no outside food is permitted. You read the bag policy. You did not read the rest.

The terms of service you accepted when you downloaded the app — the one that asks for camera access to let you try on sunglasses virtually — runs 47 pages. Page 31 covers how your facial geometry data may be shared with affiliated partners for identity verification purposes. You clicked "I agree." You did not read page 31.

This is consent theater. It has the formal structure of informed consent without the substance. The legal box gets checked. The power imbalance stays intact.

Real informed consent requires that people understand what they're agreeing to, have a genuine option to refuse without penalty, and make an affirmative choice rather than being opted in by default. Airport facial recognition fails all three. Retail surveillance fails all three. The concert venue fine print fails all three.

The Traveler Privacy Protection Act of 2025 would have required affirmative consent before any biometric collection at airports. It would have barred passive surveillance and required deletion of stored images. It has not passed. The companies that profit from the current arrangement have spent substantial money ensuring it doesn't.

The Convenience Trap

Here's what the TSA enrollment booth actually says: skip the line, fly faster, no ID needed. That's the whole pitch. The cost — permanent biometric enrollment in a federal database, with no clear deletion pathway — gets a single sentence in smaller type.

People take the deal. Of course they do. Thirty seconds saved at a security checkpoint feels immediate and real. The risk of your facial geometry persisting in a government database for an unknown period feels abstract and theoretical.

This asymmetry is not accidental. Surveillance infrastructure is designed to make the immediate benefit vivid and the long-term cost invisible. The checkout line that moves faster when you pay with your palm. The apartment building door that unlocks when it sees your face. The phone that wakes up when you look at it. Each convenience is real. Each one expands the infrastructure.

The trap isn't that people are stupid. The trap is that the costs are diffuse, future-oriented, and hard to visualize — while the benefits are immediate, personal, and concrete. Human brains are not built to weight these categories equally. The architects of this infrastructure understand that. They designed around it.

Bentham's Panopticon, Updated

In 1791, Jeremy Bentham proposed a prison design called the Panopticon. The structure: a central observation tower surrounded by cells. Guards could see every prisoner at any time. Prisoners couldn't tell when they were being watched. The effect Bentham theorized — and Michel Foucault later analyzed — was that prisoners would internalize the surveillance and police themselves. You don't need guards in every tower if the prisoners believe someone might be watching.

The modern version doesn't need towers. People carry the cameras voluntarily.

Your phone knows your location continuously. Your smart TV has a camera. Your laptop's camera is on when you open the lid. The doorbell camera photographs everyone who walks past your house, uploads the footage to Amazon's servers, and shares it with local police departments on request. You bought all of this. You installed it. You keep it plugged in.

The shift Bentham couldn't have imagined: the surveilled became enthusiastic participants in building the surveillance infrastructure. It's not that the state forced cameras into homes. It's that consumer electronics companies made cameras desirable enough that people wanted them there. The result is functionally identical — comprehensive monitoring with no blind spots — but it arrived through the market rather than through coercion. This makes it significantly harder to resist.

You can refuse a cop at your door. You already agreed to the terms of service.

The Reveal Moment

FaceTwin is a digital art project built to test exactly one thing: when people think they're looking at their face in a real facial recognition database, do they question the technology or just look at the results?

The answer, consistently, is that they look at the results.

Someone sends you a link. You click it. A site called "FaceTrace" tells you it found three people who share your facial geometry — and shows you photos with 94.7% match confidence. Some people immediately clock it as fake. Most don't. The ones who don't: they laugh, share it to their story, maybe post it to their group chat. Almost nobody asks how the site got their face. Almost nobody wonders what database they were just indexed into.
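A number like "94.7% match confidence" sounds authoritative, but in most facial recognition systems it is just a similarity score between face embeddings: vectors a model extracts from each image. A minimal sketch of how such a score is typically computed, with made-up 4-dimensional vectors standing in for the 128-plus-dimensional embeddings real systems use:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for illustration only; real systems derive these
# from a trained neural network, not from raw pixels.
your_face = [0.21, -0.47, 0.88, 0.10]
candidate = [0.25, -0.40, 0.85, 0.12]

score = cosine_similarity(your_face, candidate)
print(f"match confidence: {score:.1%}")
```

Any candidate whose score clears a threshold gets reported as a "match." The precision of the displayed percentage implies certainty the underlying comparison doesn't have: the threshold is a tunable knob, and where it sits determines how many strangers share your face.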

The acceptance is the point. The experience works because we've been so thoroughly conditioned to accept facial recognition as a normal background feature of the internet that a fake version doesn't even register as alarming.

You can read more about how FaceTwin was built and what it found. The short version: the shrug is the data.

The "Nothing to Hide" Argument

The most common deflection: "I don't care if they have my face. I'm not doing anything wrong."

This argument fails on several levels, and it fails in ways that matter.

First, the threat model isn't about what you're doing right now. It's about what could be done with persistent records of your identity, location, and associations over time. People who were "not doing anything wrong" in 2010 couldn't have predicted which associations, beliefs, or movements would become legally or professionally risky in 2026. Surveillance infrastructure built today will be operated by governments and corporations whose values and agendas in ten years are unknown.

Second, "nothing to hide" assumes the rules stay stable and the judges stay fair. Facial recognition systems have documented accuracy disparities across racial categories. A system that performs well on white faces performs significantly worse on darker skin tones — meaning the people most likely to be falsely identified are also the people most likely to face serious consequences from a false identification.

Third, and most fundamentally: the argument concedes the premise. It accepts that the right to privacy is contingent on behavior — that you earn privacy by being good, and surveillance is a reasonable cost for people who have something to hide. That's not how rights work. The value of privacy isn't that it protects secrets. It's that it creates space for autonomy, dissent, association, and identity formation outside the gaze of institutions with power over you.

The nothing-to-hide argument is what surveillance capitalism wants you to believe. It transforms a structural power imbalance into a personal virtue test, and it works extremely well on people who haven't needed privacy yet.

What the Panopticon Produces

Foucault's insight about the original Panopticon wasn't that it was evil. It was that it was efficient. You don't need to watch everyone all the time if everyone believes they might be watched. The behavior change is the goal. The conformity arrives without anyone having to actually watch.

The modern version produces the same effect at scale. People who know they're being tracked alter their behavior. They search for things differently. They go to different places. They associate with different people. They hold back. They self-censor. They perform normalcy.

This doesn't require a conspiracy. It doesn't require anyone to actively use the surveillance data against anyone. The behavior change happens in anticipation. The panopticon works because you can't tell when the tower is occupied. The modern version works because you can never tell who's watching the feed.

The question FaceTwin keeps asking, in its small way: do you know you're in the tower's view? And if you know — does it change anything?

If the answer is no, that's not a personal failure. That's the system working exactly as designed.

Try it yourself at pleasejuststop.org. See what it feels like when your face surfaces somewhere you didn't put it. See if you flinch.


Frequently Asked Questions

Why do people accept facial scanning at airports if they could opt out?

The opt-out exists, but it requires knowing about it, asking for it, and accepting the friction of manual screening while everyone else moves through the biometric lane. The default is enrollment. The opt-out is available; it's just made inconvenient enough that most people don't bother. This is intentional design — systems that want participation structure the default to produce it.

Is surveillance normalization actually harmful if nothing bad has happened to most people?

The harm isn't always direct or immediate. Surveillance infrastructure built now will be operated under future governments with unknown values. Documented racial disparities in facial recognition accuracy mean false identifications fall disproportionately on specific communities. The chilling effect on free expression and association is real even when no one is actively reviewing your data. "Nothing bad has happened yet" describes a risk posture, not an absence of risk.

How is the "consent" in surveillance systems different from real consent?

Informed consent requires that you understand what you're agreeing to, have a genuine option to refuse without significant penalty, and make an affirmative choice rather than being opted in by default. Airport facial recognition, retail analytics, and most workplace monitoring fail all three criteria. You can technically opt out of most systems, but the information about how to do so is not prominently disclosed, the penalty for opting out (slower lines, reduced access, employment consequences) is real, and the default is participation. This is consent theater — the legal structure without the substance.

What can I actually do about facial recognition normalization?

At the individual level: opt out when the option exists (TSA allows manual screening on request). Use privacy-protective defaults where possible. At the systemic level: support legislation like the Traveler Privacy Protection Act and state biometric privacy bills. Understand that companies like Clearview AI built billion-image databases legally, and that changing this requires legal change, not individual behavior. Artists and researchers building tools that make surveillance visible — like the community pushing back on facial recognition infrastructure — are doing work that matters. The problem is structural. The solutions are too.