The Face You Can't Change: Why Your Facial Data is Permanently Vulnerable
Your password got leaked? Reset it. Credit card compromised? Cancel it, get a new number in two days. Social Security number exposed? Painful, but there are steps.
Your face? There are no steps.
This is the thing nobody talks about when they say "I have nothing to hide." It's not about what you're hiding. It's about what happens when that data — the geometry of your skull, the distance between your eyes, the exact curve of your jaw — gets copied, stored, sold, and leaked. And then exists forever in systems you'll never see, controlled by people you'll never meet.
Biometric data is permanently vulnerable. Not "harder to fix than a password." Permanently. Let's talk about why that matters right now.
Passwords vs. Faces: A Very Unfair Fight
When a company suffers a data breach and your password leaks, the damage is real but bounded. You change the password. You enable two-factor auth. You move on. The leaked credential becomes useless.
When your biometric data leaks, the damage is unbounded. And permanent.
You can't issue yourself a new face.
This is the core asymmetry that makes facial recognition uniquely dangerous. Traditional credentials are revocable. Biometrics are not. Your fingerprint, iris pattern, and facial geometry are fixed identifiers that you carry everywhere, visible to cameras, logged by systems, and stored in databases you know nothing about.
A leaked password is a problem for today. Leaked facial data is a problem for the rest of your life.
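The asymmetry can be sketched in a few lines of Python. This is a toy illustration, not a real pipeline: the four-number vectors stand in for the 128-plus-dimensional embeddings actual recognition systems compute, and the threshold is invented.

```python
import hashlib
import math
import os

# A password is revocable: rotate it (and its salt) and the leaked
# credential stops matching anything.
def hash_password(password: str, salt: bytes) -> str:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

old = hash_password("hunter2", os.urandom(16))
new = hash_password("correct-horse", os.urandom(16))
assert old != new  # the old hash is now useless to an attacker

# A face template is not revocable: systems reduce a face to a numeric
# embedding, and any copy of that vector keeps matching the same person.
leaked_2011 = [0.12, 0.87, 0.33, 0.54]   # scraped from an old photo
camera_2025 = [0.13, 0.85, 0.34, 0.55]   # fresh capture, same face

def distance(a, b):
    # Euclidean distance between embeddings; small distance = same person.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(leaked_2011, camera_2025) < 0.1)  # prints: True — still a match
```

There is no operation on the second half of that sketch analogous to rotating the salt in the first half. That is the whole asymmetry.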
What "Biometric Data Permanently Vulnerable" Actually Means in Practice
Here's a concrete example. In 2020, Clearview AI — a company you've probably never heard of — suffered a data breach. Their entire customer list was exposed.
What is Clearview AI? They scraped 50+ billion photos from the public internet. Facebook profiles, news photos, LinkedIn headshots, old MySpace pages. Billions of faces, matched to names, mapped into a searchable database. They sold access to law enforcement, private investigators, and, it turned out, hundreds of other organizations.
The breach exposed who was using the system. But the database itself — more than 50 billion faces — is still out there. Being used. Growing.
And here's the part that should stop you cold: most of those photos were public. The people in them didn't consent to being in a facial recognition database. They just posted a picture on Facebook in 2011.
That photo is still working against them today.
This is what "biometric data permanently vulnerable" means. Not that a specific company got hacked. That once your face is in a system, it doesn't matter if that company gets fined, shut down, or goes bankrupt. The data persists. It gets copied. It gets sold.
Clearview faced a €30.5 million fine from Dutch regulators in 2024 and a $51.75 million settlement in the US in 2025. They're still operating. They signed a $9.2 million contract with ICE in 2025.
Fines don't delete databases.
The "It Was Already Public" Trap
Here's the argument you'll hear: "If you posted it publicly, you can't complain."
It sounds reasonable on the surface. In practice, it's doing a lot of heavy lifting to justify something deeply wrong.
When you post a photo on Instagram, you're sharing it with friends, family, maybe strangers who follow you. You're not consenting to have your facial geometry extracted, vectorized, stored in a commercial database, and matched against millions of surveillance cameras.
The context matters. Legal scholars call this "contextual integrity" — information shared in one context (friends seeing your vacation photo) doesn't automatically carry consent for every other use of that information (law enforcement running your face through a recognition system).
But the technical reality is blunter: the systems don't care. They'll take what they can crawl. Public = fair game, as far as they're concerned.
This is exactly what FaceTwin is built to demonstrate. The experience shows you something uncomfortable: a stranger pastes a link to your publicly accessible photo, and within seconds, AI is working on your face. The photo was always public. You just didn't think about all the things that could be done with it.
See what happens when your public photo meets AI at pleasejuststop.org.
What Makes a Facial Recognition Data Breach Worse Than Other Breaches
Not all data breaches are created equal. When your facial data ends up in the wrong hands, a few things make it categorically worse than most other leaks.
You can't opt out retroactively. With financial data, you can freeze your credit. With email, you can change addresses. There is no equivalent for your face. It's been scanned, it's been stored, and you have no mechanism to recall it.
The harm compounds over time. A stolen password is dangerous right now and worthless the moment you change it. Stolen facial data gets more dangerous as recognition technology improves. A mediocre facial recognition system today becomes a highly accurate one tomorrow — running against the same database of stolen faces.
The data is self-verifying. If someone steals your password, they still need to know where to use it. If someone has your facial geometry, every camera you walk past is a potential verification point. Airports. Stores. Streets. The stolen data works everywhere you go.
No one will tell you. When a credit card gets fraudulently used, you get an alert. When a facial recognition system matches your face against a database you never knew existed, you get nothing. There's no notification. There's no opt-out. There's frequently no law requiring either.
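The compounding-harm point above can be simulated. This is a deliberately crude sketch — `embed` just adds noise to hand-picked numbers, standing in for real models whose extraction error shrinks with each generation — but it shows why the same stored photo becomes more identifying as the technology improves.

```python
import random

random.seed(0)  # deterministic toy run

# The same scraped photo, stored once, never re-captured.
stored_photo = [0.12, 0.87, 0.33, 0.54]   # stand-in image features
live_capture = [0.13, 0.85, 0.34, 0.55]   # the same person, today

def embed(photo, model_noise):
    # A crude stand-in for a recognition model: better models extract
    # the underlying features with less error.
    return [x + random.uniform(-model_noise, model_noise) for x in photo]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Older-generation model: noisy embeddings, unreliable matches.
d_old = distance(embed(stored_photo, 0.20), embed(live_capture, 0.20))

# Newer-generation model: tight embeddings, confident matches — run
# against the very same photo that leaked years earlier.
d_new = distance(embed(stored_photo, 0.02), embed(live_capture, 0.02))

print(d_old > d_new)  # the old data became *more* identifying over time
```

Nothing in the simulation touches the stored photo. Only the model improved — which is exactly the situation a person whose photos were scraped a decade ago is in now.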
What Ordinary People Should Actually Do
This isn't a post about wrapping your head in tinfoil. But "nothing to hide" isn't an argument anymore — it's a way of avoiding an uncomfortable truth.
A few things worth considering:
Audit your public photos. Not to delete everything. Just to know what's out there. That LinkedIn headshot from 2018. The profile picture that's been your face online for years. These are training data for systems you'll never see.
Understand that public doesn't mean consented. Just because something is technically accessible doesn't mean you endorsed every possible use of it. You can honor that distinction and act on it — different platforms, different photos for different purposes.
Push back on biometric collection where you have a choice. Loyalty programs that want your face. Phones with facial unlock tied to third-party apps. Venues using facial recognition for entry. You sometimes have a choice. Use it.
Expect things to get worse before they get better. There's still no federal facial recognition law in the US. The patchwork of state laws is thin. The companies moving fast have a significant head start over regulators.
The uncomfortable reality is that most of the facial recognition risk you face today was locked in years ago, the last time you posted a photo publicly. The photo was already public. The data is already out there.
What we do from here is the only part we can still control.
FAQ
What makes facial data different from other personal data in a breach?
Most personal data can be changed or contained. You can close a bank account, change an email address, freeze your credit. Your face is fixed. It's an identifier you carry everywhere, and once it's in a database, there's no mechanism to revoke it. A facial recognition data breach is permanent in a way that other breaches aren't.
Can I get my face removed from a database like Clearview AI's?
Technically, some jurisdictions give you the right to request deletion. In practice: it's unclear whether requests are honored, whether deleted records stay deleted, and whether data has already been copied to other systems. Europe's GDPR gives you stronger rights than US law does, but enforcement is slow. Clearview has been fined multiple times by European regulators and continues to operate.
Is facial recognition data collection legal?
In most of the US, yes. There's no comprehensive federal law governing facial recognition. Illinois has BIPA (the Biometric Information Privacy Act), which is why most of the major biometric privacy lawsuits have been filed there. A handful of cities have banned government use of facial recognition. Private companies scraping public photos operate in a largely unchecked space.
I only post photos on private accounts. Am I safe?
Safer, not safe. Data breaches happen on platforms too — the platform's database is a centralized target. People you share content with can screenshot and share further. "Private" is a setting, not a guarantee. But yes, limiting what's publicly accessible reduces your surface area for scrapers like Clearview.
What's FaceTwin trying to show people?
FaceTwin — the project behind pleasejuststop.org — makes the abstract concrete. Someone pastes a link to a publicly accessible photo of you. Within seconds, AI has your face. The reveal at the end shows you exactly what happened, including where the photo came from. It's designed to produce the feeling that a blog post can only describe. The photo was already public. That's the whole point.