I Built an Experiment to Test If We've Given Up on Privacy
I built an experiment to test whether we've already given up on privacy.
It's a fake "face twin finder" — a website that supposedly finds strangers who look like you. Someone pastes a link to a friend's public photo. AI generates the lookalikes. They send their friend a link.
Their friend clicks it. A website they've never visited instantly shows them strangers who look just like them. They never uploaded a photo. Never signed up. Never consented to anything.
Nobody questions how the site got their face. Nobody asks about the database of millions of faces it would need. Nobody worries that their face was probably just added to it.
They just shrug and keep scrolling.
Before you say "isn't sharing someone's photo the problem?" — that's the point. The photo was already public. Already scrapable. Already trainable. This just forced someone to notice. The reveal shows them the exact URL their photo came from. All data deleted within 24 hours.
Has anyone else tested how people respond to AI-generated identity tools?
What Actually Happens
Here's the full arc. You go to pleasejuststop.org and paste a URL to a friend's publicly available photo. That's it — no file upload, no account, no friction. The AI generates three altered face variants: same person, different contexts. A leather jacket against a wall. A candid outdoors. A winter snapshot. Each one deliberately degraded to look like a real internet photo, not an AI portrait.
Your friend gets a link to what looks like a legitimate facial recognition product called "FaceTrace." Professional UI, confidence percentages, match quality scores — the whole corporate surveillance aesthetic. They see their own face staring back at them with three "matches" from a database that doesn't exist.
Then they hit the reveal. A stark data receipt — what happened, how it happened, and the source URL proving their photo was already public. The link burns after the reveal. You can't go back.
The joke is the delivery mechanism. The data receipt is where the art lives.
Why Making Fake Photos Was the Hardest Part
The entire experience depends on the recipient believing the "matches" are photos of real strangers. The moment an image looks AI-generated, the whole thing falls apart.
Here's the problem: AI models want to make beautiful images. You give them a face and they hand back a portrait. Smooth skin. Soft lighting. Shallow depth of field. Perfect composition. That's the opposite of what a real internet photo looks like.
Real photos from facial recognition databases look terrible. They're JPEGs compressed four generations deep. Overhead fluorescents. Off-center framing. Someone's Facebook profile pic from 2013. A car selfie forwarded through WhatsApp. The visual signature of "found on the internet" is specific, and AI doesn't produce it naturally.
So you fight the model. Negative prompts to kill the bokeh. Post-processing to add compression artifacts, color shifts, resolution drops. Each generated image gets run through a degradation pipeline that simulates a different source — an Android phone from 2015, a WhatsApp forward, a screenshot repost. I wrote about the full technical breakdown separately because the process of making AI images look worse on purpose turned out to be genuinely interesting.
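A pipeline like that can be sketched in a few lines of Pillow. This is a minimal illustration under my own assumptions, not the project's actual code — the function name, parameters, and the specific degradation steps (resolution drop, slight color shift, stacked JPEG re-encodes) are stand-ins for whatever the real pipeline does:

```python
from io import BytesIO

from PIL import Image, ImageEnhance


def degrade(img: Image.Image, generations: int = 4, quality: int = 35) -> Image.Image:
    """Roughly simulate a photo that has been reposted several times."""
    # Resolution drop: downscale, then upscale back, permanently losing detail.
    w, h = img.size
    img = img.resize((w // 2, h // 2), Image.BILINEAR).resize((w, h), Image.BILINEAR)

    # Mild color shift: desaturate slightly and nudge the red channel up,
    # approximating an old phone camera's color cast.
    img = ImageEnhance.Color(img.convert("RGB")).enhance(0.85)
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * 1.05)))
    img = Image.merge("RGB", (r, g, b))

    # Stack JPEG compression artifacts by re-encoding several "generations" deep.
    for _ in range(generations):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img
```

Each simulated source (an old Android phone, a WhatsApp forward, a screenshot repost) would presumably just be a different set of parameters fed into a function like this — more generations and lower quality for the most "laundered" images.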
The prompt engineering had its own death spiral. Every time a minor issue appeared — a woman looking slightly older, a color cast — the instinct was to add a defensive instruction. "Don't age the person." But that draws the model's attention to aging, and the output gets worse. More instructions, more diluted attention, worse results. The fix was always removing words, not adding them. A good prompt is two sentences. More than three and you've already lost.
The Real Version Isn't Fiction
One question that keeps coming up: if facial recognition without consent is the problem, why isn't it illegal?
Short answer: it mostly isn't. Illinois has BIPA. The EU has the AI Act. At the federal level in the US, there's nothing. No opt-out mechanism. No right to know if your face has been indexed. No restriction on how companies can collect, store, or sell your facial data.
Products substantially similar to the fictional "FaceTrace" exist and operate legally right now. Clearview AI scraped over 30 billion public photos to build a searchable facial recognition database. PimEyes lets anyone search a face against the open internet. Neither asks consent from the people in the database.
FaceTwin's corporate design is deliberately close to these real products. Because the real products are already close enough to be mistaken for satire.
Your Face Is the One Thing You Can't Change
There's a cruelty specific to facial recognition that sets it apart from other forms of surveillance.
You can change your password. Cancel your credit card. Delete your account. Use a VPN. Opt out of cookies.
Your face is different. Once it's in a database — commercially indexed, scraped by a startup, stored by a government — it stays there. It ages with you. Every camera you've ever walked past becomes a potential data point. Every photo you've ever been tagged in becomes linkable. Facial recognition doesn't track what you do. It tracks who you are, where you are, and when.
FaceTwin uses that permanence as its mechanism. Your friend's face works because their face is their face. There's no revoking it after the fact.
Try It Yourself
pleasejuststop.org. Paste any public photo URL. The whole thing takes under a minute.
The sender and the recipient are both experiencing something real — just from different angles. One is demonstrating how easy it is to weaponize a public face. The other is experiencing what it feels like to have their face surface somewhere they didn't put it.
Neither requires a real database. That's the uncomfortable part.
Frequently Asked Questions
Does FaceTwin actually store or use anyone's face data?
No. Photos are used only to generate the altered variants. All data — the original photo, the generated images, the session record — expires and is deleted within 24 hours. The project simulates what real facial recognition feels like. It doesn't actually perform it.
Is this legal? Can I actually send this to people?
FaceTwin is a digital art project. You're generating altered images from a publicly available photo and sending someone a link to a fake product page. The reveal makes clear what happened. Use your judgment about who you send it to.
What makes the AI-generated "match photos" convincing?
Making them look bad in the right ways. AI defaults to polished portraits, but real database photos are degraded — compressed, cropped, poorly lit. The generation pipeline deliberately adds compression artifacts, resolution drops, and color shifts to simulate real internet photos. Full technical writeup here.
Why does this matter if people already know facial recognition exists?
Knowing something exists abstractly and experiencing it firsthand are different. Most people have never felt the specific feeling of seeing their own face surface on a website they've never visited, with a confidence percentage attached to it. That gap — between abstract knowledge and felt experience — is where the project lives.
Could a real version of FaceTrace actually exist?
It already does. Clearview AI, PimEyes, and others operate searchable facial recognition databases built from publicly scraped photos. No consent required. FaceTwin's design is deliberately close to the real thing because the real thing is already indistinguishable from what most people would call dystopian fiction.