The Privacy Loophole: Why There's No US Federal Facial Recognition Law
Your email is protected by federal law. Your phone calls, too. Your medical records, your financial data — Congress has weighed in on all of it.
Your face? Nothing. No federal statute. No consent requirement. No right to sue.
In 2026, there is still no comprehensive US federal facial recognition law. Companies can scrape your photo from Instagram, build a profile, and sell access to whoever wants it — and unless you live in Illinois, Texas, or a handful of other states, there is no law stopping them.
This isn't an oversight. It's a choice.
What the Rest of the World Has Done
The contrast with other developed economies is hard to ignore.
The EU's AI Act entered into force in August 2024. By February 2025, its prohibitions were already in effect. The list of banned practices reads like a surveillance capitalist's wish list that got vetoed: no untargeted scraping of the internet or CCTV footage to build facial recognition databases. No real-time remote biometric identification by law enforcement in public spaces (with narrow exceptions). No using AI to infer sensitive characteristics from a person's face.
These aren't aspirational goals. They're hard legal prohibitions with meaningful fines attached.
The full AI Act applies from August 2026, bringing compliance requirements for high-risk biometric systems — transparency obligations, accuracy standards, human oversight requirements. Companies operating in the EU are already rebuilding products around these rules.
Meanwhile in the US, Congress is still on version one of the discussion.
Illinois Proved It Works
The argument against biometric privacy laws — that they'd chill innovation, create legal uncertainty, burden businesses — collided with reality in Illinois.
Illinois passed the Biometric Information Privacy Act (BIPA) in 2008. BIPA requires written consent before any company can collect or use biometric identifiers: fingerprints, retinal scans, facial geometry. It gives individuals a private right to sue. It sets statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless one.
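Those per-violation figures compound quickly against large databases. A back-of-envelope sketch of the exposure (the function name and the one-violation-per-person simplification are illustrative, not a legal model; actual accrual rules can multiply the total):

```python
def bipa_exposure(class_size: int, intentional: bool = False) -> int:
    """Rough statutory-damages exposure under BIPA (740 ILCS 14/20).

    Assumes exactly one violation per class member -- a deliberate
    simplification for scale, not a damages calculation.
    """
    per_violation = 5_000 if intentional else 1_000
    return class_size * per_violation

# A database covering one million state residents:
bipa_exposure(1_000_000)                    # $1 billion (negligent)
bipa_exposure(1_000_000, intentional=True)  # $5 billion (intentional/reckless)
```

Even under the most conservative assumption, exposure scales linearly with the number of people in the database, which is why mass scraping and BIPA are fundamentally incompatible.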
For a decade, it mostly sat on the shelf. Then Clearview AI happened.
Clearview scraped over 30 billion photos from the public internet — Facebook, Instagram, LinkedIn, news sites — and built a facial recognition database it licensed to law enforcement, private investigators, and businesses. No consent. No notice. No opt-out.
BIPA made that illegal in Illinois. The ACLU sued on behalf of Illinois residents. Five years of litigation followed.
In 2025, a federal court approved a $51.75 million settlement — structured as a 23% equity stake in Clearview AI, a creative outcome the court called "fair, reasonable, and adequate." Class members don't get cash today; they get ownership in the company that violated their rights. It's an unusual mechanism, but the underlying message is clear: collecting biometric data without consent has a price.
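The equity structure also implies a valuation. A minimal sketch of the arithmetic (illustrative only; the real settlement terms are more complex than a single division):

```python
def implied_valuation(settlement_value: float, equity_fraction: float) -> float:
    """Company valuation implied by settling with an equity stake instead of cash."""
    return settlement_value / equity_fraction

# A $51.75M settlement delivered as a 23% stake implies the company
# is being valued at roughly $225 million.
implied_valuation(51.75e6, 0.23)
```

In other words, the class collectively holds nearly a quarter of a company implicitly valued around $225 million, and the payout only materializes if Clearview is later sold or goes public.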
That price only existed because Illinois had a law. Everywhere else, Clearview's scraping was perfectly legal.
If you've ever wondered what happens when surveillance capitalism meets actual regulation — that's what happens. The Clearview story is worth understanding in full.
The Federal Graveyard
It's not that Congress hasn't tried. The bills pile up and die.
The Facial Recognition and Biometric Technology Moratorium Act has been introduced in multiple sessions. Various facial recognition bills targeting federal agency use, law enforcement databases, and commercial deployment have been proposed. The 119th Congress (2025-2026) has at least three relevant bills circulating, including the Facial Recognition Act of 2025.
None have passed. Several haven't made it out of committee.
The reasons are predictable: industry lobbying, disagreement on scope, debates over law enforcement carveouts, and — most fundamentally — the absence of political urgency. Facial recognition doesn't have a constituency the way financial data or healthcare data does. The harms are diffuse, slow-moving, and not yet visible enough to generate the kind of pressure that moves legislation.
The result is a patchwork where your rights depend entirely on your zip code.
What States Are Doing Instead
In the absence of federal action, 23 states have now passed or expanded laws restricting the mass collection of biometric data.
Three have broad, meaningful laws:
Illinois (BIPA, 2008) — The gold standard. Private right to sue. Written consent required. Statutory damages. The Clearview settlement happened because of this law.
Texas (2009, expanded 2025) — Requires consent for commercial biometric data collection. Texas passed new AI legislation in 2025 that similarly prohibits collection without permission, though enforcement relies on the attorney general rather than private suits.
Washington (2017) — Consent and transparency requirements, but no private right to sue. The missing piece is significant: without the ability to sue, enforcement is left to regulators who may or may not prioritize it.
The 2025 wave added more states to the list. Colorado passed what advocates called the strongest new law, requiring explicit consent. Delaware and New Jersey added opt-in requirements. Oregon approved rules requiring consumer opt-in before companies collect facial, eye, or voice data.
But the gaps are massive. Most of these laws apply only to private companies. Police, TSA, immigration enforcement — they continue operating largely without consent requirements or warrant obligations in most states. Montana and Utah are outliers, requiring warrants for most law enforcement deployments. Maryland limits police use to serious crime investigations with notice and transparency provisions.
And enforcement without a private right of action is weak by design. A law that requires a regulator to notice, investigate, and prosecute every violation will cover a fraction of the cases that private suits would surface.
Why This Matters Right Now
Facial recognition is not a future technology. It's already deployed at airports, stadiums, retail stores, and apartment buildings. It's embedded in social media platforms, photo apps, and security cameras that connect to the cloud.
The databases feeding these systems were built before meaningful regulation existed. Your face is already in them. You can't change your face the way you can change a password or cancel a credit card.
The EU figured this out early enough to write rules before the infrastructure was fully entrenched. The US did not. And the longer a federal standard is delayed, the harder it becomes to unwind what's already been built.
The project at pleasejuststop.org makes this concrete. The whole premise works because people have normalized clicking links, sharing photos, and assuming that "publicly available" means "harmless." It tests whether we've already given up on privacy — and the results are not reassuring.
FAQ
Is there a US federal facial recognition law in 2026?
No. There is no comprehensive federal law governing facial recognition or biometric data collection in the United States. Multiple bills have been introduced in Congress but none have passed. Your rights depend entirely on which state you live in.
What is BIPA and why does it matter?
The Illinois Biometric Information Privacy Act (BIPA) is a 2008 state law requiring written consent before any company can collect biometric identifiers including facial geometry. It's significant because it includes a private right to sue, which has made it the most enforced biometric privacy law in the country. The Clearview AI settlement — a $51.75 million equity stake awarded to Illinois residents — happened entirely because of BIPA.
What does the EU AI Act say about facial recognition?
The EU AI Act, which began applying in phases from February 2025, prohibits building facial recognition databases through untargeted scraping of the internet or CCTV footage. It also bans real-time remote biometric identification by law enforcement in public spaces except in narrowly defined circumstances. Full compliance requirements for high-risk biometric systems apply from August 2026.
Which US states have the strongest biometric privacy protections?
Illinois has the strongest protections, with BIPA's private right to sue and statutory damages. Texas and Washington have consent requirements but rely on government enforcement rather than private suits. Colorado, Delaware, and New Jersey passed new laws in 2025 requiring explicit opt-in consent for biometric data collection.
Can companies legally use facial recognition on publicly posted photos?
In most US states: yes. If a photo is publicly accessible on social media, a news site, or anywhere on the open internet, companies can scrape it and use it to build facial recognition profiles without your consent. Illinois is the main exception under BIPA. This is precisely what Clearview AI did to build its 30+ billion photo database — and it was only illegal for the Illinois residents who could sue under BIPA.