If you are going to promise users privacy, then you really need to follow through. Tea Dating Advice, a service that advertises itself as a safe space for women to anonymously share information about former partners—to warn others about abuse and cheating—says that it is locked down. Users are not allowed to take screenshots, and the app claims to verify that its users are women. So why did Tea let me, a middle-aged man, create an account just a few days after it suffered two major security breaches?
Last month, hackers wormed their way into Tea and accessed sensitive user data; 70,000 user images and more than 1 million private messages were reportedly leaked, including users’ driver’s-license photos, communications about abortions, and phone numbers shared in conversations. Even after all of this became public, I was still able to fool the app’s verification feature with a basic approach: I found a generic photograph of a woman on Google and held my phone’s selfie camera up to it. Whether Tea uses a facial-recognition algorithm or a human to approve the verification photos is unclear; the company did not clarify when I asked. Either way, 30 minutes after my selfie trick, I had an account. I quickly deleted the app, but a malicious user (a stalker, for example, or a man curious about whether his abuse had been mentioned on the app by his girlfriend) would have had free rein. Sonia Portaluppi, a spokesperson for Tea, declined to comment, noting “the legal nature of this subject matter.” (I had included my question about gaining access to Tea in an email about the recent security breaches; it was not clear what, specifically, Portaluppi was referring to.)
Tea’s security failures are notable given the app’s sales pitch. But we are entering an era when such problems may become routine. As an anti-surveillance advocate and lawyer, I am concerned by the normalization of facial surveillance and identity-check mandates, which limit access to digital services and erode the internet’s long-standing default of anonymity—all in service of a flawed idea of internet “safety.” In the United Kingdom, for instance, the newly enacted Online Safety Act requires many websites and apps to verify that users are 18 or older, in many cases by having them submit government-issued ID or selfies. In the United States, federal lawmakers are weighing a similar bill, the Kids Online Safety Act, or KOSA. And some states have already put similar mandates in place. Although keeping minors away from harmful content is a noble goal, the result will be a constellation of online services with invasive, insecure, and ultimately ineffective identity checks.
Under the U.K.’s safety act, companies that refuse to implement age-surveillance software could face fines in the billions, and their executives could face jail time. Owners of porn sites may not get much sympathy as a persecuted class, but the “primary priority content” the law targets is defined so broadly that it sweeps in far less controversial platforms, such as Bluesky and Discord. Wikipedia has said it may have to limit access to its site in the U.K. as a consequence, and other platforms have enacted broad restrictions to avoid any potential violation of the law. According to TechDirt, people in the U.K. have had to verify their age in order to access protest videos on X or Reddit communities about substance abuse and menstruation.
Unsurprisingly, all of this has led people to circumvent the verification features, much as I did with Tea. Some use VPNs, which make it appear that a user is connecting from a different country; Forbes and PC Gamer reported that U.K. teens have also managed to fool “live” facial-recognition scans by pointing their phones at realistic video-game characters. In principle, a cumbersome identity-verification process makes users feel safe and keeps unauthorized users out. In practice, these processes are easily subverted by anyone with the will and technical know-how. And because verification means collecting IDs or selfies, a website with sensitive content arguably becomes less safe to use: The stakes of a breach become much higher. (Sites requiring identity verification can use third parties—many of which say that personal information is encrypted and not permanently stored, although those intermediaries may bring privacy concerns of their own—but they do not have to.)
Here in America, numerous states are implementing age-verification requirements for adult content and social-media apps. Ambiguous laws in Utah, Louisiana, and other states also broadly block children from “harmful” content. Speaking about pornography, Utah State Senator Todd Weiler, the measure’s sponsor, has said, “I don’t think it’s helpful when a kid is forming their impressions of sex and gender to have all of this filth and lewd depictions on their mind.” That he referenced gender in particular feels to me like cause for concern: Given their ambiguity, these statutes could very well be abused as a political tool to block LGBTQ content, medically accurate information about abortion, and other such material.
This pattern isn’t confined to conservative states. Last year, New York enacted the SAFE for Kids Act, which requires social-media firms to verify users’ ages. Lawmakers’ impulse to protect teens from social-media toxicity is easy to understand, in New York as in the U.K., but the devil is in the details: It is unclear how New York will determine which users are children, and there’s a risk that lawmakers will repeat the mistakes of other states and countries by mandating technology such as facial recognition.
There are plenty of reasons to be worried about social media’s effects on younger users. But lawmakers could instead push for generally applicable safety measures that would protect kids and adults alike. Social media can be a trying and toxic place whether you’re 13 or 23. Certainly the effects may be more pronounced for teens, but there’s little evidence to suggest that the brain is inoculated against the technology’s harms the moment a user turns 18, and many vulnerable adults are at risk as well, such as those with sustained mental-health challenges. The measures New York’s law prescribes for children—curtailing late-night notifications, restricting algorithmic feeds—could, in theory, apply to adults. And if the law were evenly applied to everyone, there would be no need for invasive age surveillance.
The truth is that there are few good ways to verify someone’s identity online, and none that are both effective and anonymous. With all the awful content floating around the internet today, giving up anonymity might feel like a small price to pay for public safety, but adopting these laws would mean sacrificing more than privacy. Losing anonymous internet access means giving companies and government agencies more power than ever to track our activities online. It means transforming the American conception of the open internet into something reminiscent of the centralized tracking systems we’ve long opposed in China and similar countries. The prospect of an internet tethered to our real identities has never felt so threatening.