Your Body Is Betraying Your Right to Privacy

March 24, 2026

Know thyself. It’s an old adage that has new resonance in the digital age. Today, you can buy smart devices that monitor your heartbeat, blood pressure, exercise habits, water intake, sleep, mood, menstrual cycle, sexual activity, and meditation patterns, not to mention your poop. The internet of things has turned into what academic and author Andrea Matwyshyn has termed the “Internet of Bodies” with the promise of selling you insights about your “quantified self.”

The desire for self-awareness is not new, but these data offer a different twist on enlightenment. Millions of Americans live with a smartwatch that reminds them to stand, breathe, and take a few more steps to meet their daily exercise goals. This helpful (and healthful) algorithmic prompt only works, of course, because your smart device is tracking your bodily activity. It literally knows you are breathing, which can be helpful to police if for some reason you stop. The data we produce—from our step count to our DNA—is increasingly coming under surveillance.

Not all of this surveillance is unwelcome. Many medical professionals have embraced digital tracking to help their patients. Smart pacemakers measure heartbeats. Digital pills record when someone last took their medication. Smart bandages can warn of early infection. These innovations offer the potential to improve medical outcomes by linking data in and on our bodies to our digital health records. They rely on small sensors that can be placed in watches or implanted in medical devices, allowing you to monitor your own vital signs or to check on friends and family members with health issues.

Of course, there are potential downsides to making medical data so available. Digital pills might inform your doctor (or parole officer) that you’ve stopped taking your psychiatric medication; it’s no coincidence that the first such pill approved by the FDA treats schizophrenia and other mental health disorders. In addition to helping with your marathon training, the data from your smartwatch can identify times when you are using cocaine or having sex.

Recent laws criminalizing abortion raise the stakes of collecting this kind of information. Almost a third of women use period trackers to monitor their reproductive health. Many of these apps—such as Flo, used by 48 million women—collect information about the user’s mood, body temperature, symptoms, ovulation, and sexual partners, as well as their location. Even if a user kept the result of her pregnancy test off the app, her missed period, combined with weeks of recorded nausea, would offer a pretty good clue as to her condition. In states that have restricted abortion access, prosecutors could use this data as evidence of a crime.

In states where abortion remains legal, reproductive information might find its way into the hands of marketers instead. In 2023, the Federal Trade Commission fined the “femtech” company Premom for selling data to third parties, including Google and companies in China. Premom, like Flo, which also settled a complaint by the FTC, did not disclose the fact that it was sharing this personal data—which, in the case of Premom, included information about “sexual and reproductive health, parental and pregnancy status, as well as other information about an individual’s physical health conditions and status.”

Some femtech companies have tried to protect personal data by limiting the amount they collect and localizing it on the device, refusing to log IP addresses, or creating an anonymous mode, but companies and users are still at the mercy of court orders. US companies are bound by US laws, and when abortion is criminalized in a state, data that could provide evidence of an abortion is subject to warrant requests by investigating agents. The only way to avoid turning over the data is by not collecting it, which is difficult for a business predicated on collecting data.

The rise of mental health apps and online therapy has exposed another vector of self-surveillance. The online therapy company BetterHelp has over 2 million users who benefit from its online and mobile mental health services. You can sign up and answer questions about your mental health issues (such as problems with depression, intimacy, or medications), and the service provides connections, advice, and resources to help. Then it turns around and sells your personal data to Facebook and other targeted-advertising companies—or at least it did until 2023, when the FTC brought a complaint against BetterHelp and its subsidiaries to stop the practice and ultimately imposed $7.8 million in fines.

BetterHelp was not alone in marketing information about its users’ mental health. As the Mozilla Foundation reported after an in-depth investigation into the industry, many mental health apps are lax on privacy. Most failed privacy audits, neglecting to secure (or even outright profiting from) personal mental health data. Even online suicide prevention services turned out to be providing data to Facebook through automated pixel-capture technologies. While there might be nuanced arguments to make about anonymity when it comes to suicide prevention, it’s hard to make the case that advertisers should get access to people in crisis for commercial gain. And of course, if data is available for sale, it is also available to law enforcement and the government. Just imagine how mental health data could be used to establish motive in a crime or embarrass a political opponent.
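A tracking pixel is mechanically simple: the page embeds a tiny image hosted by the advertiser, and the image URL’s query string carries the event details. The Python sketch below is schematic only; the endpoint and parameter names are invented for illustration, not Meta’s actual pixel API, which runs as JavaScript in the browser.

```python
from urllib.parse import urlencode

# Schematic of how a tracking "pixel" leaks data: the browser fetches
# a tiny image, and the image URL's query string smuggles out what
# the visitor just did. Endpoint and parameter names are hypothetical.

def pixel_url(base: str, event: str, page: str, user_hint: str) -> str:
    params = urlencode({
        "ev": event,      # what the visitor did
        "dl": page,       # the page they did it on
        "uid": user_hint, # a cookie or hashed identifier
    })
    return f"{base}?{params}"

# When a visitor submits an intake questionnaire on a mental health
# site, the browser silently requests something like:
print(pixel_url(
    "https://tracker.example.com/px.gif",
    event="intake_form_submitted",
    page="https://therapy.example.com/depression-questionnaire",
    user_hint="a1b2c3",
))
```

The image itself is invisible; the request, with its query string, is the product.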

Police are intensely interested in the secrets our bodies can reveal. The FBI has invested billions of dollars in its Next Generation Identification (NGI) biometrics database, billed as the largest such database in the world. Through this system, the FBI collects “voice profiles, palm prints, faceprints, iris scans, tattoos, and, of course, fingerprints,” with the goal of using this information to identify suspects (and victims). The system also pulls in genetic information from CODIS—the agency’s Combined DNA Index System—which contains 21.7 million DNA profiles of offenders and arrestees (almost 7 percent of the US population). Many states have built their own similar databases using samples from arrestees, victims, and other sources, which are sometimes collected in ethically dubious ways. The district attorney’s office in Orange County, California, for example, had a program where it would dismiss misdemeanor violations in return for a DNA sample. That “spit and acquit” sample, of course, could later be used to match suspects in future prosecutions.

New Jersey police went one step further. Under state law, all newborn babies are required to provide a blood sample to be screened for certain life-threatening genetic disorders. The blood sample goes to the Newborn Screening Laboratory, operated by the New Jersey Department of Health, which shares the results with parents as needed. After the testing is completed (and unbeknownst to many parents), the lab retains the DNA for 23 years. The result is a rich trove of genetic information that has uses far beyond disease screening—including as evidence in criminal cases. In one instance, state police subpoenaed the laboratory for the DNA of a newborn in order to link the baby’s father to a 15-year-old crime. In turning over the infant’s DNA, the laboratory provided a critical biological link to identify a suspect. The New Jersey public defender’s office sued to challenge this DNA matching and the laboratory’s lack of transparency, and state lawmakers are working to limit the retention of genetic data to two years. The case—and others like it—demonstrates the danger of large-scale biometric collection. If available, DNA samples will be used for prosecution.

Soon, blood samples may not even be necessary. Next-generation DNA matching can snatch genetic material from the physical environment to test it. Since we all leave our DNA everywhere we go, this will make collection both easier and largely inescapable. New technologies are also allowing DNA to be processed much more quickly. Developed for military use (to identify human remains of US soldiers on the battlefield), these technologies can help identify or exclude suspects and victims in minutes rather than months, offering police valuable clues early in the investigation of a crime.

Biometrics are not new, of course. Police have relied on DNA for decades, and fingerprints longer than that. Digitization at scale, however, has changed the game. More powerful computers can search through massive databases with relative ease, combining DNA evidence with location information and other personal data. To understand the gravity of these shifts, consider your fingerprints. It has long been technically possible for investigators to lift fingerprints off various surfaces, upload those fingerprints to the national NGI database, and create a map of identified people. But doing so would be difficult, time-consuming, and perhaps not very revealing. New DNA technology gives police more information with significantly less effort. So does another growing area of biometric collection: face recognition.

The potential of face recognition for law enforcement can be seen in a run-of-the-mill theft case in Manhattan. On an ordinary day in September, Luis Reyes strolled into an apartment building on West 113th Street, entered the mail room, and stole a few packages. His crime would have gone unsolved but for security footage that recorded the theft. Detectives converted the surveillance video into still photographs and ran those photos through the NYPD’s face-recognition system. The system alerted to a match, and the detective obtained the police file associated with the suspect. The detective could see that the photo in the police file indeed matched the photo captured from the video. An arrest was made. Case closed. Note, however, that everyone else in the building could also be identified using the same technology. Whether it is a mailroom or a medical waiting room, face recognition removes anonymity and enhances surveillance power.
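The “match” in a system like the NYPD’s is typically a nearest-neighbor search over numerical face embeddings. The sketch below shows that comparison step with invented vectors and an arbitrary threshold; the department’s actual system is proprietary, and its internals are not public.

```python
import numpy as np

# Generic face-recognition matching: each enrolled face is reduced to
# an embedding vector, and a probe image is compared against the
# gallery by cosine similarity. Vectors here are random stand-ins;
# real systems derive embeddings from deep networks.

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery of enrolled embeddings (e.g., arrest photos).
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# A probe from a surveillance still: person_42's vector plus noise,
# standing in for a new photo of the same face.
probe = gallery["person_42"] + rng.normal(scale=0.3, size=128)

best_id, best_score = max(
    ((pid, cosine_similarity(probe, emb)) for pid, emb in gallery.items()),
    key=lambda pair: pair[1],
)

MATCH_THRESHOLD = 0.8  # arbitrary; where it is set drives false matches
if best_score >= MATCH_THRESHOLD:
    print(f"Candidate match: {best_id} (similarity {best_score:.2f})")
else:
    print("No match above threshold")
```

Everything turns on that threshold: set it low and the system generates candidate matches for faces that merely resemble each other, which is how misidentifications happen.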

Across the river in New Jersey, a much less promising—if not outright terrifying—case unfolded. A man named Nijeer Parks was falsely arrested for shoplifting after police ran a photo identification card collected at the scene through their face-recognition system. Parks was completely innocent—he’d been 30 miles away when the crime occurred—but he spent 10 days in jail before his lawyers could prove the mistake. The case is troubling on multiple levels. First, police accepted what turned out to be a fake identification card as a real photograph of the suspect. Second, they sought a warrant to arrest Parks based solely on a face-recognition match to the fake photo. Third, a judge signed off on that arrest warrant without demanding more evidence. Finally, Parks had to spend $5,000 on a lawyer to convince the legal system that they had the wrong man. Unfortunately, Parks is not alone. Several other men have been falsely arrested based on erroneous face-recognition matches, and there are likely more cases we don’t know about.

Notably, in both the New York and New Jersey cases, humans were “in the loop” when it came to identifying the suspect, but the algorithmic identification drove the suspicion. In addition, neither case involved a terribly serious crime. If this recently adopted technology is already being used to prosecute low-level offenses, it’s easy to imagine it becoming the default investigative tool in future years, particularly given that more and more of our public life is lived under video surveillance. This is troubling, because as the Georgetown Law Center on Privacy and Technology has reported, face-matching systems are rife with error, in terms of both the quality of input photos and the accuracy of image matching. An NYPD investigator once substituted the actor Woody Harrelson’s face for a suspect’s because they looked similar, and claimed a match. And because early AI models were largely trained on datasets of white male faces, they are even less accurate when identifying women and people of color.

Age and hairstyle can throw off the system, too. Yet face-recognition matching has been used in many high-profile cases, including the prosecutions of the January 6 rioters at the US Capitol and deportation investigations. Biometric databases are just the beginning. In addition to real-time face recognition, which can identify members of a crowd on sight, there are technologies that can identify a person by their gait, or even by their perceived emotional affect. The latter are being sold to police as tools for preventing crime. The pitch is that by analyzing someone’s facial expressions or mannerisms, the algorithm could identify would-be mass shooters and alert police, who could intervene before the violent act. Of course, maybe the person flagged was just having a bad day—which is now about to get a whole lot worse.

Our ability to control our own bodies is core to human autonomy and identity. You might think, then, that our bodies and the data they produce—from our sleep patterns to our DNA—would receive significant constitutional protection. You would be wrong.

Part of the problem, as we’ve seen, is that we live our lives in public. Whenever we go to work, the grocery store, the gym, or the bar, we are exposing our faces to the world, sharing our outward-facing identity with everyone present. We shed DNA every time we touch or eat anything or sit anywhere. If we do these things while wearing a smart device, our location maps onto our digital health and biometric trails. Under most theories of the Fourth Amendment, anything that happens in public is free for others, including police, to watch. This is so even if we aren’t purposely exposing our bodies’ intimate secrets in public—we just can’t help doing so.

The law has not quite figured out what to do with this conundrum. As a matter of constitutional law, the Fourth Amendment has not spoken to large-scale biometric surveillance in public. As a matter of statutory law, the federal government has not agreed on a response. The same is largely true when it comes to genetic surveillance through shed DNA and the digital trails created by our smart health devices. This is a significant problem, because evidence from face-recognition systems, shed DNA, and smart devices is already being introduced into criminal cases.

Courts will need to grapple with these issues sooner rather than later.

Traditionally, the Fourth Amendment has not protected publicly exposed human attributes from police observation. In a 1973 case, the Supreme Court wrote: “Like a man’s facial characteristics, or handwriting, his voice is repeatedly produced for others to hear. No person can have a reasonable expectation that others will not know the sound of his voice, any more than he can reasonably expect that his face will be a mystery to the world.” Extending that logic, if a police officer can identify a suspect on the street without violating that person’s expectation of privacy, why shouldn’t a face-recognition camera be able to do the same?

This might have been a reasonable stance in 1973, when cameras were expensive and produced grainy images on film. It’s far less reasonable in 2026, when police can deploy tens of thousands of high-definition cameras, networked together and equipped with sophisticated face-recognition algorithms. If, as the Supreme Court has said in Carpenter v. United States and United States v. Jones, long-term tracking by cell signal or GPS is a search for Fourth Amendment purposes, one might think long-term tracking via face recognition is also a search. Both use some unique identifier to track a person’s location over time. Face-recognition camera systems might, in fact, reveal much more than the systems at issue in Carpenter and Jones.

After all, in addition to location, cameras can also capture video of what you were doing at the location. Yet, as it currently stands, there is no clear Fourth Amendment protection from having your face scanned in public or matched against some face-recognition system.

Your genetic code is about as sensitive as it gets, revealing clues about your health and biological makeup. Genetic code is also stored within a person—and “persons” are explicitly protected under the text of the Fourth Amendment. In a predigital age, the Supreme Court held in Schmerber v. California that forcibly withdrawing blood from a suspect to test its alcohol content was a search subject to Fourth Amendment limits, stating that the “overriding function of the Fourth Amendment is to protect personal privacy and dignity against unwarranted intrusion by the State.”

In subsequent cases involving breathalyzers, the Court also treated forced exhalations as searches. Roadside breath tests, which involve the collection of biological material, are generally permissible, but that material retains some Fourth Amendment protection. Urine drug screening, too, has been deemed a Fourth Amendment search, requiring the government to offer some justification for the collection and testing.

At the core of these constitutional protections is the recognition that the government should not compel people to reveal their biological secrets without a warrant—though there are some caveats. The Supreme Court has largely allowed the government to require drug testing of federal employees and others whose jobs affect public safety, such as airline pilots. In Maryland v. King, the Supreme Court allowed Maryland police to collect DNA through cheek swabs from anyone arrested for a serious crime in the state, reasoning that this was analogous to fingerprinting a suspect when they are taken into police custody. And, of course, with a warrant police can compel suspects to produce any of these biological products (blood, breath, urine).

While DNA inside the “person” is protected, DNA abandoned by that person is essentially up for grabs. Courts have allowed police to collect “abandoned DNA,” meaning the DNA from skin or hair or saliva that humans necessarily shed as they go about their lives. As you might imagine, since everyone sheds DNA every day, this is a large loophole for police. It might make sense that a person who breaks into your home to steal your stuff would not have an expectation of privacy for the DNA they leave behind there. But the same logic holds for the DNA in your office bathroom, or on the coffee cup you just tossed into a trashcan on the sidewalk. Police can collect and test that biological material without having to show probable cause that it is connected to a crime.

When it comes to the data revealed by our devices, the legal issues are trickier still. Remember, the Fourth Amendment protects personal property—the objects we possess—as “effects.” But what happens when the data collected by our effects is derived from our person? When our smartwatch tracks our heartbeat, it’s our heart health, not the watch, that we are concerned about revealing. If considered part of the person, this data would likely require a warrant. If not—or if it was voluntarily shared with a third party—the Fourth Amendment question becomes more difficult.

The closest case on tracking physical bodies was decided on rather narrow grounds. In Grady v. North Carolina, the Supreme Court was asked to determine whether permanently strapping a GPS device onto a convicted sex offender as a condition of supervised release violated his Fourth Amendment rights. The Court held that it did, but only on the grounds that the physical placement of the device on Grady constituted a search.

Following the narrow logic of Jones, the Court stated that “it follows that a State also conducts a search when it attaches a device to a person’s body, without consent, for the purpose of tracking that individual’s movements.” Left open was the harder question of whether the interception of data from a similar tracking device (say, from a smart pacemaker or a Fitbit) would be a search for Fourth Amendment purposes. All of this is to say that the short answer to whether our biological information and data are available for police collection is yes. Without a warrant, there are open questions about whether face prints and medical data shared with third parties should be considered abandoned, like shed DNA, and thus accessible for collection. With a warrant, all biological material is available for criminal investigation.

Even more concerning is that the scale and scope of digital surveillance technologies continue to advance. Whereas a human police officer can observe a few hundred people a day, an AI-assisted face-recognition system can identify millions of faces in the same period. While DNA from a crime scene can identify the few people who were present there, an investigator running that DNA through the FBI’s databases can compare it against millions of biometric samples. The power of police to search and surveil more people is growing far faster than any constitutional protection.

The story of our bodies evolving into sources of biometric evidence is a familiar one in the digital age. It is in part a story of our own choice to share our personal biometric data, and in part a story of the companies that have commercialized that data. It is also a story of government collection, digitization, and centralization of police data. As in many of the examples in this book, the scale of the problem has been supercharged by technological change, which in some cases we’ve welcomed freely and in other cases have found imposed on us.

Let’s begin with our culpability. Over 30 million Americans have voluntarily given their DNA to a for-profit, private corporation, ostensibly to gain insight into their genetics and heritage. For a small fee and their genetic code, they can learn about their ancestors and family medical history, and, on occasion, discover family secrets. The problem, of course, is that giving up your genetic data to a company means giving up control of what happens to that information. If police want to link you to a crime, a for-profit company with legal obligations to law enforcement now has DNA evidence that can help them do so.

As for face recognition, almost every adult in the United States has uploaded a picture online. Your LinkedIn page links your best professional photo to your name and work history. The pictures and videos you uploaded to Facebook, Instagram, Snapchat, X (formerly Twitter), Threads, Bluesky, TikTok, or YouTube put your face out there on the web, where they also connect you to other people, places, and events.

When Hoan Ton-That set out to design a new face-recognition system, he turned to those billions of images, scraping them from the internet and using them to train the artificial intelligence that became Clearview AI. The accuracy of the technology’s matching ability was impressive. Perhaps the only thing scarier than a face-recognition system that gets things wrong is one that gets it right. As Kashmir Hill reported in her book Your Face Belongs to Us, licenses for the technology were given free to law enforcement. After an early spike in interest, many (but not all) local police departments have backed away from using the service because of the ethical concerns it raises—both the possibility of police misusing the technology and the fact that it was trained on copyrighted images taken without permission. But, of course, initial police reluctance can easily shift with the political winds, and federal authorities have been known to use Clearview and equivalent services. At the same time, private face-recognition services have continued to expand into other areas of our lives.

Imagine accompanying your daughter to the Rockettes’ famed “Christmas Spectacular” at Radio City Music Hall in Manhattan, only to be summarily kicked out by face-recognition technology. That is what happened to a lawyer named Kelly Conlon in 2022. Conlon’s law firm was involved in a lawsuit against MSG Entertainment, which owns several large event venues, including Radio City and Madison Square Garden.

Apparently, the company’s executives decreed that any lawyer working for any law firm involved in ongoing litigation against the company should be barred from attending any event—from Knicks games to pop concerts and iconic Christmas celebrations. When Conlon entered Radio City with her daughter’s Girl Scout troop, face-recognition technology matched her face to her picture on the firm’s website, and security blocked her entry. The ban covered almost 90 law firms and thousands of lawyers, whether or not they personally had anything to do with the cases against MSG Entertainment. Face recognition was used to bounce lawyers who were just doing their jobs from events they had paid to attend.

This story is not just about one petty litigant. Many other event venues use face recognition for security purposes. Visitors to stadiums in Cleveland, Atlanta, San Diego, and Miami can opt into face recognition to avoid long security lines and get to their seats more quickly. Some venues envision using your face not only as your entry ticket but also for concessions, linking your bar tab to your face (and wallet). Of course, those cameras will also know who has been overserved with alcohol or gotten into a brawl; it’s all potential evidence to be used against you.

Some of the most sophisticated video surveillance systems in the world come from big-box stores like Target and Walmart. Investigators can read the time on your wristwatch as you attempt to steal a different one. The technology is that good. Many of these stores use face recognition to guard against theft, keeping a most-wanted list of suspected shoplifters who can be identified and making the footage available to law enforcement interested in following up with prosecution.

Again, sometimes such technology gets things wrong. Rite Aid has been banned from using face recognition for five years because the FTC found that the company’s flawed system erroneously targeted innocent women and people of color for suspicion. All of these companies, from Clearview AI to 23andMe, are in the data extraction business—they take data that is either given to them freely or taken from another source and monetize it. The services they provide add value, but they come with real costs to privacy and anonymity. Once commodified, biometric data becomes just another thing to be bought, sold, or used by third parties, including the government.

Parallel to the rise of private monitoring systems are governmental face-verification systems. Face-recognition kiosks now guard international borders, collecting data from all who pass. Government buildings and other secure facilities use face recognition to limit access. The logic is that these systems are essentially just replacing a security guard asking you to sign a logbook after cursorily looking at your ID. Because the person must be present to verify their identity in this way, the matching technology does not reveal much more than a human guard would naturally observe. But while these systems may have begun as a way to speed up the line at customs, their ability to expand into new areas of life is limited only by money and political will. Those checkpoints are training and improving AI face-recognition systems that can eventually be placed pretty much anywhere. Think about how many places you sign into each month, from schools to hospitals to office buildings. Sure, it’s all a form of “security theater” (making you feel safe without actually making you safe), but it will soon be replaced by “surveillance theater” or worse. A government that wanted to weaponize the same power could easily bar certain people from federal buildings or restrict their travel through airports and train stations.

The architecture of face-recognition surveillance is being built with few limits on its use. A 2022 Government Accountability Office report found that 18 different federal agencies used face-recognition technology. The Departments of Justice, Homeland Security, Treasury, and Interior used face recognition for domestic law enforcement, while the Departments of Agriculture and Commerce and the Environmental Protection Agency used it for digital access. Ten of the 18 agencies are expected to expand access to face recognition in the coming years as part of an identity management system. And, of course, a government that wanted to control its citizens could use the existing technology to restrict protest, limit travel, and monitor dissent.

Face recognition is a good example of surveillance that generates broad and diffuse privacy harms. Police see face recognition as a tool for catching the lone bad guy. But everyone else is captured in the net. After all, to capture a single face on video, you need a camera system that scans everyone. That means even though only the suspect will experience the tangible cost of surveillance (getting caught), everyone else loses that much more of their privacy. This collateral, collective harm is a community harm, and not one that is easily addressed under existing law.

The same is true when it comes to DNA databases. When you upload your DNA to a private database like GEDmatch, you are also uploading clues about close family members, whether they want their DNA in the government’s hands or not. Several high-profile cases have relied on this familial DNA evidence.

The science behind DNA evidence is complicated, but it works something like this: Traditional forensic DNA techniques rely on short tandem repeats (STRs), units of genetic material that repeat a different number of times in different people, allowing similar DNA samples to be matched. Under most protocols, analysts are looking for between 16 and 27 matching STRs, which provides a very high degree of certainty that the two samples were taken from the same person (or their identical twin). In one North Carolina sexual assault trial, the expert testified that “the probability of randomly selecting an unrelated individual with a DNA profile that matches the DNA profile obtained from [the sample] … is approximately one in 28.0 thousand trillion in the North Carolina Caucasian population, one in 398 trillion in the North Carolina black population, one in 6.00 thousand trillion in the North Carolina Lumbee Indian population, and one in 330 thousand trillion in the North Carolina Hispanic population.” Such evidence is highly convincing to a jury, and it’s the typical kind of DNA evidence that we think about being used in trials. All well and good.
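Those astronomical figures come from the “product rule”: multiply the frequency of the suspect’s genotype at each matching locus, treating the loci as independent. Here is a minimal sketch of that arithmetic in Python, using a handful of loci and invented frequencies; real labs use published population tables.

```python
# "Product rule" arithmetic behind random-match probabilities.
# The genotype frequencies below are invented for illustration;
# forensic labs use published frequency tables for each STR locus.

locus_genotype_freqs = {
    "D8S1179": 0.031,  # share of the population with the suspect's
    "D21S11":  0.058,  # repeat counts at each locus (hypothetical)
    "TH01":    0.081,
    "FGA":     0.022,
    "vWA":     0.044,
}

# Assuming independent loci, the chance that a random unrelated
# person matches at every locus is the product of the frequencies.
p_match = 1.0
for freq in locus_genotype_freqs.values():
    p_match *= freq

print(f"Random match probability: {p_match:.2e}")
print(f"Roughly one in {1 / p_match:,.0f}")
```

Five loci already yield odds of one in several million; with the 20-plus loci used in modern protocols, the product reaches the “one in trillions” range quoted to the North Carolina jury.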

In contrast, familial genetic genealogy involves looking for matching single nucleotide polymorphisms (SNPs) between the biological samples. These are mutations in the DNA that are shared among family members. By comparing how many shared DNA points exist within samples, investigators can make connections between them, suggesting that these two people might be cousins, brothers, or great-grandfather and great-grandson (the last link traced through the male Y chromosome). At best, genetic genealogy gets you close to a suspect. From there, analysts have to narrow down the connection using old-fashioned detective work.
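In practice, genealogists total up the DNA two profiles share, measured in centimorgans (cM), and map that total to a set of plausible relationships. The sketch below uses rough, illustrative thresholds, not any actual database’s cutoffs.

```python
# Rough sketch of relationship inference from shared DNA.
# The cM thresholds are illustrative approximations of the ranges
# genealogists use; real tools report ranges of possible kinships.

SHARED_CM_THRESHOLDS = [
    # (minimum shared centimorgans, suggested relationship)
    (3300, "parent/child or identical twin"),
    (2100, "full sibling"),
    (550,  "first cousin"),
    (150,  "second cousin"),
    (40,   "third cousin"),
]

def suggest_relationship(shared_cm: float) -> str:
    """Map total shared DNA (in cM) to a plausible relationship."""
    for threshold, relationship in SHARED_CM_THRESHOLDS:
        if shared_cm >= threshold:
            return relationship
    return "distant relative or unrelated"

# A crime-scene profile shares 620 cM with an uploaded profile:
# likely a first cousin, which narrows the family tree that
# old-fashioned detective work must then search.
print(suggest_relationship(620))  # first cousin
print(suggest_relationship(45))   # third cousin
```

A hit like “first cousin” doesn’t name the suspect; it hands investigators a family tree to climb.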

The sheer size of these databases has been game-changing for police. As the Los Angeles Times reported, by 2019 consumer DNA had been used to close 66 cases involving 14 suspected serial killers and rapists. But while genetic genealogy was initially used only for the most serious of cases, the cheaper and easier it becomes, the more often it will be used. The data analyzed by the Los Angeles Times showed that DNA was also used to identify the remains of a miscarried pregnancy, a troubling development in an anti-abortion environment where even women who experienced spontaneous miscarriages have been subject to prosecutorial scrutiny.

Genetic testing companies vary in their willingness to provide data to law enforcement. Several companies allow police access as a matter of practice, while others attempt to limit such access. In Orlando, a police officer frustrated with a new policy limiting police access to GEDmatch went to court to obtain a search warrant allowing him to search through all the million-plus DNA samples in that database. To find one suspect, the officer searched more than a million DNA samples from people (like your relatives) who gave no meaningful consent to such an action. But because the officer had a warrant, those people have little recourse. As he casually told a reporter: “It’s Big Brother, but Big Brother’s been here for decades … Everyone’s trying to focus in on this because it’s DNA, but it’s no different than anything else that we do in our everyday lives. Police with a piece of paper and the judge can override almost anything.”

As of 2018, almost 90 percent of white people in the United States could be identified through genetic genealogy, even if they had not personally given their DNA samples to a commercial database. In part because of the impossibility of truly opting out of these databases—you can’t stop your cousins from mailing in the tube of spit that might eventually implicate you in a crime—governments have started to put limits on their use. Maryland, for example, has limited genetic genealogy investigation to the most serious crimes, such as murder, rape, and felony sexual assault, and the state requires that DNA collected for a case be destroyed after use. Those who break Maryland’s laws concerning DNA evidence can face criminal prosecution. Montana and Utah passed less-sweeping laws requiring police to get a judicial warrant before accessing commercial DNA databases. At the federal level, the Department of Justice under the Biden administration was also attempting to rein in use of the technology. The idea of searching the nation’s entire family tree for one bad apple was too much.
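The arithmetic behind that 90 percent figure is sobering. Each of us has only a few close cousins but a great many distant ones, so even a database covering a small slice of the population probably contains someone in your family tree. A back-of-the-envelope model follows; the cousin counts are assumed round numbers, not measured values.

```python
# Why opting out of genetic genealogy is nearly impossible: with
# enough distant cousins, some relative has probably uploaded DNA.
# Cousin counts are illustrative assumptions, not census figures.

COUSIN_COUNTS = {
    "first cousins":  5,
    "second cousins": 30,
    "third cousins":  200,
}

DATABASE_COVERAGE = 0.02  # assume 2% of the population has uploaded

p_no_relative = 1.0
for label, count in COUSIN_COUNTS.items():
    # Chance that none of these relatives appears in the database.
    p_no_relative *= (1 - DATABASE_COVERAGE) ** count

print(f"Chance at least one cousin is findable: {1 - p_no_relative:.1%}")
# Roughly 99%: your relatives' choices, not yours, set your exposure.
```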

Yet, even as governments and companies place limits on the use of genetic information from commercial databases, there are other ways to create a genetic surveillance net. One of the most interesting is a nonprofit that encourages people to donate DNA samples for the explicit purpose of criminal investigation. Two leaders of the genetic genealogy movement, CeCe Moore and Margaret Press, started the DNA Justice Foundation to replicate the scale of commercial DNA databases. The hope is that the database can be used to identify both victims and perpetrators of crime. Because participants will have actively consented to police use of their genetic information, the restrictions placed by courts or private companies won’t apply.

In many ways, the DNA Justice Foundation perfectly encapsulates the troubling relationship between self-surveillance and police surveillance. A private dataset of genetic material is being created with the express purpose of identifying people who did not put their DNA into the system. While the goal of solving cold cases is noble, the cost of giving the government unfettered access to this genetic information undermines biometric anonymity and enhances police power. As a private undertaking, the project avoids any legislative or constitutional oversight, existing outside a legal framework, subject only to whatever rules the individuals in charge deem just.

Our ability to capture and analyze more and more information about our bodies and our health has important upsides. Technological advancements like smart pacemakers and smart glucose monitors improve—and even save—lives. But the fact that such personal data is available to our doctors does not mean that it should be available to police. Perhaps the government should not be allowed to use our heartbeats against us. In the language of the Fourth Amendment, we might consider that unreasonable. Similarly, just because our faces can be scanned and sorted in public face-recognition systems does not mean that they should be (and certainly not without regulation). In a free society, such constant, persistent surveillance is arguably unreasonable.

The emergence of new technologies requires the development of new constitutional and statutory protections. The first state to enact a law protecting consumer biometric information was Illinois. Its Biometric Information Privacy Act (BIPA) has been a national example of how to regulate biometric surveillance by private companies. The law protects against the private collection of biometric identifiers like fingerprints, voiceprints, and scans of hands, faces, retinas, or irises without formal notice of collection and written retention policies. In addition, the law forbids selling or otherwise profiting from a person’s biometric identifier or biometric information. The law provides for civil liability if biometric information is shared without permission, which means that it cannot be easily commercialized or commodified without risking monetary damages. Lawsuits under BIPA have challenged corporate use of face recognition, retention of images, and biometric collection without consent, resulting in significant civil penalties against tech companies both big and small. The law is silent on government use of the same biometric data, however, leaving police access to it unaffected.

Some might argue that this is for the best. The stories here involve wrongdoers held to account. Face recognition, for all the risks of misidentification, has also identified guilty suspects. DNA and other biometric data have solved otherwise unsolvable crimes, granting victims some degree of closure. If something were to happen to us, we may well be glad our cousin’s DNA was in a database somewhere.

Still, our biometric data is perhaps the most personal data we have, and allowing police and others to have access to it carries significant costs for our privacy, security, and autonomy. Protests against the government take place in public, and face-recognition technologies will discourage dissent. Constitutionally protected activities, from praying at religious institutions to practicing at shooting ranges, can be virtually gated by government surveillance. We can ditch our cars or phones or Echo Dots, at least in theory. We can’t ditch our DNA, or our hearts, or our faces. That makes protecting them all the more important.


Excerpt adapted from Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance by Andrew Guthrie Ferguson. Published by arrangement with NYU Press. Copyright © 2026 Andrew Guthrie Ferguson.
