The face-recognition app Mobile Fortify, now used by United States immigration agents in towns and cities across the US, is not designed to reliably identify people in the streets and was deployed without the scrutiny that has historically governed the rollout of technologies that impact people’s privacy, according to records reviewed by WIRED.
The Department of Homeland Security launched Mobile Fortify in the spring of 2025 to “determine or verify” the identities of individuals stopped or detained by DHS officers during federal operations, records show. DHS explicitly linked the rollout to an executive order, signed by President Donald Trump on his first day in office, which called for a “total and efficient” crackdown on undocumented immigrants through the use of expedited removals, expanded detention, and funding pressure on states, among other tactics.
But despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, the app does not actually “verify” the identities of people stopped by federal immigration agents—a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.
“Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification, that it makes mistakes, and that it’s only for generating leads,” says Nathan Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project.
Records reviewed by WIRED also show that DHS’s hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition—changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor, who now serves in a senior DHS privacy role.
DHS—which has declined to detail the methods and tools that agents are using, despite repeated calls from oversight officials and nonprofit privacy watchdogs—has used Mobile Fortify to scan the faces not only of “targeted individuals,” but also people later confirmed to be US citizens and others who were observing or protesting enforcement activity.
Reporting has documented federal agents telling citizens they were being recorded with facial recognition and that their faces would be added to a database without consent. Other accounts describe agents treating accent, perceived ethnicity, or skin color as a basis to escalate encounters—then using face scanning as the next step once a stop is underway. Together, the cases illustrate a broader shift in DHS enforcement toward low-level street encounters followed by biometric capture like face scans, with limited transparency around the tool’s operation and use.
Fortify’s technology mobilizes facial capture hundreds of miles from the US border, allowing DHS to generate nonconsensual face prints of people who, “it is conceivable,” DHS’s Privacy Office says, are “US citizens or lawful permanent residents.” Like the circumstances surrounding its deployment to agents with Customs and Border Protection and Immigration and Customs Enforcement, Fortify’s functionality is visible today mainly through court filings and sworn agent testimony.
In a federal lawsuit this month, attorneys for the State of Illinois and the City of Chicago said the app had been used “in the field over 100,000 times” since launch.
In Oregon testimony last year, an agent said two photos of a woman in custody, taken with his face-recognition app, produced different identities. The woman was handcuffed and looking downward, the agent said, prompting him to physically reposition her to obtain the first image. The movement, he testified, caused her to yelp in pain. The app returned the name and photo of a woman named Maria, a match the agent rated “a maybe.”
Agents called out the name, “Maria, Maria,” to gauge her reaction. When she failed to respond, they took another photo. The agent testified the second result was “possible,” but added, “I don’t know.” Asked what supported probable cause, the agent cited the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” via facial recognition. The agent testified that the app did not indicate how confident the system was in a match. “It’s just an image, your honor. You have to look at the eyes and the nose and the mouth and the lips.”
Agents described the Oregon operation as part of “Operation Fortify the Border” and referred to enforcement in the Pacific Northwest specifically as “Operation Blackrose.”
“Facial recognition can be wrong, and it has been wrong in the past,” says Mario Trujillo, a senior staff attorney at the digital-rights nonprofit Electronic Frontier Foundation. “Here, the safeguards you’d expect—confidence scores, clear thresholds, multiple candidate photos—don’t appear to be there.”
The appearance of multiple identities is not an error but a feature of how Mobile Fortify operates in the field: Regardless of how agents use it, the system is built to generate candidate matches, not confirmations. Rather than exhaustively searching vast biometric galleries, Fortify converts a photo into a mathematical template and returns only entries that score high enough to be treated as possible matches. That threshold can be set manually or adjusted dynamically based on response-time requirements and system load.
When images are taken outside controlled conditions, even small differences—head tilt, lighting, shadow, cropping, focus, or expression—can alter the template and reshuffle the pool of candidates. Quicker responses also require smaller candidate pools, a significant trade-off of demanding real-time results.
When a poorly framed street photo causes the system to exclude the true subject early, it is a mathematical certainty that any match it returns will be a miss.
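To make those mechanics concrete, here is a minimal sketch of threshold-based candidate retrieval. The function names, the cosine-similarity score, and the threshold value are illustrative assumptions, not details of Fortify or NEC’s software; production systems use learned face embeddings and proprietary scoring, but the retrieval logic, and its failure mode, are analogous.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two biometric templates are (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_matches(probe: np.ndarray,
                      gallery: dict[str, np.ndarray],
                      threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return every gallery entry scoring at or above the threshold.

    The output is a ranked pool of candidates, not a confirmation:
    if a distorted probe (head tilt, shadow, blur) pushes the true
    subject's score below the threshold, that person drops out of
    the pool entirely, while lookalikes may remain in it.
    """
    scored = ((name, cosine_similarity(probe, template))
              for name, template in gallery.items())
    hits = [(name, score) for name, score in scored if score >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)
```

In a sketch like this, two photos of the same person taken seconds apart can yield different probe templates and therefore different ranked pools, which is consistent with the Oregon agent’s testimony of two scans returning two identities.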
Mobile Fortify’s primary function is to expand the volume of photos and other biometric data DHS collects, including fingerprints, by shifting collection from ports of entry to routine ICE encounters far from US borders. The data is stored in databases linked by a centralized platform known as the Automated Targeting System (ATS). CBP says the data is retained for up to 15 years but may persist longer if shared with other agencies beyond CBP’s control.
Among other biometric systems, ATS is linked to the Traveler Verification System (TVS), used by CBP for facial comparison at ports of entry, during pre-arrival vetting, and in other screenings tied to border crossings. Under CBP policy, photos and biometric data of US citizens who opt out of biometric identification are supposedly deleted from TVS in under a day.
Internal records show that data collected through Fortify may also be stored in the Seizure and Apprehension Workflow (SAW), which is described as a “biometric gallery of individuals for whom CBP maintains derogatory information.” Unlike TVS, SAW is used for intelligence purposes and lead generation. A “derogatory hit” does not indicate undocumented status, criminal conduct, or probable cause for arrest. US citizens are not explicitly excluded, and records are retained for up to 15 years.
ICE agents are instructed to photograph subjects for facial recognition before trying to match their fingerprints, which court records show is done in-office rather than on the street, even though fingerprints are a stronger biometric for confirming identity. The sequence prioritizes speed and ease of collection over positive identification. When fingerprints are taken, they are routed through ATS to the IDENT database and retained for a minimum of 75 years.
Records also indicate the existence of another derogatory watch list controlled by CBP and fed by Fortify, known as the Fortify the Border Hotlist. The list is mentioned in only a single publicly released document and was first revealed by 404 Media last year. The record does not describe the criteria for placement on the watch list or any removal or appeals process. It is unclear whether US citizens are included.
DHS did not respond to questions about the criteria that govern the watch list, whether US citizens are added, or whether a redress process exists for individuals mistakenly included.
A letter released Tuesday by US senator Ed Markey warns that DHS officials have suggested building a database to catalog people who protest or observe immigration enforcement. It cites public statements and internal directives instructing agents to collect images and personal information on protesters and bystanders. Markey said in a statement on Wednesday that DHS has deployed an “arsenal of surveillance technologies” that it was using to monitor “both citizens and noncitizens alike,” calling it “the stuff of nightmares.”
“There’s a cascading set of problems with this app and what ICE and CBP are doing,” says Trujillo. “Field facial-recognition scans in the interior are incredibly invasive. You’re taking a measurement of a person’s face and comparing it against millions of photos in a database. It gives a veneer of certainty when there isn’t certainty behind the scenes.”
Trujillo also believes that “there’s a straightforward argument that DHS and its components are exceeding their authority here.”
Fortify relies specifically on matching algorithms developed by the NEC Corporation of America, the US subsidiary of a Japanese multinational headquartered in Tokyo, as first reported by WIRED last month.
Testing by federal scientists at the National Institute of Standards and Technology, conducted with DHS and CBP, shows face-recognition accuracy drops sharply when images are taken outside controlled settings, including for top-performing NEC models. The tests distinguish between “high quality visa-like photos” taken in immigration offices and “wild” images captured in real-world conditions, such as immigration lanes or at registered traveler kiosks.
At ports of entry, CBP relies on tightly controlled visa photos: fixed cameras, cooperative subjects, neutral expressions, plain backgrounds, and uniform lighting. Images are rejected if a subject’s head falls outside a narrow size range. Even highly accurate systems, NIST says, struggle with poor framing or head tilt.
Street photos taken by cell phone lack those controls. Lighting varies, angles shift, and motion blur is likely, shaped as much by the environment as by the steadiness of the person holding the phone. High resolution alone does not correct for these constraints.
A WIRED review of recent face-recognition patents assigned to NEC shows the company’s technology is designed less to conclusively verify identity than to operate at scale under imperfect conditions. The patents describe systems that convert face images into biometric templates and compare them against stored records using similarity scores and adjustable thresholds, with the explicit goal of maintaining capacity when image quality varies.
The patents also acknowledge a core trade-off. NEC describes tuning match thresholds and system behavior to balance speed, scale, and accuracy, noting that lowering thresholds can reduce delays and failed transactions while increasing the risk of false positives, and that tightening them can have the opposite effect. Thresholds may be adjusted dynamically based on operational factors such as system load, frequency of use, or performance speed.
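That tuning logic can be sketched in a few lines. The parameter names and values below are invented for illustration, not drawn from NEC’s patents; the point is only the direction of the trade-off.

```python
def adjusted_threshold(base: float = 0.6,
                       system_load: float = 0.0,
                       max_relaxation: float = 0.1) -> float:
    """Relax the match threshold as operational load rises (illustrative).

    At zero load the base threshold applies; at full load (1.0) it
    drops by up to `max_relaxation`, reducing delays and failed
    transactions at the cost of admitting more false positives.
    """
    load = min(max(system_load, 0.0), 1.0)  # clamp load to [0, 1]
    return base - max_relaxation * load
```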
To support real-time use, the patents describe systems configured to stop searching after a short, fixed time window—on the order of seconds—if no match is found. In those cases, the system may still surface the highest-scoring candidate pair for human review, even though the score was insufficient for an automated match. Those limits reduce computing demands and enable rapid responses, but they also constrain how thoroughly large databases are searched, producing results that are suggestive rather than definitive.
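A hedged sketch of that behavior, reusing the scoring function from the earlier example: the time budget and return format here are assumptions for illustration, not NEC’s actual implementation.

```python
import time

def timed_search(probe, gallery, threshold=0.6, time_budget_s=2.0):
    """Search until a score clears the threshold or the budget expires.

    If time runs out, surface the best-scoring candidate seen so far,
    flagged as below-threshold, for human review: a suggestive result,
    not a definitive identification.
    """
    deadline = time.monotonic() + time_budget_s
    best_name, best_score = None, float("-inf")
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
        if score >= threshold:
            return {"name": name, "score": score, "above_threshold": True}
        if time.monotonic() >= deadline:
            break  # budget spent: the rest of the gallery goes unexamined
    return {"name": best_name, "score": best_score, "above_threshold": False}
```

In a scheme like this, the larger the gallery and the tighter the deadline, the more of the database goes unsearched, which is why the results are suggestive rather than definitive.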
NEC did not respond to questions about its licensing of facial-recognition technology to US immigration agencies, including how its systems are designed to perform in uncontrolled field conditions, what safeguards or usage constraints it provides to government customers, and whether it evaluates civil-liberties or human-rights implications before licensing its products.
Emily Peterson-Cassin, director of the Demand Progress Education Fund, a pro-privacy nonprofit, warned that unchecked facial recognition threatens free expression and civil liberties. “Privacy safeguards are essential to stopping wrongful targeting by unvetted tools such as Mobile Fortify,” she says, “and are the minimum protection needed to prevent this technology from becoming a choke hold on our most basic freedoms.”
In the days and weeks after the start of Trump’s second term, DHS officials began dismantling policies and oversight checks that had constrained the use of facial recognition, including those aimed at enforcing congressionally mandated privacy protections. If DHS has an enterprise-wide policy today that governs when, how, and under what safeguards facial recognition can be used, it isn’t public.
Online archives show the last such directive, implemented in 2023, disappeared from the agency’s website three weeks after Trump’s inauguration. Among other constraints, Directive 026-11 stated that facial recognition should not be used as the sole basis for law or civil enforcement actions and that US citizens should have the right to opt out when collection isn’t for a law enforcement purpose.
“DHS has not publicly replaced its previous facial recognition directive, and it appears DHS has no policy or even restrictions on the use of facial-recognition technology,” says Jeramie Scott, senior counsel for the nonprofit Electronic Privacy Information Center and director of its surveillance oversight program.
The directive also prohibited “systemic, indiscriminate, or wide-scale monitoring, surveillance or tracking.” It said facial recognition could not be used to “profile, target, or discriminate against individuals for exercising their constitutional rights.” It forbade the use of tools that claim to analyze people’s faces to infer personal traits and characteristics (with a narrow exception for age). It also tied any use of facial recognition by ICE and CBP to a headquarters-level review and authorization process, to be carried out by DHS’s chief privacy and information officers, respectively.
Fortify was fast-tracked in May 2025, less than two months after Directive 026-11 disappeared. CBP and ICE privacy officers alone concluded that no new privacy assessment was required under federal law—an authority they did not previously possess. Records show that such determinations historically rested with the DHS senior director of privacy compliance, a headquarters official acting on behalf of the department’s chief privacy officer and independent of operational agencies.
A disclaimer in records first revealed by 404 Media documents the swap. CBP states that it “assumed responsibility for the review and adjudication” of privacy reviews on March 3, 2025, citing internal policy guidance issued by an office led by Roman Jankowski, who was appointed DHS’s chief privacy officer the day Trump was inaugurated.
Federal guidelines typically require a privacy assessment when an agency deploys a new technology that collects identifiable information about the public or materially changes how, where, or from whom that data is collected. They also flag “new uses of an existing IT system” that introduce “new privacy risks,” including changes that would open new avenues for data exposure.
Prior to joining the administration, Jankowski worked for the Heritage Foundation and its Oversight Project, where he was a contributor to the group’s Project 2025 blueprint for Trump’s second term. The document variously recommends merging ICE and CBP; weakening the department’s centralized oversight; and transferring civil rights and privacy review functions from DHS to the agencies it is meant to police.
“This cavalier approach to the use of facial recognition has real-world consequences to our privacy, civil liberties, and civil rights that are exacerbated by the undermining of what little oversight is in place,” Scott says. “The result, as we see with Mobile Fortify, is a failure to meaningfully scrutinize the technology.”
Senator Markey and colleagues this week introduced legislation aimed at prohibiting ICE and CBP from using certain facial-recognition and biometric surveillance tools, saying the agencies have built a sweeping surveillance apparatus that is being used far from the border to scan people without consent, accountability, or clear legal limits. The full text of the bill, short-titled the ICE Out of Our Faces Act, was unavailable at time of writing.
“Facial recognition technology sits at the center of a digital dragnet that has been created in our nation over the past year,” Markey said during a press conference on Wednesday. “It’s dangerous, it’s authoritarian, and it’s unconstitutional.”
Additional reporting by Matt Giles.