President Trump’s declaration of a “crime emergency” in Washington, D.C., will further entwine the U.S. military—and its equipment and technology—in law-enforcement matters, and perhaps expose D.C. residents and visitors to unprecedented digital surveillance.
Brushing aside statistics that show violent crime in D.C. at a 30-year low, Trump on Monday described a new level of coordination between D.C. National Guard units and federal law enforcement agencies, including the FBI, ICE, and the newly federalized D.C. police force.
“We will have full, seamless, integrated cooperation at all levels of law enforcement, and will deploy officers across the district with an overwhelming presence. You’ll have more police, and you’ll be so happy because you’re being safe,” he said at a White House press conference.
Defense Secretary Pete Hegseth, standing beside Trump, promised close collaboration between the Pentagon and domestic authorities. “We will work alongside all DC police and federal law enforcement to ensure this city is safe.”
What comes next? The June 2020 deployment of National Guard units to work alongside D.C. police offers a glimpse: citywide use of sophisticated intelligence-gathering technologies normally reserved for foreign war zones.
Some surveillance platforms will be relatively easy to spot, such as spy aircraft over D.C.’s closely guarded airspace. In 2020, authorities deployed an RC-26B, a military-intelligence aircraft, and MQ-9 Predator drones. The FBI contributed a Cessna 560 equipped with “dirtboxes”: devices that mimic cell towers to collect mobile data, long used by the U.S. military to track terrorist networks in the Middle East.
Other gear will be less obvious. The 2020 protests saw expanded use of Stingrays, another type of cellular-interception device. Developed to help the military track militants in Iraq and Afghanistan, Stingrays were used by the U.S. Secret Service in 2020 and 2021 in ways that the DHS inspector general found violated laws and policies concerning privacy and warrants. Agency officials said “exigent” circumstances justified the illicit spying.
Now, with federal agencies and entities working with military personnel under declared-emergency circumstances, new gear could enter domestic use. And local officials or the civilian review boards that normally oversee police use of such technologies may lack the power to prevent or even monitor it. In 2021, the D.C. government ended a facial-recognition pilot program after police used it to identify a protester at Lafayette Square. But local prohibitions don’t apply to federalized or military forces.
Next up: AI-powered surveillance
How might new AI tools, and new White House measures to ease sharing across federal entities, enable surveillance targeting?
DHS and its sub-agencies already use AI. Some applications—such as scanning trucks and cargo at the border for contraband, mapping human-trafficking and drug networks, and monitoring the border itself—serve an obvious public-safety mission. Last year, DHS used AI and other tools to identify 311 victims of sexual exploitation and to arrest suspected perpetrators. AI also helps DHS counter the flow of fentanyl; last October, the agency cited AI while reporting a 50 percent increase in seizures and an 8 percent increase in arrests.
TSA uses facial recognition to match the faces and identity documents of airline passengers entering the United States in at least 26 airports, according to 2022 agency data. Accuracy has improved greatly in the past decade, and research suggests even better performance is possible: the National Institute of Standards and Technology has found that some algorithms exceed 99 percent accuracy under ideal conditions.
But conditions are not always ideal, and mistakes can be costly. “There have been public reports of seven instances of mistaken arrests associated with the use of facial recognition technology, almost all involving Black individuals. The collection and use of biometric data also poses privacy risks, especially when it involves personal information that people have shared in unrelated contexts,” noted a Justice Department report in December.
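Some back-of-the-envelope arithmetic, using assumed numbers rather than figures from NIST or DHS, shows why even a highly accurate system generates mistaken matches when it searches one face against millions:

```python
# Illustrative only: assumed gallery size and false-match rate, not official figures.
def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
    """Expected number of wrong candidates when one probe image is compared
    against every photo in a gallery (a one-to-many search)."""
    return gallery_size * false_match_rate

# A hypothetical gallery of 10 million enrolled photos and a per-comparison
# false-match rate of 1 in 100,000 under non-ideal, real-world conditions:
print(expected_false_matches(10_000_000, 1e-5))  # -> 100.0 wrong candidates per search
```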
On Monday, Trump promised that the increased federal activity would target “known gangs, drug dealers and criminal networks.” But network mapping—using digital information to identify who knows who and how—has other uses, and raises the risk of innocent people being misidentified.
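A minimal sketch, using invented contact records rather than any agency’s data, shows how quickly contact-chaining pulls in people with no connection to criminal activity:

```python
# Minimal contact-chaining sketch with made-up data -- not any agency's tool.
from collections import defaultdict

records = [                       # hypothetical call/message pairs
    ("target", "dealer_a"), ("dealer_a", "dealer_b"),
    ("target", "barber"), ("barber", "neighbor"),
]

graph = defaultdict(set)
for a, b in records:
    graph[a].add(b)
    graph[b].add(a)

def within_hops(start, hops):
    """Everyone reachable from `start` within `hops` links."""
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph[node]} - seen
        seen |= frontier
    return seen - {start}

# 'neighbor' never contacted the target but lands in the two-hop network anyway.
print(within_hops("target", 2))
```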
Last week, the ACLU filed a Freedom of Information Act request concerning the use of two software tools by D.C.’s Homeland Security and Emergency Management Agency. Called Cobwebs and Tangles, the tools can reveal sensitive information about any person with just a name or email address, according to internal documents cited in the filing.
Cobwebs shows how AI can wring new insights from existing data sources, especially when no rules prohibit the gathering of large stores of data. That idea sat at the center of what, a decade ago, was called predictive policing, long before the capability existed to do it effectively.
The concept has lost favor since the 2010s, but many law-enforcement agencies still pursue versions of it. Historically, the main obstacle has been too much data, fragmented across systems and structures. DHS has legal access to public video footage, social media posts, and border and airport entry records—but until recently, these datasets were difficult to analyze in real time, particularly within legal constraints.
That’s changing. The 2017 Modernizing Government Technology Act encouraged new software and cloud computing resources to help agencies use and share data more effectively, and in March, an executive order removed several barriers to interagency data sharing. The government has since awarded billions of dollars to private companies to improve access to internal data.
One of those companies is Palantir, whose work was characterized by the New York Times as an effort to compile a “master list” of data on U.S. citizens. The firm disputed that in a June 9 blog post: “Palantir is a software company and, in the context of our customer engagements, operates as a ‘data processor’—our software is used by customers to manage and make use of their data.”
In a 2019 article for the FBI training division, California sheriff Robert Davidson envisioned a scenario—now technologically feasible—in which AI analyzes body-camera imagery in real time: “Monitoring, facial recognition, gait analysis, weapons detection, and voice-stress analysis all would actively evaluate potential danger to the officer. After identification of a threat, the system could enact an automated response based on severity.”
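Davidson’s scenario is a hypothetical, and so is the sketch below: a simple rules layer, with made-up detector names and thresholds, of the kind of automated escalation he describes.

```python
# Illustrative escalation logic with invented detector outputs and thresholds.
from dataclasses import dataclass

@dataclass
class FrameAnalysis:
    weapon_confidence: float      # hypothetical score from a weapons detector
    voice_stress: float           # hypothetical score from voice-stress analysis
    watchlist_match: bool         # hypothetical facial-recognition hit

def response_tier(frame: FrameAnalysis) -> str:
    """Map per-frame model outputs to an automated response (illustrative rules)."""
    if frame.weapon_confidence > 0.8:
        return "alert dispatch and request backup"
    if frame.watchlist_match or frame.voice_stress > 0.7:
        return "notify officer via earpiece"
    return "log only"

print(response_tier(FrameAnalysis(0.9, 0.2, False)))  # -> alert dispatch and request backup
```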
The data DHS collects extends well beyond matching live images to photos in a database or detecting passengers’ emotional states. ICE’s Homeland Security Investigations unit, for instance, handles large volumes of multilingual email. DHS describes its email analytics program as using machine learning “for spam classification, translation, and entity extraction (such as names, organizations, or locations).”
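DHS does not publish its models, but the underlying technique is common. A generic sketch using the open-source spaCy library shows what entity extraction looks like in practice:

```python
# Generic named-entity extraction with spaCy -- an illustration of the
# technique DHS describes, not the agency's actual system.
import spacy

nlp = spacy.load("en_core_web_sm")   # small pretrained English pipeline

text = ("Wire the payment to Maria Lopez at Acme Logistics "
        "before the shipment leaves Houston on Friday.")

for ent in nlp(text).ents:
    print(ent.text, ent.label_)      # e.g. 'Maria Lopez PERSON', 'Houston GPE'
```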
Another DHS tool analyzes social-media posts to gather “open-source information on travelers who may be subject to further screening for potential violation of laws.” The tool can identify additional accounts and selectors, such as phone numbers or email addresses, according to DHS documentation.
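The documentation does not say how the tool works internally; the simplest version of pulling “selectors” out of a public post is ordinary pattern matching, as in this toy example:

```python
# Toy selector extraction with regular expressions -- not DHS's tool.
import re

post = "DM me, text 202-555-0173, or try throwaway.acct@example.com"

emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", post)
phones = re.findall(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", post)

print(emails)  # ['throwaway.acct@example.com']
print(phones)  # ['202-555-0173']
```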
Meanwhile, ICE’s operational scope has expanded. The White House has increased the agency’s authority to operate in hospitals and schools, collect employment data—including on non-immigrants, such as “sponsors” of unaccompanied minors—and impose higher penalties on individuals seen as “interfering” with ICE activities. Labor leaders say they’ve been targeted for their political activism. Protesters have been charged with assaulting ICE officers or employees. ICE has installed facial-recognition apps on officers’ phones, enabling on-the-spot identification of people protesting the agency’s tactics. DHS bulletins sent to local law enforcement encourage officers to consider a wide range of normal activity, such as filming police interactions, as potential precursors to violence.
Broad accessibility of even legally collected data raises concerns, especially in an era where AI tools can derive specific insights about people. But even before these developments, government watchdogs urged greater transparency around domestic AI use. A December report by the Government Accountability Office includes several open recommendations, mostly related to privacy protections and reporting transparency. The following month, DHS’s inspector general warned that the agency doesn’t have complete or well-resourced oversight frameworks.
In June, Sen. Ed Markey, D-Mass., and several co-signers wrote to the Trump White House, “In addition to these concerning uses of sentiment analysis for law enforcement purposes, federal agencies have also shown interest in affective computing and deception detection technologies that purportedly infer individuals’ mental states from measures of their facial expressions, body language, or physiological activity.”
The letter asks the GAO to investigate what DHS or Justice Department policies govern AI use and whether those are being followed. Markey’s office did not respond to a request for comment.
Writing for the American Immigration Council in May, Steven Hubbard, the group’s senior data scientist, noted that of DHS’s 105 AI applications, 27 are “rights-impacting.”
“These are cases that the OMB, under the Biden administration, identified as impacting an individual’s rights, liberty, privacy, access to equal opportunity, or ability to apply for government benefits and services,” Hubbard said.
The White House recently replaced Biden-era guidance on AI with new rules meant to accelerate AI deployment across the federal government. While the updated guidelines retain many safety guardrails, they do include some changes, including merging “privacy-impacting” and “safety-impacting” uses of AI into a single category: “high impact.”
The new rules also eliminate a requirement for agencies to notify people when AI tools might affect them—and to offer an opt-out.
Precedents for this kind of techno-surveillance expansion can be found in countries rarely deemed models for U.S. policy. China and Russia have greatly expanded surveillance and policing under the auspices of security. China operates an extensive camera network in public spaces and centralizes its data to enable rapid AI analysis. Russia has followed a similar path through its “Safe Cities” program, integrating data feeds from a vast surveillance network to spot and stop crime, protests, and dissent.
So far, the U.S. has spent less than these near-peers, as a percent of GDP, on surveillance tools, which are operated under a framework, however strained, of rule-of-law and rights protections that can mitigate the most draconian uses.
But the distinction between the United States and those countries is shrinking, Nathan Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, said in July. “There’s the real nightmare scenario, which is pervasive tracking of live or recorded video, something that, by and large, we have kept at bay in the United States. It’s the kind of thing that authoritarian regimes have invested in heavily.”
Wessler noted that in May, the Washington Post reported that New Orleans authorities were applying facial recognition to live video feeds. “At that scale, that [threatens to] just erase our ability to go about our lives without being pervasively identified and tracked by the government.”