As the Trump administration seeks to sweep away obstacles to developing artificial intelligence, the president’s team has brought its zeal for the new technology to the federal government itself.
Orders came down from the White House budget office in April urging every corner of the government to deploy AI. “The Federal Government will no longer impose unnecessary bureaucratic restrictions on the use of innovative American AI in the Executive Branch,” the White House said in a statement announcing the push.
Officials across the government answered the call, according to a Washington Post analysis of more than two dozen recent agency disclosures on AI use. On top of automating rote tasks, government agencies have launched hundreds of artificial intelligence projects in the past year, many of them taking on central and sensitive roles in law enforcement, immigration and health care.
The Department of Homeland Security has adopted new, more sophisticated facial recognition tools. The FBI has purchased novel systems to sift through reams of images and text to generate leads for investigators. And the Department of Veterans Affairs is developing an AI program to predict whether a veteran is likely to attempt suicide.
Reversing — even scorning — the Biden administration’s caution, the White House has directed government departments to cut through any red tape that might slow the adoption of AI. “Simply put, we need to ‘Build, Baby, Build!’” the Trump administration’s AI action plan says.
Federal agencies are doing just that: The 29 that had posted data last week listed 2,987 active uses for AI by the end of 2025, up from 1,684 the year before. The disclosures are required by the budget office and provide basic details about each use of AI. Hundreds of those uses were marked as “high impact,” meaning they are being used as the main basis for making significant decisions or have implications for people’s rights or their safety, according to federal standards.
The White House argues the technology will make the government vastly more efficient, though the disclosures make it impossible to tell how well any of the thousands of tools are working. The public, meanwhile, remains deeply skeptical of the technology.
The administration’s focus on speed may come at the expense of ensuring the tools are being used safely, said Suresh Venkatasubramanian, a Brown University computer science professor. AI could spit out erroneous information, leading officials to make bad decisions, or a facial recognition tool could lead to someone being wrongfully placed on a watch list, he said. Venkatasubramanian, who worked on AI safety in the Biden administration, argued that officials previously placed a greater emphasis on oversight and managing risks.
“It’s not the use case itself that raises the question, it’s do you have the guardrails in place to use what can be very noisy and powerful tools in the right way,” he said. “Any particular use case — even the most innocuous sounding ones — could backfire.”
The White House Office of Management and Budget, which is overseeing the government’s AI rollout, did not respond to a request for comment. Its April memo directs agency leaders to ensure “that rapid AI innovation is not achieved at the expense of the American people or any violations of their trust.”
Turbocharged law enforcement
As the administration has dramatically ramped up its deportation efforts, DHS has increasingly turned to advanced technology to turbocharge its work. The department’s disclosures reveal a suite of facial recognition tools deployed in the past year and another system to help identify people to deport. In all, 151 AI use cases mention either “immigration” or “border” or were filed by immigration and customs agencies.
Immigration and Customs Enforcement, which is part of DHS, reported adding new facial recognition functions, including the Mobile Fortify app, which is used to scan individuals’ faces in the field. It also disclosed its use of an unspecified system to identify “vulnerable populations,” which the agency defined as including “unaccompanied minors who have crossed the border.”
ICE also said it began using a new generative AI system in June from the defense contractor Palantir that trawls through handwritten records such as rap sheets and warrants to automatically extract addresses for Enforcement and Removal Operations, the agency’s deportation division. The AI-powered system, called Enhanced Leads Identification and Targeting for Enforcement (ELITE), is not supposed to serve as a “primary basis for enforcement actions,” the agency said. Officers manually review the data and make decisions, it added.
Another Palantir system helps quickly review ICE’s tip line, summarizing and categorizing each tip, whatever language it is submitted in.
“Employing various forms of technology in support of investigations and law enforcement activities aids in the arrest of criminal gang members, child sex offenders, murderers, drug dealers, identity thieves and more, all while respecting civil liberties and privacy interests,” DHS previously said in a statement.
The Justice Department disclosed multiple tools designed to generate leads for investigators, including a facial recognition system at the FBI and another to prioritize tips coming into bureau offices around the country. But many of the department’s descriptions are vague: The output of one FBI tool is described merely as “text.”
Valerie Wirtschafter, a fellow at the Washington-based think tank Brookings, said a lack of detail in some agency disclosures makes it difficult to fully judge some of their more sensitive uses of AI.
The Justice Department and Palantir did not respond to requests for comment.
A Veterans Affairs boom
The Department of Veterans Affairs listed more high-impact uses of AI than any other agency, disclosing 174 such tools either in development or operation to revamp how it provides health care and benefits. The department said it is developing AI helpers to prepare patients for surgery, use computer vision to more precisely measure wounds and identify potential suicide risks that human clinicians might have missed.
Another system is designed to help veterans claim their benefits. “This project harnesses the power of artificial intelligence to analyze vast amounts of data, providing personalized recommendations and streamlined access to a wide array of veteran benefits,” the department said in its disclosure.
Pete Kasperowicz, a VA spokesman, said those four systems “are still being assessed for their viability and have not been tested or deployed.” He said the department uses AI only as a “support tool,” leaving final health care and benefits decisions to agency staff.
Chris Macinkowicz, an official at Veterans of Foreign Wars, a service group, said that while VA’s use of AI promises to help the agency serve millions of veterans more efficiently, it needs to be carefully overseen.
“Our experience has shown that, although AI can be a valuable tool, it is not infallible,” Macinkowicz said in an email. “Human judgment is essential to ensure accuracy, fairness, and accountability in decisions that have a direct and lasting impact on veterans and their families.”
The Department of Health and Human Services disclosed an additional 89 projects connected to medical care. They include using AI to oversee clinical trials and to track the availability of vaccines. The department did not respond to a request for comment.
Chatbots
Many government uses are similar to those available to the general public. Agencies operate at least 180 chatbots designed not only to help federal employees complete mundane tasks such as scheduling travel and getting IT help, but also to support them in more sensitive work, like navigating labyrinthine internal rule books. Several agencies are using similar tools to help with writing federal rules and deciding how to award contracts.
In a year when hundreds of thousands of federal employees were laid off or took buyouts under cuts engineered by the Trump administration, a Defense Department official described at a conference last month how one team used AI to finish a mandatory report despite losing the help of about 20 contractors.
“There’s four people, and guess what?” said Jake Glassman, a senior Pentagon technology official. “They generated the report, and I would dare anyone to see any type of difference on that.”
National security
The Pentagon is exempt from the disclosure process, but other government records show how it is aggressively accelerating its AI experimentation. In a memo issued last month, Defense Secretary Pete Hegseth ordered officials to avoid being hamstrung by undue concern about risk. Future AI contracts with vendors must allow for “any lawful use,” he wrote, without further usage constraints.
“We must eliminate blockers to data sharing,” the memo said. “… We must approach risk tradeoffs … and other subjective questions as if we were at war.” The Pentagon told vendors in recent weeks that it is seeking to acquire cutting-edge “agentic” AI systems that exhibit “decision-making capabilities” and “human-like agency” for its elite Special Operations forces. One potential use for such systems is to weigh various “constraints” that govern when units can initiate or continue combat and the risk of killing or injuring civilians.
“These constraints overlap and sometimes include conflicting guidance,” the department said in a request for industry input, adding that the AI agents should understand how certain constraints have priority over others.
The request said the tools are expected to adapt and learn in real time, though they will be prohibited from “online” learning in contexts such as “kinetic fires” — the use of live ammunition — “since it may lead to undesired behavior.”
The Defense Department did not respond to a request for comment.
Science and research
Government scientists are experimenting with using AI to solve problems in hundreds of niche areas, including eight related to whales and dolphins. Some at the National Oceanic and Atmospheric Administration are working on “Automated whale blow detections” — part of a population-tracking effort. (Some of these biologists are having fun, titling one project “Artificial Fintelligence: Automating photo-ID of dolphins in the Pacific Islands.”) Some 49 other projects use AI to evaluate satellite and aerial imagery to detect ice seals, track invasive species, estimate soybean yields, and locate cooling towers that might be vectors for the spread of Legionnaires’ disease.
NOAA did not respond to a request for comment.
Federal archivists have also turned to AI to help make the nation’s history more accessible.
Jim Byron, a senior adviser at the National Archives and Records Administration, said the agency launched an AI-powered tool last month to let the public search through newly digitized records. They include documents related to the assassinations of President John F. Kennedy and Martin Luther King Jr., as well as the disappearance of pioneering aviator Amelia Earhart.
Byron said in a statement that the agency plans to build on its work, calling the tool a “giant leap into the present.”
The post Trump set off a surge of AI in the federal government. See what happened. appeared first on Washington Post.