When it comes to AI, the Trump Administration has largely positioned itself as the opposite of the Biden White House—criticizing what Trump’s tech policy advisors saw as overly burdensome AI safety efforts and licensing regimes, and embracing an anti-regulation approach. Former Trump “AI and crypto czar” David Sacks best embodied this policy ethos. But the Trump Administration, according to multiple news reports, is now about to engage in a head-spinning policy pirouette. Driven by concerns about the national security implications of Anthropic’s new “Mythos” AI model, with its ability to identify and exploit cybersecurity vulnerabilities—as well as broader fears around cyber capabilities and dangerous misuse—the administration is reportedly considering oversight for advanced AI models. The policies under discussion include an executive order that would create a government-industry working group to examine how frontier AI systems should be evaluated before release.
At the same time, the Center for AI Standards and Innovation (CAISI) — the Trump administration’s renamed version of the Biden-era United States AI Safety Institute — announced partnerships with Google, Microsoft, and xAI to evaluate some AI models before deployment.
According to an agency press release, CAISI’s agreements with frontier AI developers “enable government evaluation of AI models before they are publicly available, as well as post-deployment assessment and other research.” The agency said it has completed more than 40 such evaluations, including on state-of-the-art models that remain unreleased.
In an interview on Fox Business this morning, White House National Economic Council Director Kevin Hassett said the administration is studying a possible executive order that would create “a clear road map” for how advanced AI systems should be evaluated before release.
“We’re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AIs that also could potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe — just like an FDA drug,” Hassett said. “Mythos is the first, but it’s incumbent on us to build a system so U.S. AI can be the leader in AI and be safe at the same time. That’s really pretty much what we’re working on almost full-time right now.”
From criticizing oversight to championing it
The current debate carries with it a strong sense of déjà vu. The original U.S. AI Safety Institute was created by Joe Biden through his November 2023 AI Executive Order, with the goal of helping the federal government evaluate and better understand frontier AI systems from companies like OpenAI, Anthropic, and Google. The order also invoked the Defense Production Act to require companies training the largest AI models to share certain safety testing results with the government.
In other words, the administration that once criticized Biden’s AI oversight efforts is now considering broadly similar policies, despite having rebranded and restructured the original U.S. AI Safety Institute (notably stripping the word “safety” from its name). The institute’s inaugural director, Elizabeth Kelly, stepped down shortly after Trump’s inauguration in January 2025 and subsequently joined Anthropic as head of “beneficial deployments,” one of several hires of former Biden officials that may have contributed to the acrimonious relationship between Trump’s tech policy team and Anthropic.
At the end of April, Chris Fall, who served as an Energy Department official in the first Trump administration, was tapped to lead the rebranded CAISI. A Commerce Department spokesperson said at the time that “Dr. Fall brings the scientific leadership needed to ensure America leads the world in evaluating frontier AI models and advancing the technical standards that protect our national and economic security.” Fall replaced Collin Burns, a former member of Anthropic’s technical staff, who was dismissed after just days on the job; unnamed Trump administration officials told reporters that they had not been informed of Burns’ appointment.
Fall spent nearly four years as vice president for applied sciences at technology research nonprofit MITRE.
“This is a 180 for the Trump administration, that has very explicitly been anti-any sort of regulation and also has explicitly tried to block states from enacting any kind of regulation,” said Rumman Chowdhury, CEO of Humane Intelligence and former U.S. Science Envoy for AI.
A focus on national security risks
Still, the renewed push for evaluations is being framed less around AI ethics concerns and existential dangers, both strong focuses of the Biden Administration, and more around immediate national security risks.
That backdrop includes the uproar over Anthropic’s Mythos model and a broader shift in Washington toward viewing frontier AI systems through the lens of cyberwarfare, infrastructure security, and geopolitical competition. Anthropic itself was labeled a national security threat by the administration after refusing to grant the Pentagon unrestricted use of its technology—a designation the company is now challenging in court. Trump recently struck a more conciliatory tone, telling CNBC that Anthropic was “shaping up” and that “I think we will get along with them just fine.”
Chowdhury said the current White House efforts to offer “sensible oversight” over frontier AI models may sound good, but the devil is in the details. “It depends on their interpretation of these words,” she said. “Evaluations are a policy tool, they are not actually data-driven. My concern is that this is another political tool that the administration wants to own and wield.”
But it remains unclear whether CAISI has the funding and authority needed to fulfill its mission. In 2024, The Washington Post published an investigation into the National Institute of Standards and Technology (NIST), the agency that houses CAISI, finding that budget constraints had left the 123-year-old institution understaffed in key technology areas and had left many facilities at its Gaithersburg, Maryland, and Boulder, Colorado, campuses below acceptable building standards.
At the time, Chuck Schumer, now the Senate minority leader, announced that an appropriations bill included up to $10 million for the establishment of the U.S. AI Safety Institute (USAISI) at NIST.
In January 2026, Congress approved funding increases for NIST’s AI work, including $55 million for AI research and measurement efforts and up to $10 million specifically to expand the institute, since rebranded as CAISI. But one policy analysis this year, from the conservative think tank America First Policy Institute, said CAISI remains underfunded compared with peer institutes internationally and lacks “appropriate funding.”
AI model vetting does not mean secure systems
The challenge is compounded by the fact that much of the government’s evaluation effort depends on cooperation from the same companies building the models.
“In 2024, BIML identified 23 LLM security risks that are located inside the black box of the frontier models (and thus managed by the vendors themselves),” Gary McGraw, CEO of the AI security nonprofit Berryville Institute of Machine Learning (BIML), said in an email to Fortune. “In our view, any regulatory guidance should systematically address these risks by opening the black box to scrutiny.”
McGraw added that BIML is “deeply concerned that the foxes might be asked to guard the chicken house even though they already designed and constructed it in secret.”
In addition, while AI model vetting is useful, it should not be mistaken for AI system security, said Rob van der Veer, founder of the OWASP (Open Worldwide Application Security Project) AI Exchange and chief AI officer at global technology consultancy Software Improvement Group.
“AI model vetting can motivate model makers to invest more in resilience, and it can help expose obvious weaknesses,” he said by email. “But AI models will remain fragile, no matter how much we test them…so yes, test the models. Vet them. Improve them. But design the system as if the model can still fail. Because it can.”