The arrival of a new generation of powerful artificial intelligence models, like Anthropic’s Mythos, has begun to crack the White House’s hard-line stance on promoting the technology, as top officials confront security risks posed by tools that can easily find flaws long buried in computer code.
President Donald Trump’s team is considering an executive order to tackle those risks, and National Economic Council director Kevin Hassett compared the approach this week to how the FDA tests new drugs “so that they’re released to the wild after they’ve been proven safe.”
Within hours, White House chief of staff Susie Wiles used her fourth-ever post on X in an apparent attempt to clarify the comments, saying that the president would not be “in the business of picking winners and losers.”
Details about how the system might work were still being hashed out, according to an official familiar with the planning, and experts said the FDA-style system outlined by Hassett could face major hurdles, including requiring Congress to pass a law.
The White House is aiming to buy time to address new risks as future generations of AI become more powerful, said the official, who spoke on the condition of anonymity to describe internal deliberations. The administration is not likely to seek a formal vote clearing each release, according to the official.
But there are other signs that the administration is looking to tighten its oversight of AI. The Commerce Department revitalized a Biden-era program to test new models this week, and an agency that handles IT contracts has proposed sweeping new language to give the government more control over AI used in federal work.
The potential shift has thrown the world of AI policy experts into a mad scramble, said Nathan Calvin, general counsel and vice president of state affairs at Encode, a nonprofit AI advocacy group. "People are trying to figure out how to ride this wave in a productive way without knowing exactly where it's all going."
“We just heard a bunch of top Cabinet officials saying the words ‘safety’ and ‘AI’ in the same sentence, which is not how the admin was talking about these issues even a few months ago,” said Calvin. “There’s a real sense where ‘safety’ isn’t a bad word anymore.”
In response to a request for comment, a White House official said the administration was collaborating with the top AI companies and “exploring the balance between advancing innovation and ensuring security.”
Some in the tech industry are alarmed about the prospect of new limits in an industry where leaps forward can be made in a matter of days. Hassett’s comparison to the FDA, which many in Silicon Valley see as a bloated entity that stands in the way of getting life-saving drugs to patients, was especially triggering.
“This would be a complete rejection of Trump’s current AI approach,” Neil Chilson, the head of AI policy at the Abundance Institute, wrote on X. “It would be more precautionary and innovation-chilling than anything the Biden admin ever proposed.”
Some of the moves would have been all but unthinkable when Trump took office last year with the support of many in the tech industry and pledged to ease hurdles for tech entrepreneurs. Trump quickly tore up a government order on AI safety, declaring that his administration would strip away barriers to the development of new technology erected under President Joe Biden.
The move set the tone for what was to come: San Francisco and Washington working hand in hand to unleash AI on the economy, and turning back efforts in Congress and state capitals to tie the new industry up in regulation. In December, the president signed an executive order that included the threat of lawsuits against states that passed what the administration deemed to be onerous regulations.
Even as skepticism of the technology and its makers divided Trump supporters, the White House forged ahead.
That view is now in flux. While the details of what the administration is planning remain uncertain, experts say it is increasingly clear that the White House is undergoing a shift in thinking.
“It’s a big reversal from where they were before, but it’s also the correct decision,” said Chris McGuire, a senior fellow at the Council on Foreign Relations. “The idea that we’d just put these extremely powerful tools out into the ether was always a non-starter.”
The government has a range of options to gain more control over the release of new models. Experts said it could seek voluntary cooperation from the companies developing cutting-edge models or use provisions in federal contracts to push the industry to adopt new safety measures. Officials could also turn to the Defense Production Act, a law that potentially gives the president sweeping powers over private companies in times of emergency.
The tumult emerged last month when Anthropic, one of the leading AI firms, announced that it had developed Mythos, a new model adept at finding security flaws in computer code. The company said it was too dangerous to release Mythos to the general public and that it was instead teaming up with a small group of businesses to help them work through the risks. Anthropic rival OpenAI quickly declared that its latest models had similar capabilities — claims backed up by reviewers at the British government’s AI Security Institute.
The Trump administration summoned the bosses of big banks to make sure they were taking the new technology seriously, and hosted leaders from the two companies for rounds of briefings — including a meeting between Wiles and Anthropic chief executive Dario Amodei at the White House. Officials chose the Office of the National Cyber Director, a small agency responsible for computer security, to oversee the government’s response.
The dilemma over security came amid a shift in leadership. The investor David Sacks, the face of the administration’s hands-off approach, had left his position as the White House’s AI and cryptocurrency czar in late March.
The move coincided with growing concern about the risks posed by Anthropic's new system among officials including Wiles, Vice President JD Vance and Treasury Secretary Scott Bessent, The Washington Post previously reported.
While officials continue to weigh an executive order, they have taken other steps that could give them greater influence over the industry.
This week, the Center for AI Standards and Innovation announced it was expanding on work launched under Biden to test models before they are released to the public, signing new agreements with Google, Microsoft and Elon Musk's xAI. The agency would not require the companies to meet particular standards but said the goal was to better understand national security risks as the technology develops.
In March, the General Services Administration, an agency that handles IT contracts for the government, issued a draft of a new standard clause that would give officials significant control over AI systems they use — including the ability to probe for chatbot responses that include “unsolicited ideological content.”
Jessica Tillipman, associate dean for government procurement law studies at George Washington University, called the approach a “sledgehammer.”
“Anybody who thinks this administration is still light touch has got their head in the sand,” she added.
But while voices calling for more oversight might have the upper hand for now, Daniel Castro, president of the Information Technology and Innovation Foundation, said the pendulum could swing back.
“Right now if you’re calling for more regulation of AI or more oversight of AI, there’s a compelling case to be made based on the recent evidence, but give it time, and I think we’ll get back to a more grounded position,” Castro said.
The post How a new breed of hacking tools is forcing a White House reset appeared first on Washington Post.




