Anthropic chief executive Dario Amodei met with White House Chief of Staff Susie Wiles on Friday, according to a person briefed on the meeting, as the federal government races to understand the national security implications of a powerful new artificial intelligence model called Mythos that the company says it has developed.
The meeting reflects the uneasy embrace between Anthropic and the Trump administration. The White House has sought to blacklist the company from doing business with the federal government after a dispute over the Pentagon's use of its AI model spun out of control this year.
At the same time, the government has been forced to engage with the company over the risks posed by its next-generation Mythos system, which Anthropic says has powerful and unprecedented abilities to find security weaknesses in computer code.
A White House statement called the meeting “productive and constructive,” and Anthropic said in its own statement that the discussion had covered “shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety.”
But there were no breakthroughs in the relationship between Anthropic and the administration at Friday’s meeting, the person briefed on it said, speaking on the condition of anonymity to discuss private talks.
“We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology,” the White House statement said. It added that the administration plans to host similar discussions with other leading AI companies.
Anthropic says the new Mythos model could help programmers fix long-dormant vulnerabilities — but it could also supercharge hackers targeting U.S. businesses and government agencies.
Since the release of OpenAI’s ChatGPT in late 2022, cybersecurity experts have recognized that artificial intelligence would become an increasingly powerful hacking tool. But Anthropic’s dramatic announcement of Mythos last week, and the company’s detailed claims about its apparent capabilities, have focused the attention of industry and government leaders around the world on the potential dangers of AI-enhanced cyberattacks.
Anthropic has said it has briefed U.S. government cybersecurity agencies on the new model. Officials at the White House and the National Institute of Standards and Technology have been studying its implications, according to an internal email obtained by The Washington Post and a person briefed on those discussions. Officials are exploring the possibility of giving more agencies access to a version of the model, according to the email.
Some officials are especially concerned, including Wiles, Vice President JD Vance and Treasury Secretary Scott Bessent, said the person, who spoke on the condition of anonymity to characterize private conversations. “Rightly so,” the person added.
The Trump administration has sought to speed the development of AI, trying to push aside regulations that could hold the industry back and position the United States to win what it sees as a race with China to dominate the technology. But the increasing power of the new generation of systems means officials are having to confront some of the downsides that the technology could bring.
President Donald Trump remained bullish about the prospects of the technology but was asked this week if some forms of AI should have a “kill switch.”
“There should be,” Trump told Fox Business.
A White House official said the administration is working with leading AI labs “to ensure their models help secure critical software vulnerabilities.”
Anthropic has said it will not immediately release the new model publicly to avoid enabling a rash of cyberattacks. Instead, the company formed a coalition of major tech companies and other big businesses including Apple, Microsoft and JPMorgan Chase to size up the risks Mythos poses and try to patch any holes. It called the effort Project Glasswing. The AI lab said Mythos had already unearthed thousands of vulnerabilities, affecting every major computer operating system and web browser.
“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” Anthropic said in its announcement. “The fallout — for economies, public safety, and national security — could be severe.”
OpenAI, one of the other leading labs, is finalizing a potent next-generation system code-named Spud. The company said this week that it was expanding a program to give computer security experts access to versions of its ChatGPT tools designed to help secure systems against attacks. (The Post has a content partnership with OpenAI.)
A British government AI safety agency concluded in a blog post Monday that Mythos represents a “step up” in the hacking ability of AI tools. In a controlled environment, it can carry out tasks that would take a human alone days of work, the agency said.
Mythos “is at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained,” researchers at the AI Security Institute wrote. But they said its ability to tackle harder targets remained unclear. Some security experts have said it is unclear how significant an advance Anthropic has made, because few outsiders have been able to properly test Mythos.
Peter Ranks, a former CIA cyber intelligence official, said the best-defended computer systems are likely to only get more secure with the help of AI. But he said during a panel discussion in Washington last week that less well-resourced systems, in which holes do not regularly get patched, will be at greater risk, because tools like Mythos will make less-skilled attackers more effective.
Computer security specialists across the economy should expect a deluge of software updates soon as participants in Anthropic’s Glasswing program rush to patch their systems, a panel of experts said in a report meant to prepare the industry for the coming storm.
The experts said it was time for the sector to rethink its approach to defense to account for the potential for high-speed, AI-powered attacks.
Peter Swire, a cybersecurity expert at Georgia Tech, said that attacks launched with help from AI have so far proved less effective than many in his field had expected. But defenders have been expecting that to change as AI tools advance. “The major defensive players are likely to succeed better than the doomsday scenarios would suggest,” Swire said.
Federal agencies have rushed to respond to the changing landscape. Bessent and Federal Reserve Chair Jerome H. Powell hosted the chief executives of major banks in Washington last week to urge them to take the risks seriously. Bessent said that he sees the power of the AI systems growing quickly and that some financial institutions are better at cybersecurity than others.
“I feel confident that everyone is now on board, rowing in the same direction to build up resiliency,” Bessent said in an interview with CNBC on Wednesday.
Until this spring, Anthropic had a close relationship with the federal government, having been the first of the major AI companies approved to work on the classified systems where agencies store their secrets. But as the Defense Department pushed for more control over how Anthropic’s Claude model could be used — seeking the freedom to use it for any lawful purpose — Amodei pushed back, saying he would not agree to the tool being used to power fully autonomous weapons or carry out mass domestic surveillance.
Amodei met personally with Defense Secretary Pete Hegseth to try to reach a deal, but the talks collapsed at the end of February. Trump blasted Anthropic’s leaders as “Leftwing nut jobs.”
A court in San Francisco ruled that the blacklisting was probably illegal, but a separate panel of federal judges in Washington issued a preliminary ruling allowing it to remain in place.
Claude is deeply enmeshed in the military’s systems. The same night that the Trump administration said it would cut ties with Anthropic, its system was put to use to aid the bombing campaign in Iran.
And in a sign that the company was trying to repair its relationship with the White House, disclosures filed this week showed that it had spent $130,000 in March to hire lobbyist Brian Ballard, who has close ties with the president’s team.