The leaders of some of America’s largest banks were warned by a top government official this week about a new artificial intelligence model from Anthropic that could lead to heightened risks of cyberattacks, according to three people briefed on the matter but not permitted to speak publicly.
Treasury Secretary Scott Bessent delivered the stark message on Tuesday morning to a small group of chief executives, including those from Bank of America, Citi and Wells Fargo, in a hastily arranged meeting in Washington. Mr. Bessent, the people said, cautioned the banks that allowing the new A.I. software to run through their internal computer systems could pose a serious risk to sensitive customer data.
The Federal Reserve chair, Jerome H. Powell, who has spoken publicly in recent weeks about the threat of cyberattacks against the financial system, also attended Tuesday’s meeting.
The warnings relate to a new artificial intelligence model that Anthropic has named Claude Mythos Preview. Anthropic has said the model is particularly good at identifying security vulnerabilities in software that human developers could not find.
At Tuesday’s meeting, the people briefed on the matter said, the bank executives were told that the new model might be so effective at finding security weaknesses inside banks that hackers or other bad actors could get their hands on the information and exploit it.
Anthropic itself has warned about the risks. The company said this week that the model was so powerful and potentially dangerous that it could not yet safely be released to the public and would instead be confined to a coalition of 40 companies that it called “Project Glasswing.”
That group includes at least one bank, JPMorgan Chase, the nation’s largest, which earlier said it would use the software “to evaluate next-generation A.I. tools for defensive cybersecurity across critical infrastructure.”
Jamie Dimon, JPMorgan’s chief executive, was invited to Tuesday’s briefing but skipped it for previously arranged travel plans, according to a person familiar with the matter.
The Trump administration and Anthropic are locked in a legal battle over the Defense Department’s recent designation of the company as a “supply chain risk.” The government issued that designation after Anthropic insisted on putting limits on the use of its A.I. technology in war.
In a statement, a Treasury spokesperson said, “This week’s meeting was convened by Secretary Bessent to initiate a process for planning and coordination of our approach to the rapid developments taking place in A.I.”
The existence of the meeting was reported earlier by Bloomberg News. The Fed declined to comment.
“We’re taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out,” Kevin A. Hassett, director of the National Economic Council, told Fox News on Friday. “There’s definitely a sense of urgency.”
Logan Graham, an Anthropic executive, said in a statement that the new technology would help “secure infrastructure that is critical for global security and economic stability.”
Rob Copeland is a finance reporter for The Times, writing about Wall Street and the banking industry.