A federal judge in San Francisco blocked a Pentagon order Thursday labeling artificial intelligence company Anthropic a national security risk, saying officials had likely violated the law and retaliated against the firm for speaking publicly about how it wanted its technology to be used.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” District Court Judge Rita F. Lin wrote.
The immediate practical implications of the ruling are unclear, but it represents a clear victory for the AI lab, which has been locked in a bitter power struggle with the Defense Department over the military's use of its Claude system. Defense officials pushed the company to allow the technology to be used for any lawful purpose, but Anthropic wanted to bar its use in mass domestic surveillance and in fully autonomous weapons.
After the dispute spilled into public view, the Pentagon terminated talks with the company and issued orders labeling it a “supply chain risk.” The extraordinary move grouped a leading American AI firm alongside tech firms with links to hostile foreign governments. Defense Secretary Pete Hegseth said he was ordering all military contractors to stop using Claude — a declaration with far-reaching consequences for the tightly interwoven tech industry.
Anthropic said it was grateful for the judge’s ruling.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the company said in a statement.
The Defense Department did not immediately respond to a request for comment.
The post Judge blocks Pentagon order branding Anthropic a national security risk appeared first on Washington Post.