AI “seems much worse for the math people than the word people,” Peter Thiel tersely said in 2024. He likely wasn’t anticipating that just two years later his Palantir co-founder, CEO Alex Karp, would use some decidedly flowery language to describe people he thought were stupid.
“If Silicon Valley believes we are going to take away everyone’s white-collar job … and you’re gonna screw the military—if you don’t think that’s gonna lead to nationalization of our technology, you’re retarded,” Karp said while speaking at the a16z American Dynamism Summit. “You might be particularly retarded, because you have a 160 I.Q.”
Karp was commenting on the question that has taken the AI world by storm: how far should AI companies go in collaborating with the government? A closer look at a recent dust-up between the Pentagon and two separate companies, Anthropic and OpenAI, helps explain Karp's displeasure.
Katherine Boyle, General Partner at a16z, moderated the breakout session, titled "AI in Defense of the West."
"If Silicon Valley believes we are going to take away everyone's white collar job—meaning primarily Democratic-shaped people that you might grow up with, highly educated people who went to elite schools or went to schools that are almost elite for one party—and you're going to screw the military, if you don't think that's going to lead to nationalization of our technology, you're retarded."
Whoa. So what’s bothering Mr. Karp?
Why This Hits Home for Palantir
While Karp could have chosen less offensive language to make his point, he was touching on a raw nerve—one that is acutely personal for Palantir. "You cannot have technologies that simultaneously take away everyone's job," he said, and then be perceived as screwing the military. That tension isn't abstract for Palantir; it is a live operational concern.
Companies including Anthropic, OpenAI, Google and xAI have all signed contracts with the Department of Defense, each with restrictions on whether their technologies can be used in settings that might violate their terms of service. The DOD has been in negotiations with AI companies to remove those restrictions and instead allow use of their tech for “all lawful purposes.” Karp has little patience for companies that treat that ask as a moral red line:
"There's a difference between U.S. military and surveillance," he said at the summit. "Despite what everyone thinks, Palantir is the anti-surveillance company," he said, pushing back on claims that the company—named after an all-seeing surveillance device from Lord of the Rings—is fundamentally about surveillance. Every technical expert knows this, Karp argued, but the proverbial "person online" simply has the wrong idea, "so I end up in every conversation that I don't want to be in."
Anthropic CEO Dario Amodei famously said he could not “in good conscience” support the “all lawful purposes” clause. Then, after hitting Anthropic with the threat of being deemed a military supply chain risk, the government penned a deal with OpenAI to use its tools in classified missions. (Anthropic is reportedly in talks with the Pentagon yet again, with the Pentagon confirming that Anthropic’s Claude Opus was key to its preparations for the historic strike by the U.S. and Israeli military on Iran.)
For Palantir, that sequence of events is not an abstraction—it is a direct operational threat. Palantir’s flagship AI Platform (AIP) relies on plugging best-in-class frontier models into its defense and intelligence workflows. Claude Opus is among the most capable of those models, prized for its reasoning depth and reliability in high-stakes environments. If Anthropic is blacklisted as a military supply chain risk—or if its terms of service effectively bar it from the classified settings where Palantir operates—Palantir would lose access to one of its most powerful AI engines. It would be forced to retool its platform around alternative models mid-contract, a costly and reputationally damaging disruption for a company whose entire brand promise is mission-critical reliability.
“Again, there’s a lot of subtlety here behind the curtain,” Karp acknowledged. “I’ve been heavily involved in that subtlety—what can be deployed, where it can be deployed.”
The Bigger Economic Picture
The stakes, Karp argued, go well beyond any single Pentagon contract or any single company’s policy decision. “The danger for our industry,” he warned, “is that you get a famous horseshoe effect where there’s only one thing people agree on—and that’s that this is not paying the bills, and people in our industry should be nationalized.”
That populist convergence—where left and right alike turn on tech—becomes inevitable, in Karp’s telling, if AI companies strip white-collar workers of their livelihoods while simultaneously refusing to serve the military. He was pointed about who those workers are: “Primarily Democratic-shaped people that you might grow up with—highly educated people who went to elite schools, or went to schools that are almost elite, for one party.”
Those fears are already materializing at an economic scale that lends urgency to Karp's argument. Prominent voices warn that AI could displace white-collar workers en masse, a destabilizing prospect for the labor market. These aren't merely panic-inducing ideas; they carry real-world consequences, as when a viral essay from Citrini Research making that case rattled markets.
In Karp’s view, the government would not allow AI companies to amass the power they already hold and still operate in a self-regulatory, non-governmental oversight capacity—let alone dictate terms of use back to the government itself. “This is where that path is going,” he said simply. The only way for companies like Palantir to retain their position, their contracts, and their access to the frontier AI models that power their platforms is to play by the government’s rules when called upon. For Palantir, losing that seat at the table doesn’t just mean bad optics. It means losing the technological inputs that make its core product work.
It would be a dramatic reversal for a company that, just a month ago, delivered what Karp called "one of the truly iconic performances in the history of corporate performance or technology" in its latest quarterly earnings.
The post Palantir CEO’s rant about the Anthropic-Pentagon feud threatening his company was about a lot more than a dirty word appeared first on Fortune.