DNYUZ
Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans

March 13, 2026
in News

An ongoing and heated dispute between the Pentagon and Anthropic is raising new questions about how the startup’s technology is actually used inside the US military. In late February, Anthropic refused to grant the government unconditional access to its Claude AI models, insisting the systems should not be used for mass surveillance of Americans or fully autonomous weapons. The Pentagon responded by labeling Anthropic’s products a “supply-chain risk,” prompting the startup to file two lawsuits this week alleging illegal retaliation by the Trump administration and seeking to overturn the designation.

The clash, along with the rapidly escalating war in Iran, has drawn attention to Anthropic’s partnership with the military contractor Palantir, which announced in November 2024 that it would integrate Claude into the software it sells to US intelligence and defense agencies. Palantir says the Claude integration can help analysts uncover “data-driven insights,” identify patterns, and support making “informed decisions in time-sensitive situations.”

However, Palantir and Anthropic have shared few details about how Claude functions within the military or which Pentagon systems rely on it, even as the AI tool reportedly continues to be used in some US defense operations overseas, including the war in Iran. In January, Claude also reportedly played an instrumental role in the US military operation that led to the capture of Venezuelan president Nicolás Maduro.

WIRED reviewed Palantir software demos, public documentation, and Pentagon records that together paint the clearest picture to date of how American military officials may be using AI chatbots, including what kinds of queries are being fed to them, the data they use to generate responses, and the kinds of recommendations they give analysts.

The Department of Defense did not respond to a request for comment. Palantir and Anthropic declined to comment.

Palantir’s Pentagon Ties

Military officials can use Claude to sift through large volumes of intelligence, according to a source familiar with the matter. Palantir sells multiple software tools to the Pentagon where such analysis might take place, but the company has never publicly specified which of those systems do or don’t incorporate Claude.

Since 2017, Palantir has been the primary contractor behind “Project Maven,” also known as the Algorithmic Warfare Cross-Functional Team, a Defense Department initiative for deploying AI in war settings. For the project, Palantir developed a product known as “Maven Smart System,” sometimes simply called “Maven.”

Maven is managed by the National Geospatial-Intelligence Agency (NGA), the government body in charge of collecting and analyzing satellite data. Agencies across the military—including the Army, Air Force, Space Force, Navy, Marine Corps, and US Central Command, which is overseeing military operations in Iran—can access Maven. Cameron Stanley, the Pentagon’s chief digital and artificial intelligence officer, said at a recent Palantir conference that Maven is being deployed “across the entire department.”

According to public assessments of Maven published by the military, the tool can apply “computer vision algorithms” to images taken by a “space-based asset” like a satellite, as well as automatically detect objects likely to be “enemy systems.” A Maven demo shown during Stanley’s conference presentation shows the tool distinguishing people from cars.

Other features in Maven help visualize potential targets and “nominate” them for ground or aerial bombardment. A tool called the “AI Asset Tasking Recommender” can propose which bombers and munitions should be assigned to which targets, according to Stanley’s demo. Maven also facilitates the messaging of “target intelligence data and enemy situation reports” between military officials.

Both The New York Times and The Washington Post have reported in recent days that Maven relies on Anthropic’s AI technology; WIRED, however, was not able to independently verify those claims.

Since 2022, Palantir has also sold another intelligence platform to the US Army called the Army Intelligence Data Platform (AIDP). The company has said that the AIDP “integrates” data from Maven and at least four other government systems. Publicly available details about the AIDP are sparse, but military assessments have described the tool as being able to prepare intelligence ahead of military operations, as well as “graphically” depict the positions of troops and weapons. It also has a tool called Dossier, which is reportedly used for developing an “intelligence running estimate,” a frequently updated collection of battlefield information that precedes a final intelligence summary. It’s not clear whether Claude is integrated into Palantir’s AIDP.

Although Palantir hasn’t disclosed which of its Pentagon systems can deploy Claude, it has shared some information about how the chatbot may be integrated into them. Palantir hinted at this in its November 2024 press release announcing its military and intelligence partnership with Anthropic, noting that Claude “became accessible” earlier that month within the Artificial Intelligence Platform (AIP), one of Palantir’s relatively new commercial offerings.

Got a Tip? Are you a current or former government employee who wants to talk about what’s happening? We’d like to hear from you. Using a nonwork phone or computer, contact the reporter securely on Signal at carolinehaskins.61.

How Palantir’s AIP Works

AIP isn’t a standalone Palantir platform; rather, it’s an application that runs within an existing off-the-shelf system such as Foundry or Gotham. In addition to automating certain tasks, AIP provides users with a chatbot—which the company has referred to as an “AIP Assistant” or an “AIP Agent”—that can answer questions or complete tasks within the larger system.

AIP Assistants are powered by third-party large language models from companies like Anthropic, Google, and Meta. Customers can choose which models they want to use, as well as which data the chosen model draws on to generate responses. That flexibility may be particularly valuable in intelligence or national security settings, where the underlying data is often classified.
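As a rough illustration of this kind of design (all names here are hypothetical and are not Palantir’s actual API), a pluggable-assistant architecture can be sketched in a few lines of Python: the host system registers interchangeable model backends and a set of data sources, and each query runs against only the backend and sources the operator selects.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a pluggable chatbot assistant: the host application
# registers interchangeable LLM backends and restricts which data sources
# the chosen model may draw on when generating a response.

@dataclass
class Assistant:
    # backend name -> function taking (question, context documents) -> answer
    backends: dict[str, Callable[[str, list[str]], str]] = field(default_factory=dict)
    # source name -> source data
    sources: dict[str, str] = field(default_factory=dict)

    def register_backend(self, name: str, fn: Callable[[str, list[str]], str]) -> None:
        self.backends[name] = fn

    def add_source(self, name: str, data: str) -> None:
        self.sources[name] = data

    def ask(self, backend_name: str, question: str, allowed_sources: list[str]) -> str:
        # Only pass along the sources the operator explicitly allowed.
        context = [self.sources[s] for s in allowed_sources if s in self.sources]
        return self.backends[backend_name](question, context)

# A stand-in "model" that just reports how much context it was allowed to see.
def dummy_model(question: str, context: list[str]) -> str:
    return f"answer to {question!r} using {len(context)} source(s)"

assistant = Assistant()
assistant.register_backend("claude", dummy_model)
assistant.add_source("radar_feed", "...")
assistant.add_source("sat_imagery", "...")

print(assistant.ask("claude", "What unit is in the region?", ["radar_feed"]))
# prints: answer to 'What unit is in the region?' using 1 source(s)
```

The point of the sketch is the separation of concerns: swapping Claude for another model is a configuration choice, and restricting what data the model sees is enforced by the host system rather than the model itself.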

One Palantir demo released in 2023 highlights how an AIP Assistant could help a “military operator responsible for monitoring activity within Eastern Europe” plan and order a ground attack on several tanks simply by interacting with the chatbot.

The process begins with the AIP Assistant sending an automated alert about “potential unusual enemy activity” detected via “AI processing” of radar imagery.

In this case, a computer vision algorithm, rather than a large language model, would have detected the abnormal activity. The AIP Assistant then assists the analyst in interpreting the finding and deciding what to do next. The chatbot doesn’t directly suggest a target, but in helping the analyst act on the information, it could potentially still play a role in turning a suspicious observation into one.

When the user asks “What enemy military unit is in the region?” the AIP Assistant guesses that it’s “likely an armor attack battalion based on the pattern of the equipment.” This prompts the analyst to request an MQ-9 Reaper drone to survey the scene. They then ask the AIP Assistant to “generate 3 courses of action to target this enemy equipment,” and within moments, the assistant suggests attacking the unit with an “air asset,” “long range artillery,” or a “tactical team.” The user tells the assistant to send these options to a fictional commander, who ultimately chooses the tactical team.

The final steps play out quickly: The analyst asks the AIP Assistant to “analyze the battlefield,” then “generate a route” for troops to reach the enemy, and finally “assign jammers” to sabotage their communications equipment. Within seconds, the analyst gives the battle plan a final review and orders the troops to mobilize.
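The sequence shown in the demo (alert, interpretation, courses of action, then routing and tasking) amounts to a fixed pipeline of chatbot prompts, with a human approving each stage’s output before the next runs. A minimal sketch of that pattern, entirely hypothetical and not Palantir’s code:

```python
# Hypothetical sketch of the demo's workflow: each stage is a prompt to a
# chatbot, and a human operator approves the result before the next stage runs.

def run_pipeline(chatbot, alert, approve):
    """Walk an alert through interpret -> courses-of-action -> route stages."""
    stages = [
        ("interpret", f"What enemy unit produced this alert? {alert}"),
        ("coa", "Generate 3 courses of action to target this unit."),
        ("route", "Generate a route for the chosen course of action."),
    ]
    results = {}
    for name, prompt in stages:
        answer = chatbot(prompt)
        if not approve(name, answer):  # human stays in the loop at every stage
            return results             # stop the pipeline if the operator rejects
        results[name] = answer
    return results

# Stand-in chatbot with canned answers keyed on how each prompt begins.
canned = {
    "What enemy": "likely an armor battalion",
    "Generate 3": "air asset / artillery / tactical team",
    "Generate a": "route via grid 4417",
}

def fake_chatbot(prompt):
    return next(v for k, v in canned.items() if prompt.startswith(k))

out = run_pipeline(fake_chatbot, "radar anomaly, NE sector", lambda name, answer: True)
print(out["coa"])  # prints: air asset / artillery / tactical team
```

The structure makes the reported concern concrete: the model never issues orders itself, but because each stage’s output seeds the next prompt, its suggestions shape every downstream decision.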

In this scenario, Claude would be the “voice” of the AIP Assistant, and the “reasoning” it uses to generate responses. Other AIP demos show users interacting with large language models in much the same way. In a blog post published last week, for example, Palantir detailed how NATO, a Maven Smart System customer, could use an AIP Agent within the tool.

In one graphic, Palantir shows how a third-party defense contractor can select from several of Palantir’s built-in AI models, including different versions of OpenAI’s GPT models and Meta’s Llama. The user selects GPT-4.1, but this appears to be the point where a soldier could also choose Claude instead.

An analyst then views a digital map showing the locations of troops and weapons. In a panel labeled “COA” (courses of action), they click a button that prompts a tool powered by GPT-4.1 to generate five possible military strategies, including one called “Support-by-Fire-then-Penetration-Shock-and-Destruction.”

Another example shows how the system could help interpret satellite imagery: The analyst selects three tanker truck detections on a map, loads them into the AIP Agent’s chat interface, and asks it to “interpret” the imagery and suggest options for what to do next.

Claude may also be used by the military to create intelligence assessments that could inform strike planning down the line. In June 2025, WIRED viewed a demonstration given by Kunaal Sharma, a public sector lead at Anthropic, showing how the enterprise version of Claude could be used to generate “advanced” reports about a real Ukrainian drone strike dubbed “Operation Spider’s Web.” In the demo, Sharma explained, Claude was relying only on publicly available information. But by partnering with Palantir, he said, the federal government can also pull from internal datasets.

“This is typically something that I might sit for like five hours with a cup of coffee, and read Google, and go into think tanks, and start writing reports and writing a citation, et cetera, et cetera,” Sharma said. “But I don’t have that kind of time.”

In the demo, Sharma asked Claude to create an “interactive dashboard” with information about Operation Spider’s Web, and then translate it into “object types” that could be analyzed in Foundry, one of Palantir’s off-the-shelf software products. He also asked Claude to write a detailed analysis of recent developments in Russia’s border provinces, as well as a 200-word synopsis of the operation’s “military and political effects.”

“Frankly, I’ve been reading these types of things for twenty years—I used to write them, I used to be an academic myself,” Sharma said. “This is actually pretty good.”

The post Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans appeared first on Wired.

DNYUZ © 2026
