How AI Is Reshaping the Battlefield

March 20, 2026

Subscribe here: Apple Podcasts | Spotify | YouTube

Just how are powerful AI models being used in warfare overseas? In this episode of Galaxy Brain, Charlie Warzel sits down with Will Knight, a senior writer at Wired, to discuss the rise of autonomous weapons. From the origins of Project Maven to the recent falling-out between Anthropic and the Department of Defense, they trace what’s happening as artificial intelligence moves from summarizing documents to informing decisions on the battlefield.

How do these weapons work? What are the safeguards? Who decides what values get baked into these models? As autonomous systems become harder to avoid, where exactly is the line between human judgment and machine decision making? Warzel and Knight help explain how the Pentagon and Silicon Valley are more entangled than ever and where warfare goes from here.

The following is a transcript of the episode:

Will Knight: The U.S. government will talk a lot about the importance of AI reflecting American values. But what are those values, exactly?

Charlie Warzel: Right. Do we get to decide those values together?

Knight: Who gets to decide that? Is that just [Donald] Trump, or is it just the heads of these companies?

[Music]

Warzel: I’m Charlie Warzel, and this is Galaxy Brain, a show where today we are going to talk about autonomous weapons and the future of AI in warfare. It’s a very strange time to be covering artificial intelligence. There is an active conflict in Iran in which artificial intelligence is being used.

There’s also this huge fallout between the AI company Anthropic and the Department of Defense over concerns about the use of their technologies. And there’s this broader feeling right now over the subject of autonomous weapons that, frankly, these companies have this very powerful technology that they are then handing to the military. And that technology is being used in ways that maybe these companies don’t feel like they have control over, and that these companies are certainly worried about.

There are so many moral, legal, ethical concerns. There’s so much that we don’t know about how these models actually work, the decisions that they make. Whether they hallucinate, whether they can fail at rates different from, and more concerning than, humans, whether humans are in the chain making these decisions, whether there are the appropriate safeguards, whether the ideologies of these companies and their leaders actually fit the ideologies of the military, and whether that conflict is something that we should all be having a real conversation about. It’s such a messy, scary moment, and Silicon Valley is totally caught up inside of it.

So I asked Will Knight to join me to talk about all of it. Will is a senior writer for Wired, where he covers artificial intelligence and writes their AI Lab newsletter. Together, we get into the nuts and bolts of all of this: who the players are, what this technology can actually do and not do, and what the future of AI warfare might look like in a moment where we are constantly escalating out of fear of being left behind. Will joins me now.

[Music]

Warzel: Will, welcome to Galaxy Brain.

Knight: Thanks for having me.

Warzel: So I want to start, and I want to ground this conversation in a little bit of history, right? Because there’s this long and somewhat amazing history of military innovation. The military building weird or niche things that then diffuse into a broader culture. Everything from radar to wristwatches to GPS to duct tape to the internet, right? So it is not uncommon, broadly speaking, for the government to fund ambitious, vague technologies for the battlefield that then have all these other applications. And it’s also not uncommon for them to partner with outside companies to do this work. I would love to go back to—if you wanna go even further back than this, we can—but maybe starting at Project Maven and the announcement of that in 2017. The original mission is, you know, use computer-vision algorithms, right? To analyze drone footage, detect objects. And the stated goal was to turn data into, quote, “actionable intelligence and insight[s] at speed.” What’s the backstory of that?

Knight: Yeah, well, I mean, I think that the backstory goes much further back in some ways, right? The sort of architect of a lot of the U.S. thinking on AI was Ash Carter. This was the sort of pre-ChatGPT era. But in the sort of deep-learning era, it seemed like—and you can see why—it would fit very well with military things: like targeting, like these image algorithms that could spot things. It seemed like that was going to be a paradigm shift, and a really, really big one. At the time it was incredibly controversial. And it’s amazing how much things have changed. But you had big protests at Google, which had won this contract.

I think, you know, things have shifted. I think in some ways, that makes a lot of sense to me. I think the idea that you’re not going to use something like AI in the world of defense seems kind of absurd, right? It’s like saying you’re not going to use software.

Warzel: I think you have to look at it that way too, right? Going back and looking into the Project Maven backlash initially, the fear at places like Google—from some of these engineers and folks at the time—was: That all sounds fine, potentially, but I’m worried we’re going to end up making autonomous weapons. Right? We know, sort of, where the path could lead here. I think what’s really fascinating about those initial protests is it was over this idea of where this all leads. And it seems like they’re correct. Like, this is where it is leading to, right? It’s not just these vision algorithms.

Knight: Well, I guess I would say there’s always been a very long-standing principle about sort of how you sense and make decisions, and that being really critical in military conflict. You can see how the use of a vision algorithm would be very important in that. And so you can sort of see this trajectory toward more and more, you know, automating more of that process. But one thing I would say is—having many times interviewed people at the Pentagon, and people in the armed forces—there are very good reasons why people on the ground do not want to hand that off. And commanders don’t, as much as I think the public thinks. Those systems are unreliable. And the idea of handing off decisions about taking other people’s lives—you know, the risk of terrible accidents—is not taken lightly.

And I think there is very good reason why it is not the case that that is going to be rushed in. For a long time we’ve had systems that are, by many people’s definitions, fully autonomous. So you have systems that will fully autonomously destroy a missile that’s trying to hit a ship. You know, there are these systems that will shoot those down, and they’ll have shot it down before the person can react.

We do have guided missiles that will go into an area. And so you would have a set area where you can configure the parameters where you know there’s only going to be enemy combatants. Those are extremely expensive: the rare, what they call exquisite systems. One of the things we’re seeing with Ukraine is, you know, off-the-shelf drones weaponized. And the way software can control those means that it becomes a lot easier for autonomy to be deployed.

One of the things that people know is going to be really sort of game-changing is swarms of them—like lots of them working together—because it’s much harder to counteract 20 of them coming in to try and attack your tank. And then you can’t have 20 people operating their own drones and defending it. So you’re going to have situations where autonomy becomes slightly more … it’s going to become harder to prevent it, I think, in some situations beyond those of purely defensive ones.

Warzel: I want to drill down a little bit in how these warfare models work. Specifically, to the extent that you can help us understand, because I think demystifying all of this is very important—down to the, you know, “explain it like I’m five” version. But to the extent that, like: How do they work? How many humans are there in the chain of command? You know, how are these things, the nuts and bolts of them, working? How they identify targets, safeguards, all of that.

Knight: Yeah, these are classified systems. So we don’t have—I don’t have—a hundred percent visibility on it. The way I understand it: It’s slightly less that they’re feeding in a ton of information and saying, What should we do? You would see a map, and you would maybe have assets on it, and you could ask the language model questions about, you know, maybe the signals intelligence related to that particular area. And I think you would have all these sort of different resources, where you could ask things about it. So I think it is a little more sort of, you know, being kept at arm’s length in terms of the model making calls. I think that they’re not crazy, and they’re not stupid about, like, how they ought to maybe not rely on the system at a high level. But if a language model were making some errors, and maybe the user didn’t check carefully, could that lead further along to erroneous decisions? So it raises questions about how people are trained to use those. How much they rely on this. What sort of trust it kind of maybe inspires in people that they shouldn’t have.

Warzel: It does feel like there’s probably somewhat of an analogy, if I’m gathering this correctly, between the idea of … we’re journalists, right? Say we get this series of spreadsheets, or this list of data, and want to use a language model. We let it look at this corpus of information, and it provides insights. It’s not saying, This is your story; write this. It’s saying, No, what I identify here are some patterns. That is more of what we’re talking about here when we’re talking about that intelligence.

Knight: Yeah. That’s my understanding. You could compare it to maybe the law, or something like that, where clerks are using language models to analyze a lot more case studies. Or maybe medicine.

Warzel: You might not be able to answer this question, given what you’re able to see of these systems—but I feel like it’s worth asking because so many people are using these generative-AI tools now. When you hear something like the Pentagon or the Department of Defense is using Claude, right? It puts this thing into your head of, like, you know, A lot of people are using these tools. Because they’re a friendly way to automate busywork, right? And they talk to you in this friendly way. And it can feel a little bit dizzying to think about it, right? And so I’m curious: Do you have an idea of how different this [military] AI would be from the commercial versions of this generative AI?

Knight: We do know that [Anthropic] gave the DOD a specialized version called Claude Gov. This is a great question. I was wondering about this, because if you prompt Claude about anything to do with weapons or, you know, anything that might seem related to military conflict, it will say, I’m sorry, I can’t help you with that. They’ve given the DOD a specialized version, which I think—they’ve not disclosed this—would have fewer of those guardrails. It would have to.

Warzel: Right.

Knight: But it’s an interesting question, I think—like, what does it mean when you unalign it? And I’ve played with some unaligned models. And they can behave in sort of surprising ways. They’re able to do more things, but they sometimes will push back. Though, you know, some of that depends on how those sorts of restrictions have been removed. But you kind of have to give a model some guardrails to make it coherent. So I would be very curious, like, what the behavior of that model is. Does it sometimes actually reject stuff? Like, that could have happened, especially in the early stages. And maybe—this is pure speculation—but maybe that has led to some of these kinds of concerns about “woke” models and companies, or something like that.

Warzel: Right. And with all of them, there’s a lot of debate between people who are real believers in this technology and those who are more skeptical of it—about personalities, right? And whether these things actually have personalities, or whether there’s just kind of an emergent set of traits or ways they talk. But I think that that is fundamentally extremely interesting when you get down to this level of things. Like, you know, Claude is known for being a little more—compared to all the other frontier models—it’s a little more artistic. It has its own things.

Knight: Right; yeah. Its own vibe.

Warzel: And that becomes very interesting when you think about the ways in which these models are being ported into this moment. Where, again, they’re maybe not making decisions, but they are talking to a person. Or rather, giving an input in a very specific stylized, you know, human-sounding way.

Knight: There are really interesting questions about that. I think the truth is, most models have quite similar alignment. And I think that most of those things are fairly universal. And I think the question to me is, yeah, like: How does that really sort of change when you put it in a military setting? If they’ve sort of buttoned it down very much and it’s just doing some summarizing of text—like, here, ask a bunch of questions—that’s one thing. But if it’s trying to sort of do this sort of parasocial, you know, stuff that chatbots do normally? That is kind of strange. And that does really affect the user experience, the user expectations. You would probably have much higher trust in a system like that if it were pretending to be a person, maybe, when you shouldn’t.

Warzel: We’ve gotten at this a little bit, but I think it’s just worth drilling down even more and breaking down what we mean by autonomous weapons, right? Because that is a terrifying-sounding phrase. And the point of demystifying all of this, you know—it’s not always clear cut, right? Do you think when we talk about autonomous weapons, it’s best for people to think of it almost just like they have the ability to make split-second decisions in anything? And that’s sort of the place where we should work from? Or do you think there’s actually more specificity there? That, like, autonomous weapons are actually, as well, the thing that we’re all worried about, right? Which are weapons that are making broader choices about target acquisition or engagement in some kind of way.

Knight: The DOD has been clear that they want to always, at the moment, keep somebody making that final decision to some degree, right? That’s the thing. So you can have things be autonomous, but you have a person make that decision. But they issued a directive in 2023 on autonomous-weapons systems. And that very clearly spelled out that there are no restrictions on developing them.

I’ve looked at autonomy in places like self-driving cars, in robots, and in the AI industry. Like agents, right? I have not come across anywhere that is more careful about what they do than the DOD in spirit. And that doesn’t mean they’re gonna get everything right. But I think that, I wouldn’t want to … I think it’d be a mistake to sort of portray the people in the armed forces as wanting to do autonomy for the sake of it. And I think that if they could keep their finger on the trigger, they would want to do that 100 percent.

Warzel: I think that that’s very important context. And it’s also very important to the sense of “there are so many different people in this chain”—from the people at the very top who are giving, you know, press conferences and who may be more ideological, down to the people who are nuts and bolts, pulling triggers.

Knight: But you’re making a great point when you talk about the leaders of countries, including our own, who will bear a lot of responsibility for how quickly these things get kind of rushed in. I mean, this is one of the things with adoption of AI—that it’s seen, especially in the U.S., as a way to kind of regain parity and overtake China. Because the U.S., by some dimensions, has the biggest military. But by some dimensions it’s at a disadvantage compared to China. So leading in AI—this is one of the real reasons why AI is such an important thing for the U.S. government. And I think there are a few questions. Like how aggressively you want to rush that in; how much you want to throw safeguards out the window.

Warzel: All of this comes down to: When we’re talking about this, it leads us to the DOD-versus-Anthropic situation that has played out over the past couple of weeks. Because I do think it speaks a little to all of this as well, right? Which is the understanding that this technology … the people making this technology, in some senses, have strong feelings about how it should be used and what uses they do not feel comfortable with. And then you also have a current administration that feels very strongly that they, speaking of autonomy, must have full autonomy over how things should be deployed. Can you just walk through the particulars of that fight, for someone who’s only, you know, very basically paid attention to it?

Knight: Okay, yeah. Well, I mean, there are some elements we don’t have the full details of. Broadly, a matter of a few weeks ago, the Pentagon wanted to change the contracts that it signed last year with all the big AI companies. What was in there previously, which Anthropic didn’t want to change, was specific prohibitions against using it for mass surveillance of U.S. citizens, and for autonomous weapons. So the Pentagon says, We don’t want those prohibitions anymore. So let us change that. In a way, Anthropic just decided that was a hill they were going to die on. And I think a lot of people in the AI world are very conscious of safety and conscious of the sort of moral questions around it.

And so OpenAI stepped in; they’re going to take this contract. But they want to have some safeguards. But the question is: What are those safeguards? And in what situations is it acceptable to have something that might fail X out of a thousand times? Like, we don’t know what that failure rate is. And in what situations isn’t it acceptable? And if it’s going to make the difference in giving somebody information on which they might make a lethal decision, I think this is the question the public should be asking. Like, where is that line being drawn? And especially as this gets deployed more widely—I think it’s been sort of quite tentatively implemented so far. And as it gets more capable, and these errors can sneak in in surprising ways—like, where is that line? Where should we, and where shouldn’t we? And yeah, that’s sort of the question that’s not been answered.

Warzel: You know, one thing that we didn’t say was that Secretary of Defense Pete Hegseth and the administration are designating Anthropic—as a result of some of this—as a supply-chain risk. Which a lot of people have said is an extreme overstep of the bounds. So there’s that element, which seems, to my sense, very irrational. And then there is the understanding of the actual nuts and bolts of the contract, right? Which is, Okay, maybe this doesn’t make sense for us.

Knight: Yeah; he had this moment where he realized that [with] the model, you had to get permission to do all these different sorts of things. So it does sound like either the model was not as unaligned as they might’ve wanted, or the safeguards—the sort of checks you had to go through, the way the system was designed—weren’t working for them. But yeah; that reaction. Like, to put that in context, that designation has only ever been applied to Chinese companies like Huawei, which are accused of operating for the Chinese government. And to take that action—not just on a U.S. company, but on one of the most important, by their own measures, U.S. companies that there is, working in this incredibly important technology—seems amazingly destructive to me. And it’s a very, very extreme action. But that’s sort of … yeah, I guess that’s just how they roll.

Warzel: Something that’s really important here, to my knowledge—and it speaks to this whole Anthropic dispute; it speaks to almost everything we’re talking about when it comes to the companies interfacing with the government. They create these powerful models with these capabilities that they understand will be used in war fighting. And they try to set their own boundaries to some degree: guardrails, safeguards, whatever it is.

Then they have to hand it over, right? And it becomes a bit of a black box. There is this lack of understanding in specifics of how this stuff is being used on the ground, because some of these operations are extremely classified. Some of these uses are extremely classified.

As all this Anthropic fallout is happening in the news, the United States goes to war with Iran. There are bombs dropping. And so there’s this moment, a week or so ago. There’s these first reports coming out in the press that perhaps the United States was responsible for this deadly Tomahawk-missile strike on an Iranian elementary school. There was all this speculation on X, in the media. You know, speculation is what it is. That we’re hearing about these systems like Claude being used for targeting, potentially. And here is a potential targeting failure.

Now again, making no link or connection—this is all just pure speculation—but it speaks to this idea of, you know: Would a company like Anthropic even know if their technology was being used? Or any company; let’s not even use Anthropic. Would any AI company know, in a situation similar to that—let’s not use that one—whether it was involved? And I think that that’s what I mean by “the black box.” Which is, you know, you kind of have to give it over. And I doubt that they’re getting that visibility, right?

Knight: Yeah. You know, like the company that made the missile that they fired didn’t know anything about it. Didn’t have any say after that was used. I think, as far as I understand, it is not the case that Claude would have suggested that as a target. That’s my understanding. I think that’s right. I don’t know how much they rely on Anthropic or these other companies to help build out new applications, new uses of those models. Maybe partly Anthropic wants to have more visibility on that, because they believe they know how to do it more reliably. That would be my suspicion. And I think it would sort of behoove the DOD to be aware of that, as well. These people understand how these systems can fail. So you’d want to try and work with them. Remember: Right now, what we’re seeing in the AI world that’s taking off like crazy is a next generation of these tools. And these are agentic AI.

Warzel: Right.

Knight: And I know that’s a buzzword, but they genuinely are. You’ll ask a model, or ask one of these tools like OpenClaw, to do something. It will write code, and it will go through a bunch of steps. And that does raise more risk of unintended consequences, because those systems are doing things, and because they’re going through a bunch of steps. I use OpenClaw. It’s amazing, and it is really capable. You can see the allure and why it’s the direction things are going to go. But it will do things that I didn’t really want it to do, to try and get something done. So how you build those is going to be an incredibly important question, I think.

Warzel: I was preparing to talk to you, and I was having a conversation about this black-box potential element, right? And I said to this person, You know, they just have to hand it over, and that’s got to be terrifying. And the person I was speaking with pushed back and said, Well, is it terrifying? And their rationale was that past defense contractors probably weren’t terrified to hand over the missile or the hardware or whatever it is—or the database-infrastructure system—to the United States military. They know where their purpose is in this chain.

I was thinking about this, and the fundamental difference, to me, feels like exactly what you’re saying. It’s a fundamental difference with artificial intelligence. Because a company like Anthropic, in this situation, was deeply concerned in a way that others may not have been in the past. Because it’s not a technology that does one discrete thing. There’s just an unpredictability here, becoming more and more unpredictable with each iteration of this type of technology. And the intelligence [it] gives. Its whole purpose is to give a world of options without complete and specific control. And what do you make of that idea?

Knight: I think what we’re likely to see in this and other critical situations is the development of new kinds of approaches to engineering, actually. And how reliable those will be will be an interesting thing to watch. So a good analogy is self-driving cars. Those systems also use nondeterministic technology. So, like, a vision system where you can’t actually predict the decision it’s going to make, or the output—what it classifies something as. And it has to do all these things on the fly, where, you know, it’s potentially life-or-death situations. So those companies have had to design conventional engineering around that to make sure there are safeguards in all these different steps. And that’s taken years. And those cars can still only go … I mean, it’s amazing, and they’ve made great progress, but only in limited situations. So, you know, I think it’s doable. You can do the sort of safety-critical engineering with these kinds of more nondeterministic, more unpredictable technologies. The challenge is when that technology is advancing so quickly. Like, how do you wrestle with that? It’s going so fast; it’s hard to implement those safeguards. One of the other differences with Anthropic is it’s just not a missile-making company. And people signed up because they wanted to make the world safer with AI, right?

Warzel: You’ve reported on concerns from people in the industry. But also the outsiders, the watchdogs. The people who are concerned about AI safety, but also just the safety, in general, of autonomous weapons. Can you run me through a list of some of the practical concerns that they have?

Knight: There are some who just believe, from a moral standpoint, that handing off … We should draw a line and say, “We can’t hand off.” So like: use of chemical weapons or something. They’re just worried that there’ll be this sort of huge expansion of autonomous systems, and that’s just morally wrong. And that’s a position that I think is a reasonable one to take, and can be debated.

The more practical issues that people would have are absolutely, like, just the incredible unreliability of these systems. Especially, like, if you think of a self-driving car that maybe works on the streets of San Francisco, where it’s been trained for years and years by now. But if you’re taking it to unfamiliar battlefields, the probability of mistakes goes up. The idea that these systems would misidentify a noncombatant and take their life is a huge concern. And I think there are people on the very technical side who would just, I think, worry about—as you were saying—the reliability of systems that incorporate these more inherently unpredictable technologies. They might seem like, Okay, this only makes a mistake once every 1,000 times—but those mistakes can kind of compound and cause problems in really unpredictable ways.

Warzel: We’re talking a bit here about the idea of fail rates. About the idea of these models pulling in all this different information and producing insights that are either hallucinations or contain some kind of failure. But humans, too, very clearly fail all the time. And so what is it about the AI of it all, and these models, that makes this a scarier proposition to certain people?

Knight: Right. Yeah, I think what we should be worried about—so you’re right; people make mistakes. And the way the military is set up is to kind of cope with that in many ways. To try and have this chain of command, and so on, that minimizes the risk of that. But the thing that we should be concerned about is that these models, one, pretend to be very human, and they really seem human. And then they will just fail in totally spectacular, unexpected ways. So it’s like you’re talking to an infantryman, a soldier, who gives you the right answer again and again, and you’re like, Guy’s so good. And then all of a sudden he gives a completely batshit-crazy answer. And so that kind of unexpected thing is, I think, what is most worrying. And I think that’s what the AI companies know: One, it can fail in sort of really unexpected ways sometimes. And two, people come to get really fooled by how human it seems.

Warzel: How much of this do you think is inextricable from the concern, broadly, about what has developed into a pretty reactionary culture recently in Silicon Valley?

You have people like Palantir CEO Alex Karp. He was on CNBC last week, and he said, quote, “This technology disrupts humanities-trained, largely Democratic voters and makes their economic power less. And increases the economic power of vocationally trained, working-class, often male voters. These disruptions are going to disrupt every aspect of our society.” Now, that’s not about autonomous weapons or AI technology specifically, but it is about instilling or having an ideological value inside of the company, and any broader mission statement. Do you think that that is part of the reason why people are having such a strong reaction? Does that change the valence of these conversations and the fears?

Knight: I don’t think it has, as yet. We started to see people sign open letters, and quite prominent people at some of the other AI companies backing up Anthropic in public. But we haven’t seen that much discussion of where the line should be drawn, whether in the political world or in broader public debate. And it’s going to be—you’ve got just a few people who are making really, really important decisions. Whether it’s the DOD or it’s the CEOs. So I would not be surprised if we saw much more pushback on it in many ways, whether it’s taking jobs, whether it’s being used in the military. I think we don’t have a great picture of what’s going on in employment, in the labor market, whatever the CEO of Palantir might say. But I think if it does start to do that, people will start to ask why, you know, these companies—that have honestly built tools by slurping up copyrighted material—why they own that. Why they are sort of in charge of disrupting everybody’s jobs. And you know, I don’t know that that’s the right position, but I think we could see much more of that.

Warzel: It seems inevitable to me, too. Because, as we said, this isn’t an Excel spreadsheet, right? It’s dynamic in this sense. It has to be instilled, whether it’s Anthropic talking about Claude having a constitution, right? Some of these companies talk about it as having values. Elon Musk has xAI. He talks about it having anti-woke values. And so then you port this over into—yes, these systems are going to be in the chain. A chain that has decisions that ultimately can lead to people being killed or geopolitically significant events of war. They are not doing that thing of handing over a missile that is inert and saying, Use this as you will. These are dynamic systems.

Knight: Right. Yeah; I think that the constitution, personality, morality of those models is a whole other thing that this does raise. And it does. It is really interesting, right? You know, like the U.S. government will talk a lot about the importance of AI reflecting American values. But what are those values, exactly?

Warzel: Right. Do we get to decide those values together? Yeah.

Knight: Who gets to decide that? Is that just Trump, or is it just the heads of these companies? We’re at the beginning of a trajectory of AI use, and it’s not going to be just summarizing things very plainly, probably not for long. And so, yeah: Whose values do they reflect? What are you trying to put into those? I mean, I think it’s somewhat encouraging that if you look at any model—you take a Chinese model or a U.S. model—most of what they’ll say you should and shouldn’t do is quite similar.

Warzel: There’s a great Bloomberg story by the reporter Katrina Manson, who writes in this piece about Project Maven and a lot of this AI-warfare technology, quote, “There’s a palpable sense within the Pentagon that things aren’t moving fast enough. Despite the show of force in Iran, officials worry that the U.S. is at risk of falling behind. And officials are already looking past the Middle East to potentially bigger conflict. As one person familiar with the U.S. operations puts it, ‘Iran is an amazing precursor to what could happen with China over Taiwan.’” Now, that’s not your reporting, so I’m not going to ask you to talk about the veracity of those types of claims. But in your mind, what comes next here? This is obviously the beginning of something.

Knight: Yeah, well. So just to say that that is the tenor of a lot of conversations around the Pentagon and Washington. On the one hand, you can say, Well, that must reflect that this is where we’re heading. And there are a lot of scholars who believe that, you know, some kind of conflict is inevitable. There’s also a sense in which those things can become self-fulfilling, and arming yourself to the teeth with technology historically has contributed to conflict. Like the First World War, for example. And so what comes next, I think, is sort of up to us and China, right? Just assuming that’s gonna happen, and sort of trying to arm yourself and prepare for that, might cause you to think that’s the only solution, which I think would be a devastating and terrible one. I would hope that there’s a different path that is not going to lead to that.

Warzel: Will, thank you so much for coming on the podcast and helping demystify some of this stuff. It’s important work. Thank you for your reporting and all of it.

Knight: Yeah. Thanks for having me. It was very fun.

[Music]

Warzel: That is it for us here. Thank you again to my guest, Will Knight. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe on The Atlantic’s YouTube channel or on Apple or Spotify or wherever it is that you get your podcasts. And if you want to support this work and the work of my fellow coworkers, you can subscribe to the publication at TheAtlantic.com/Listener. That’s TheAtlantic.com/Listener. Thanks so much, and I’ll see you on the internet.

This episode of Galaxy Brain was produced by Renee Klahr and engineered by Dave Grein. Our theme is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.

The post How AI Is Reshaping the Battlefield appeared first on The Atlantic.
