DNYUZ

What’s the AI Endgame?

May 15, 2026

Subscribe here: Apple Podcasts | Spotify | YouTube

How should you feel about the AI boom? In this episode of Galaxy Brain, Charlie Warzel speaks with Chris Hayes about how to emotionally calibrate our response to this dizzying AI moment. Hayes describes why AI gives him “The Bad Feeling,” and how it led him to report on AI like an anthropologist would. The two discuss why AI is described as “the jagged frontier,” and they explore the distinction between using AI for creative thinking versus grunt work.

The following is a transcript of the episode:

Chris Hayes: If you’re having it do your brainstorming, like, your brainstorming muscles are going to get weaker. And my livelihood, my career is coming up with stuff. I gotta keep that. I gotta keep that sharp. Now maybe in five years, they’ll just have an AI do my show. And the AI will generate all the takes, and the AI will talk, and I’ll be out of a job. Fine. But until that happens, I don’t want the AI doing that.

[Music]

Charlie Warzel: I’m Charlie Warzel, and this is Galaxy Brain, a show where, today, we’re going to calibrate our feelings about artificial intelligence.

There’s this phrase, coined by AI researchers, that I can’t get out of my head these days: “the jagged frontier.”

The phrase is meant to describe how AI can be extremely and unexpectedly good at some human tasks and also extremely and unexpectedly bad at others. Individually, this can mean that it’s useful or even transformative for some people, while others see it as unnecessary, or even as snake oil. For example: Large language models, and especially coding agents, have transformed the job of many programmers, making them more productive. That’s not true of all industries, though, especially creative ones, where there are moral or financial or creative reasons to object to its use.

“The jagged frontier” is meant to apply to use cases and industries. In some ways it’s an echo of the old cliché: “The future is here, but it’s not evenly distributed.”

But lately I’ve been thinking about the jagged frontier as it applies to the broader AI moment and the discourse. This moment that we are living in—the AI boom, the hype cycle, or revolution, you choose your own language—it’s a weird one. If you try to keep up with industry news, it’s easy to feel just instantly overwhelmed. There’s the obvious, existential stuff: Will AI replace all white-collar workers? Is AI making us dumber or lazier? There’s also a lot of what’s being described as “AI malaise.” It’s this ambient feeling that there’s too much happening, too fast, and without most people’s say.

On places like X, there’s all kinds of breathless chatter—about people setting up swarms of bots to run their computers and monitor their personal lives, or of people creating vibe-trading platforms that can make, and lose, money while they sleep. CEOs aren’t just talking about job loss—they’re writing 14,000-word essays about a future where “our current economic setup will no longer make sense.” Now, if you are a regular person—the type of person who is more worried about the price of gas right now—these conversations can sound like they’re coming from another planet. And they’re also making a lot of people ambiently anxious. If you’re at all skeptical of the AI industry and the men who lead it, then you’d be right to be concerned about the future that these companies are outlining.

So, how do we calibrate our anxiety and our expectations about AI in this moment? How is AI going to impact our politics in the coming years? Should you be scared? Excited? Angry? Sad? Some unholy mix of all of that?

Chris Hayes has been asking these kinds of questions for the last few months. Hayes is the host of All In on MS NOW and the host of the podcast Why Is This Happening?; he’s also written a great book on the attention economy called The Sirens’ Call. Chris has this new podcast series out about the AI endgame, and in it he does something that I think is crucial: He tries to make sense of this moment with an almost anthropological perspective. So many people in the AI discourse are just in so deep that it can be really, really hard to see the big picture. And so, I brought on Chris to do just that.

[Music]

Warzel: Chris, welcome to Galaxy Brain.

Hayes: It’s great to be here, man. Thanks for having me.

Warzel: So, in the first episode of this short-run podcast series you’re doing, you described your feelings about AI, the whole generative-AI revolution, the discourse, the whole thing, as having (I thought this was great) like a bodily, somatic effect on you.

Hayes: Yes.

Warzel: So tell me about this feeling. I want you to describe it. What happens when you encounter the news or discourse about AI?

Hayes: There’s this feeling that I’ve come to describe or think of as The Bad Feeling, like capital T, capital B, capital F, which is just a feeling of kind of like anxiety, doom, shutdown that I get from a lot of things. Some certain political news will give me The Bad Feeling. And basically the AI discourse gives me The Bad Feeling, usually because it feels like the end of something. It feels like it’s going to destroy things I love, or maybe lead to the end of human civilization. Some high-tech version of nuclear winter that we can only sort of hardly imagine.

And I think because of that, it puts me in this kind of fetal position, defensive crouch. And I think also it’s the case, one of the goals here is … there’s a world of people who are very in the AI discourse. And that world is very fertile and intense, and it’s largely happening on X still. But it’s also like its own kind of bubble, you know?

Warzel: Totally.

Hayes: And I think people outside of it find it scary and alienating. And I think that’s actually like a huge amount of people. That’s most people, at this point.

Warzel: Yes.

Hayes: And so part of what I’m trying to do is penetrate that from the outside, because I had not really been in that discourse intensely. Try to penetrate it in a way that I can be a kind of guide for other people that are outside of it, if that makes sense.

Warzel: It does. I have this theory, essentially, that we all have AI psychosis, right? Like, we’ve been using that term to describe this problematic relationship that some people have with chatbots. It’s an informal, nonmedical term, but like broadly speaking: AI driving folks, you know, informally insane. It’s like your boss has AI psychosis, and they will only accept marketing summaries that go through Copilot, right? Like, programmers have it, because they’re getting this competency high of like 10X-ing their productivity. And I feel like you have these people on X who definitely are marinating in this like micro-discourse. That’s very similar to the way that, like, Twitter weirdened politics, right? And then the skeptics, I think, also have a version of it.

Hayes: Absolutely.

Warzel: Because you either have skeptics who are like, I’m putting my fingers in my ears, I’m waiting for this to pass—or you have people who are like, I believe that this is very dangerous. I’m curious: Why do you think we can’t have a regular conversation about AI?

Hayes: It’s a great question. I mean, I think probably all discourses around transformative technologies tend to be a little berserk, so I think that’s part of it. I think we have an attention economy that is particularly inclined toward psychosis, because the crazier things are, the more attentionally salient they are. The bolder the claims, the more attentionally salient.

Warzel: It also moves so fast, like the speed of it. I saw someone the other day saying, “We don’t talk about Claude Code anymore, because we talk about Codex.” Like Claude Code was last week’s thing. It is no longer relevant to the conversation. And I’m like: if you’re moving at the speed of “everyone needs to be paying attention to our conversation,” that’s one thing. If you’re moving at the speed of “if you weren’t paying attention on a daily, weekly basis, you don’t belong in this conversation anymore,” it almost feels to me like it’s supposed to be a little bit exclusionary, in that way.

Hayes: And that’s fine too. I mean, specialist discourse is a thing you find in all sorts of domains and realms. And, you know, I don’t even begrudge that. One of the things I’m trying to do is just open up that conversation to people outside it. And again, I’m trying to do it not in the hothouse; very intentionally coming to it from the outside. Because there’s no shortage of people who are making AI and doing AI, who are all doing each other’s podcasts like constantly. It’s just this entire discursive engine that’s churning out, you know, code and content and models.

I’m trying to find a way in as a kind of Virgil-figure guide for people that are outside of that. Because it is alienating. I got to say—you know, it really is. It has a very cultish, hothouse, you know, true-believer feel in there.

Warzel: And in the same way too, like I can see it radicalizing people. And again, that’s why I think the politics thing is so salient.

Hayes: Yes.

Warzel: Okay, so I love this idea. You’re coming at it from the outside. And since you’ve been doing this, and you’ve had this experience, I would love if you could kind of give me the optimist take and the skeptic take as you see it.

Hayes: So one concept I’ve been playing with is: Let’s think about a normal distribution of outcomes. A probability curve, a bell curve. I think a lot of the discourse ends up focusing on like tail outcomes, as opposed to like the center of the bell curve.

And I think partly that’s because that center has moved so quickly that people are pulled towards the tails. And I think partly it’s because we’re inheritors of an entire mythic superstructure that, again, I think the AI people think is nonsense, liberal-arts-major craziness, but is clearly structuring the way everyone thinks about this. The myths of the Golem and Frankenstein are obviously massively influential in the narrative structure people are imposing on this.

Warzel: Yes, and the Terminator.

Hayes: Yes, and HAL. Again, there’s a reason that our technology apes science fiction, and it’s not because science fiction was so prescient. It’s because it’s literally the thing that was consumed by the folks who are making the technology. Like, that’s why it happens. It’s not like, Wow, how did they predict that? It’s like, No, there was no prophecy. We got those messages about what technology should look like. Everyone grew up with it, and then they made the thing, right? So I keep coming back to this idea of: Let’s think about it as a normal technology.

Warzel: Right.

Hayes: Like, what does it mean for it to be a normal technology? And what that means is like, okay: automobile, personal computer, the internet, cell phones, radio, television, the telegraph, electrification. These are all normal technologies—massively revolutionary, enormous consequences. Like huge, huge costs and huge, huge benefits.

But fundamentally, human life went on. Like it wasn’t the end of us. And so I think that’s my way into it. And so in some ways I think that’s kind of the optimistic take, right? Is that it’s a normal technology, with dislocations, costs, and benefits that we can reason together around. And try to find ways to distribute the benefits broadly and mitigate the costs.

The pessimistic take I have is similar to the way that, say, industrialization functioned. Right? Which required a sort of creation of wage labor and a concentration of capital, and a kind of extractive relationship from the beginning. That there is an inherent sort of pro-capital bias to the technology, and that it basically becomes a tool for accelerating the concentration of wealth and power in smaller and smaller hands. And I think there’s a lot of reasons to think that’s the case, unfortunately.

Warzel: I love pairing the normal version of this also with it being almost, in some ways, less revolutionary, you know, than put forward. The thing that sticks in my head, as like the “keeps me up at night” part of this is not, you know, the paper-clip maximizer, “we’re all going to die” human-extinction theory. And it’s so much more like: Look at the money that is being invested into this thing. Sort of an unfathomable, unprecedentedly quick amount of spend into the infrastructure and backing of this technology. All those people expect a return on an investment at a level sort of never before seen. Which would then mean it works really well, which would then mean probably a lot of economic displacement. In a way that we have no way of dealing with in the short term. And it’s like: That’s the thing that scares me.

Hayes: There’s two options, right? I mean, a lot of people put it this way, so this is not a unique insight of mine at all. But like—there’s the success, which is all of that investment is rational and is producing a technology that is paying for itself with productivity gains. In which case, if that’s the case, it’s a dislocation unlike anything we’ve seen. Or, it’s irrational—and there’s an enormous bubble that goes bust. And that has enormous financial consequences that leak out into the real economy and end up hurting a lot of people who had nothing to do with AI. It’s probably one of the two.

And the railroad example. Everyone keeps coming back to it, but I do think it’s useful. I didn’t know this until I was going back into it—that the railroad was both transformative and also an insane gold rush of overinvestment and too much capital that ended up going bust multiple times in the last few decades of the 19th century. Producing some of the worst cataclysms—including a depression in 1893—that the U.S. economy had ever gone through.

We didn’t have a central bank. Thank you, Andrew Jackson. There was no FDR figure. It was just like, whoopsies, now we’ve got a depression, everyone’s out on the street, and your family might be starving because we devoted too much capital to the railroads.

Which, again, what I think is useful about that example is that doesn’t mean the railroad was bullshit. It turns out the railroad actually was a pretty transformative technology. It can be the case that a transformative technology is also the subject of an irrational bubble and overinvestment.

Warzel: I also think—to add another layer of confusion to all this—there is the fact that the markets are behaving so extremely irrationally right now. Narrative may be more important than actual reality.

Hayes: Yeah, and the other thing I would say about the way the markets are acting—there’s a lot to say about that. I think there’s really, really incredibly smart, sophisticated people who are making bets that are totally defensible bets, okay? But A, it’s been 18 years since people got wiped out. And there’s a whole group of people who are working in this world who never came into the office to watch everything go boom. And let me tell you, that changes people. Like, it just does.

Again, these are all human beings doing this. These are human beings, subject to culture and groupthink. And I know this personally from people. Coming into work every day and watching your portfolio just absolutely get annihilated day after day while everyone’s getting annihilated is a generational experience. One that a lot of these people haven’t had in a long time.

Warzel: Yeah; we can’t keep anything in our heads for more than like four seconds.

Hayes: Exactly.

Warzel: Like, we’re not even talking about the assassination attempt on the president that happened like two weeks ago.

Hayes: Even when you just said that phrase, I was like, what? The what?

Warzel: Yeah, when did that happen, huh?

Hayes: So I think that’s part of it. But the other thing I would say is there really is a lot of radical uncertainty, you know? Everyone’s kind of making these bets about a future that really is quite unclear. Since we came out of the caves, people have wanted to know the future. And you can’t know the future. That’s the fundamental human condition.

You can consult the Oracle of Delphi, you can look at Polymarket and Kalshi, you can subscribe to Nate Silver’s Substack. None of it will get you the thing you want, which is knowing the future. And everyone is making bets on the future, but the future is unknown.

Warzel: So part of what I hear you grasping for in these pods—and part of what I think we’re all grasping for—is not just the unknown part of the future, but also trying to calibrate how powerful this technology is, right? And you had an episode with Alison Gopnik, the cognitive psychologist and philosopher, that I thought was really illuminating, because it explores human intelligence and the ways that large language models work very differently than, say, human minds.

You’re hanging out with someone at a bar; they’re asking you, like, “Are these models, you know, alive? Are they reasoning or whatever?” How do you think about human intelligence versus what’s being marketed as artificial intelligence? And how are you talking to people about that?

Hayes: That’s a great question. I guess my feeling about it is: It’s built extremely differently than human intelligence, largely because of the sort of experiential stuff that Alison talks about, the way a child learns. But it may turn out that at a sufficient level of computation and sophistication—enough power and enough weights and enough complexity—things converge on each other, is kind of the way I think about it. And one of the things I’ve found useful is: we just do a lot of patterned behavior ourselves. And I think if you take a step back to think about that, it’s actually really illuminating that a big part of what we’re doing is stimulus response off of pattern triggers.

And that doesn’t mean we’re not conscious, and it doesn’t mean we don’t have free will. And there’s a bunch of interesting philosophical questions. But I always start with people when literally they’re like, What is this thing? I’m like: You know how you write, when someone invites you to a party, “I’d love to, but I can’t …” And the last two words are “make it.” Like, we all know that. Why do we all know that? Well, that’s just a pattern response. And you can train a computer to figure that pattern out using a bunch of weights on words. And then you start building out from there. There’s a fair amount of human behavior, right, that’s working off of that kind of computation.

Warzel: Totally. People are always trying to, if not actually impress … it’s trying to relate to somebody in a way that makes sense, right? Which is taking all the cues, all the things you’ve known. Trying to make yourself intelligible to another person is, in some ways, your brain saying, “What comes next?”

Hayes: Yes; what comes next?

Warzel: What is the most normal, rational, smart, funny, provocative next phrase? Right? Based off of everything that I know. And so, I do like that a lot. What I found comforting is this idea that: Yes, we can throw as much compute, as many weights, and as much pretraining data at it as we want. And it gets a little bit better or maybe a lot better, right? It starts to do something emergent that feels powerful. And yet, at the same time, the fundamental human thing is, like, being a sack of meat walking through the world, getting sunburned…

Hayes: Yes.

Warzel: You know, seeing a baby cry. Doing whatever, right? And that feeling. Like, there’s nothing in the pretraining or the training or the whatever that can actually get to the physical, “you have senses” feeling that impacts reasoning—more than I think a lot of people are talking about or thinking about.

Hayes: Totally, yes. I think Yann LeCun has this big point about this, right. Like: Unless it’s got human vision and human touch and human smell, just even at a kind of empirical level, the amount of data you’re giving it is nothing compared to the amount of data that a human gets through their senses, right? You could read the whole internet. It doesn’t touch like a year in the life of a two-year-old, right?

Warzel: Right.

Hayes: But what I think is interesting about that: to me, it’s a double-edged sword. And this comes out in the conversations I have with Michael Pollan and David Chalmers about consciousness. This whole moment has made me more humanist in some ways. But it’s also like, if we keep pushing on that, right? If the big difference is that it’s not the embodied meat sack that we are, and it doesn’t have the senses—it’s also not clear to me that’s a theoretical limit, as opposed to just a limit right now.

Warzel: Say more about that; sorry.

Hayes: Well, I don’t know. You build the robot that’s got vision and some version of hearing and some sort of olfactory sense; you build the sensors, you give it a model five years from now, and you unleash it on the world. Like, things that I thought would only be “us” have fallen in very quick order—which is both scary but also, I think, creates a little humility about what that thing is, you know.

I mean, you already see it in the way people move the goalposts on AI, particularly when they’re skeptical of it. Like, Well, it can’t do this—and then it does it. Well, can’t do this. It’s like, Well, it can’t love. It’s like, Okay, well, it can’t love. But it just went through my entire inbox and told me I should respond to these three emails. And it was right about that.

And six months ago, it couldn’t have done that. So like … I don’t know if it’s gonna be a little love in six months.

Warzel: Well, one thing I think about with this, as a positive offshoot that sort of bridges the two gaps: what I love about the consciousness research and all of that is not, “Is the AI conscious? Let’s prove it. Let’s give Anthropic another thing to stand on.” Or something. I don’t really care about its corporate utility. What is awesome to me is when scientists are like, “We’re starting to ask different questions than we wanted to ask in 2019, because we don’t understand how these things are doing this. We also don’t know how this thing is doing it. So now we get to do a different course of study.”

Hayes: Yes.

Warzel: And I think to the extent that this revolution could impact other areas. Of like, Let’s figure out how this stuff works. We actually know nothing about consciousness. You know, that to me is like really inspiring and interesting and cool.

Hayes: And I agree. And that’s at the expert level. But to me, at the folk level, at the level of just ordinary people, it’s like … one of the reasons I did the podcast is my background as an undergraduate in philosophy. I’ve always loved philosophy. It prompts a question of like: What makes us human? What is special about us? What is distinct about being human? Why is that important?

And I actually think we’re in a moment in our politics where, you know, it is so much trench warfare. It is such an intense national emergency—an emergency I feel very deeply, obviously, and spend most of my waking moments devoted to—that we rarely get to figure out these sort of grounding, foundational values. Like, it’s scary to think about a challenge from machines to my own personhood.

But it also is doing something useful and important, which is for all of us to think about: What do we have in common? What does make us human? What do we care about? Where do our values come from? And why should you treat people with kindness? Again, the glory and the difficulty of philosophy is: Ask very, very basic and simple questions that have incredibly elusive and difficult answers. And we don’t have a lot of opportunity in our normal navigation of the world and the news cycle to have those. But if anything prompts it, it’s this.

Warzel: Yeah; I feel that so much. And I also feel the opposite part of that, which is like: I don’t really want like Elon Musk and Sam Altman being the people who are forcing me to answer these questions. Or like Mark Zuckerberg. What I’m still grappling with in my day job—and I think we’re all societally grappling with—is all of the stuff that they built and the mess that has been left in the wake. And it’s like, Okay, no, we’re going to … it’s almost, there’s a toddler element to it. Where it’s just like, “We’re moving to the other room. We’re gonna make more Legos, and it’s gonna be another mess.” You’re like, “But you gotta put away the mess. Aw.”

Hayes: The Tom and Daisy line from The Great Gatsby, which is now one of those kinds of clichés, you know, like “the party told you to disbelieve the evidence of your eyes.” That’s just everywhere on the internet. But, you know: “They were careless people.” And they broke things and left other people to clean up the mess. I mean, it’s just so true. I saw a clip of the All-In dudes, you know, talking about, you know, R-word maxxxing, very self-satisfied with their, like, transgressiveness. And like, just do stuff, and try. It’s like—yeah, well, you know, things have worked out for you. But the person that used to handle baggage for Spirit Airlines? Like, the “just do stuff, like, let’s start a war with Iran” didn’t work out for that person.

Warzel: Right.

Hayes: Nor did it work out for the families of the children that were killed on the first day. So certain people in certain positions can just try stuff and have a boundless risk appetite and never experience the downsides.

And what happens is other people experience the downsides. And this has been true forever. If you read the history of the wars of Europe—of who’s starting the wars and who’s dying in the wars, you know—that’s that sort of “ever has it been thus.” But it really feels that way right now in a way that I find kind of intolerable, actually. And pretty, pretty radicalizing.

Warzel: I want to talk about what it’s actually doing to us right now. Also in relation to this: you wrote this wonderful book on attention and the attention economy. One of the salient parts in it is about our own capacity for boredom—the ways that, you know, all of the internet, all these tools are playing with our minds, and how that capacity for boredom, that sitting with your thoughts, has been eradicated from most kids. Most adults, even.

And as I was thinking about this, and how to frame it to you, I came across a Wired piece that came out about this survey from Carnegie Mellon, MIT, Oxford, UCLA. Top-line takeaway—it’s really good for a headline—is using AI chatbots for even just 10 minutes may have a shockingly negative impact on your ability to think and problem solve.

Hayes: I saw this.

Warzel: I’m wondering how you are thinking about this, also coming off of the social media. You know, the deepening and fracturing of our attention. Our ability to attend to other things and ourselves. And then it being turbocharged by AI on this scale in ways that we can’t even grasp. What’s the sirens’-call analysis of this, of just like AI use right now?

Hayes: My first cut at the answer is that it’s obviously going to make people dumber. I mean, I just think that’s clear. And here’s a distinction that I have made in my own AI usage, which I think is an analog to a distinction I talk about in The Sirens’ Call about how people use technology. I put texting with your friends in a totally different category than scrolling vertical video. Texting with your friends is using the medium to do something that’s, like, a human thing. I used to talk to my friends on the phone every night. Vertical video is doing some other thing, algorithmically, to your attention sensors. And yes, it’s both on the screen—but to me, they’re actually quite distinct. And I feel this way strongly about AI in this respect.

People do really like the idea of AI doing the thinking. Brainstorm ideas, generate. I never use it for that. I don’t want it doing that. I use it a lot to be like, Can you help me find this stuff? Can you go through my emails? Like this distinction to me between like generative and creative thinking, and like basically what I would call grunt work. I really think there’s a distinction between that.

And I think one of the weird things also about the way AI got marketed from the beginning was like: We’re going to do all the thinking now. We’re going to do all the generation. We’re going to make the paintings. And then it’s like, well, what am I going to do? I want to do that stuff. I want to come up with the title for my show. I don’t want AI to come up with it; I want to do that. I do want AI to, like, tell me which emails I missed that I need to reply to.

One of the things they talk about in that paper, which I was reading earlier today, is like brainstorming. You know, I just think, man—that is a dangerous, dangerous road to go down. Because if you’re having it do your brainstorming, your brainstorming muscles are going to get weaker. And my livelihood, my career is coming up with stuff. I gotta keep that. I gotta keep that sharp. Now, maybe in five years, they’ll just have an AI do my show. And the AI will generate all the takes, and the AI will talk, and I’ll be out of a job. Fine. But until that happens, I don’t want the AI doing that.

Warzel: What’s so frustrating is there’s always this … first you have the reality that the new thing that’s supposed to make you more productive is just going to free up time for more work, right? Like, “email kills the memo”; no it doesn’t. “Slack kills email”; no it doesn’t. Whatever. All these work-productivity tools just—I think it’s called Parkinson’s law—fill the time.

Anyway, you had this conversation in the first episode of the podcast where you’re talking about writing. And you posit, like, maybe writing is similar to being handy, right? When everything was analog, and you had to know how to fix your sink, your lawnmower, your car, whatever. A lot of people had those skills. Things became less analog, harder to fix. It was easier to get things fixed; fewer people are handy in that way that my grandfather was handy. Maybe that’s like writing, and that technology. And it’s absolutely terrifying, because writing—it’s not just a skill. And I don’t say this just because I’m a writer. It’s the same as brainstorming.

Hayes: Yes.

Warzel: The creative act, whatever it is, those constraints of the mind and thinking and creativity are just how you do everything else. Like, it’s the building block for whatever. It doesn’t have to be writing. And I just think, if you get rid of that, like that’s the paper-clip maximizer to me. That’s the society. That’s like, Okay, we don’t have anything to do here anymore. Like we’re doing WALL-E now, you know?

Hayes: Yes. I mean, I feel the same way. And I think this sort of question—of like, “Are humans even necessary?”—at a certain point is the question you end up sort of barreling toward, in a world of AI-increased automation and declining birth rates. It’s like, what do you need us around for anymore?

The handy analog, I got a lot of feedback to that. Because I think it’s sort of a useful one. The reason that I said it to myself was you know—the context, in that conversation Derek is like, “I’m a writer.” Like, I have encountered handy people in my life who have remarked, as I mentioned something that I’d hired someone to do, a little like, “Why do that? You can do that. You can figure that out.” And it’s like, “Yeah, I probably could. But you know, life’s short. And I’ve devoted myself to a bunch of other skills.”

And maybe I’m just precious about writing, because that’s what I do. But I totally agree that it’s in “writing is thinking; brainstorming is thinking.” And I think, you know, it’s interesting to come back. You’re talking about programmers; Anil Dash made this point that I thought was really useful. It’s actually surfaced in conversations I’ve had with my good friends who are software engineers. You know, we all have in our jobs the stuff that’s like the fun stuff and the drudgery. All of us have some version of that, right? And the reason programmers are so insane about Claude Code is because it’s doing the drudgery, and they’re doing the fun stuff.

Like in the world of programming, it perfectly does—partly, I think, because the models are built by coders. Again, to come back to the embodied sociological reality of this technology, which was not just like handed down from heaven.

Warzel: Right.

Hayes: You know, my friend even said this to me. He said, “A session of work used to be like, fun stuff, drudgery, fun stuff, drudgery, fun stuff, drudgery. You’d sort of be toggling back and forth. It’s like for you and me, it’s like if you’re doing footnotes. It’s now just like: fun stuff, fun stuff, fun stuff, fun stuff, fun stuff. And then in parallel sequences, someone’s just doing the drudgery. And it’s awesome. I’m more productive. It’s better.”

And it’s like, when you describe it to me that way, that does sound awesome. But again, the question is how it’s gonna be deployed. Is how people adopt it. When we talk about fun stuff—we have fun jobs, we won the lottery doing intellectually stimulating work. A lot of people don’t, and a lot of people also aren’t lucky enough to have been introduced to thinking or writing in these endeavors in a way that is fun and is creative or does make them feel good about themselves. It is drudgery to them. And so you do really worry about this kind of mass automation, and then this just mass atrophying of people’s brains. Similar to what we saw happen to people’s health and fitness in the dawn of like modern “supersize-me” capitalism.

Warzel: Yeah. Well, and also like Ethan Mollick, I think, has the phrase “the jagged frontier” of AI, right? That this is like what you’re saying with coders—like seeing this unbelievable, How could you not; how could this tool not be handed down from heaven, because of look what it does for me? Right? And look how transformative it is. Versus somebody who paints all day and is like, What are we doing? This is just stealing my paintings, and letting people make paintings and not pay me for the paintings. Like, what have we done?

Hayes: Yeah, exactly right.

Warzel: And there’s part of that. But I do think in that sense, if the people on the frontier part of it, that have achieved that start making the decisions. And this is why the discourse on X and these types of things freak me out sometimes. It’s like: You get to make the decisions about how the rest of the people, whose lives and careers and jobs haven’t caught up, they get to make those decisions. And then, before you know it, the people who really can’t need to object. Who are like, “No, no, no, no; this is actually the foundation of creativity. We need to maintain this.” It’s like, “No, it’s just not economically viable anymore.” Or “No, you can’t have a job that way.” And that to me is a scary proposition.

Hayes: Yeah. And when I think about the sort of backlash to it, and I think about the resistance to it, like—I don’t want the backlash and the resistance to go away. Even though it’s interesting. Because at one level, I want to sort of say to people that are on the political left, or sort of share my values, like, “This actually is an incredibly meaningful and transformational technology. It actually does have really clear use cases.”

There are ways in which it might make the world better, like 100 percent. Also, you don’t have to use it, and you don’t have to swallow the Kool-Aid. If you want to go to protest the local data center, good on you. I sort of feel both those things.

But once I think people grapple with the reality of it, I think there has to be a kind of productive synthesis between how different people are encountering exactly that jagged frontier. Like, why should a painter be psyched about this?

Warzel: This is a great way to segue into the politics of this. Because, as you said, and as I believe, AI is on a collision course with electoral politics that I think is going to be very meaningful. NBC News did this survey that’s so staggering, I thought: 26 percent of voters say they view AI positively, 46 percent negatively. AI ranks less favorably than U.S. Immigration and Customs Enforcement, Donald Trump, Kamala Harris, the Republican Party. It feels pretty notable to me that it’s just incredibly unpopular, or people have these strong skeptical feelings. That grassroots opposition to data centers is really kicking up, and it feels like it’s gonna be a thing.

Tell me how you see the battle lines that are being drawn here. You know, as we approach the midterms, but also as we approach ’28.

Hayes: Yeah, again—I don’t have anything more prophetic than the observations that we’re all dealing with. I mean, obviously there’s a growing backlash. I think it’s a rational assumption on the part of most people that a thing being built by billionaires who say, every time they’re in front of a microphone, “It’s going to put millions of people out of work” is not going to be great for most people. Which is not to say, like, sometimes masses of people are wrong. Sometimes the majority is wrong about stuff. And sometimes backlashes are built out of nothing. I’ve seen it happen. But this, to me, I think people have good reason to be fearful and to be skeptical and wary.

One of the things that I think is inescapable about the technology is how concentrated it is. Right? So take a step back for a second. It requires enormous, nearly unprecedented amounts of capital to be invested to deploy nearly unlimited amounts of compute. Now compare that to the structural nature of the internet, which was created in a noncommercial environment in which there was zero profit motive. In which it was created purely for continuity-of-government purposes, at first, and then communication between research agencies, and whose entire guiding, structural-engineering philosophy was distribution and nonconcentration.

And now compare that to a technology whose entire inherent engineering philosophy is “as much power in as concentrated hands as possible.” That’s a challenge, man.

Dario [Amodei] said this the other day on a podcast, about thinking about it the way cloud computing is. This is naturally a concentrated industry. And it’s naturally concentrated in companies that are going to be some of the most powerful that exist.

What will be the levers, under those conditions, that produce benefits for ordinary people—so people don’t get steamrolled? So I think that’s the way I think about it in terms of political economy. Now, the question is: Well, how do you operationalize that? It’s really complex. And I’m going to just admit that I don’t know. I don’t know the right answer. What’s the right regulatory framework? The two examples I keep thinking of are the Fed and the FDA, right? Which is like—

Warzel: Oh, interesting.

Hayes: We don’t want Congress voting on every drug to come to market, and we don’t want Congress and the president deciding interest rates. It’s just the way that I don’t think we should be—“the president gets to decide what models get released,” right?

There are areas that we’ve had to build institutions in the modern regulatory state to deal with a very technically complex area that we still want democratic control, in which we create mediating institutions. Which, by the way, are under attack from both this conservative Supreme Court and the Trump administration.

Like precisely this kind of institution: the FTC, the FCC, the CFTC, the SEC, the Fed, the FDA, right? All of these are created for the same purpose. There’s some really important technical, powerful, high-risk, high-possible-cost, high-reward area of activity the modern state needs democratic control and accountability of. But you do not want to just have, like, plebiscites on, or Congress voting on. You’ve got to create this kind of mediated, technocratic space that’s sort of halfway in between.

Warzel: It seems like the opposition to this is relatively bipartisan in an interesting way, with different quirks and whatevers. I’m just curious, like, do you see anyone, party wise, in a better position here? Does anyone seem poised to deal with this in a way that’s going to be fascinating, or is it a jump ball?

Hayes: You know, right now it’s a jump ball. This is the word, you know. Ben Collins—The Onion, my former colleague—was talking about a jump ball the other day. And, you know, [Ron] DeSantis had some populist rhetoric about it the other day. Bernie Sanders has been doing some interesting stuff. One of the things he’s calling for is a national data-center moratorium. I’m not personally sure if that’s the right policy. But it’s like, at this point, people are trying stuff.

Warzel: Somebody doing something; yeah.

Hayes: Yeah. I think it’s totally a jump ball. And I also think it’s one of those places where the distance between the power elite—the richest people in the world, the most powerful people in the world—and the rank and file is so enormous. And you’re going to see this crazy cross pressure, because these politicians are in rooms with their donors and at fancy conferences with all the people making the models, all the stuff, or telling them all this stuff about it.

And then they’re going to go back home and get yelled at about electricity prices and data centers. And they’re going to be totally pulled in two different directions. So the political economy of this is really, really interesting. And I think it’s going to be very interesting to see who navigates that, and how, and which parts of those win out.

Warzel: I think Silicon Valley is in for a really interesting reckoning with this, in the sense, too, that the people who are really into this technology are also a lot of the people who were like, Get your hands off my social-media moderation. Like, “I don’t like the free speech, whatever” thing. And yet what they’re doing is building, like, three companies that have taken all of the world’s information and just consolidated it, and put it behind a bunch of opaque, unknown weights and references, to then spit out convincing, canonical answers about every fact ever. And it’s like, dude, it’s coming for you.

Hayes: Here’s the frontier, man. Here’s the frontier. How are people going to go find out about politics and the candidates? And what answer is that model going to spit out? And who is going to be in the wiring of that model? À la Elon Musk and the weird thing that he did to the model around white South Africans that made Grok talk about the Boer War.

Warzel: I mean, you know, it’s real cynical of you, Chris, to go after the guy who is responsible for the Mecha Hitler stuff. To suggest that there would be any weird nefarious meddling here, okay? I think that’s real disingenuous of you.

Hayes: I mean, I should say, for the purposes of journalistic rigor, I can’t prove that he did that. Something weird happened to the Grok model. He owns a company whose model started spitting out, reliably, answers that aligned with his politics. But the reason I say this is: This is replacing what for you and me was Google, right? And it’s going to be the portal for information.

Warzel: It’s true. He just owns the company. That’s all. He owns the company. That’s it.

Hayes: And imagine the power of being able to control what that information was. About who’s running for office, and what they stand for, and who you should vote for. That to me is a frontier that we haven’t breached yet. And again, I think the models are so emergent at this point that I don’t think anyone’s doing anything along those lines, like in the wiring of the models. But that’s a real possibility.

Warzel: And it’s also the kind of ominous note that we like to end Galaxy Brain on, so that people can go and shift from whatever they’re watching or listening to this. To just staring into the middle distance.

Hayes: Yeah, to give you The Bad Feeling, exactly.

Warzel: Yeah, we’re back at The Bad Feeling.

Hayes: Yeah. The somatic response.

Warzel: Amazing. Chris, thank you so much for coming on Galaxy Brain. I appreciate it so much.

Hayes: I really, really enjoyed it.

[Music]

Warzel: That’s it for us here. Thank you again to my guest, Chris Hayes. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe on The Atlantic’s YouTube channel, or on Apple or Spotify or wherever it is that you get your podcasts. And if you want to support this work and the work of my fellow colleagues, you can subscribe to the publication at TheAtlantic.com/Listener. That’s TheAtlantic.com/Listener. Thanks so much, and I’ll see you on the internet.

This episode of Galaxy Brain was produced by Renee Klahr and engineered by Miguel Carrascal. Our theme is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.

The post What’s the AI Endgame? appeared first on The Atlantic.
