Subscribe here: Apple Podcasts | Spotify | YouTube
In this episode of Galaxy Brain, Charlie Warzel discusses the nightmare playing out on Elon Musk’s X: Grok, the platform’s embedded AI chatbot, is being used to generate and spread nonconsensual sexualized images—often through “undressing” prompts that turn harassment into a viral game. Warzel describes how what once lived on the internet’s fringes has been supercharged by X’s distribution machine. He explains how the silence and lack of urgency isn’t just another content-moderation failure; it’s a breakdown of basic human decency, a moment that signals what happens when platforms choose chaos over stewardship.
Then Charlie is joined by Mike Masnick, Alex Komoroske, and Zoe Weinberg to discuss a vision for a positive future of the internet. The trio helped write the “Resonant Computing Manifesto,” a framework for building technology that leaves people feeling nourished rather than hollow. They discuss how to combat engagement-maximizing products that hijack attention, erode agency, and creep people out through surveillance and manipulation. The conversation is both a diagnosis and a call to action: Stop only defending against the worst futures, and start articulating, designing, and demanding the kinds of digital spaces that make us more human.
The following is a transcript of the episode:
Alex Komoroske: AI should not be your friend. If you think that AI is your friend, you are on the wrong track. AI should be a tool. It should be an extension of your agency. The fact that the first manifestation of large language models in a product happens to be a chatbot that pretends to be a human … it’s like the aliens in Contact who, you know, present themselves as her grandparents or whatever, so that she can make sense of it.
It’s like—it’s just a weird thing. Perfect crime. I think we’re going to look back on it and think of chatbots as an embarrassing party trick.
Charlie Warzel: Welcome to Galaxy Brain. I’m Charlie Warzel. Initially, I wanted to start something for the new year where I would just talk about some things that I’ve been paying attention to every week, and give a bullet-pointed list of stuff that I think you should pay attention to—stuff I’m covering, reporting on, et cetera—before we get into our conversation today. But today, I really only have one thing, and it has been top of mind for a little less than a week. And it is something that I can’t stop thinking about and that frankly I find extremely disturbing. And I’m mad about it, honestly. To ditch the sober-journalist part, it’s infuriating. And this is what’s going on on Elon Musk’s X app.
I don’t know if you’ve heard about this, but Elon Musk’s AI chatbot, Grok, has been used to create just a slew of nonconsensual sexualized images of people, including people who look to be minors. This has been called a, quote, “mass-undressing spree.” And essentially what has happened is: A couple of weeks ago, some content creators who create adult content on places like OnlyFans used Grok’s app, which is infused inside of the X platform. You can just @Grok and ask it to, prompt it to do something. And the chatbot will, you know, generate whatever. It will make a meme for you, a photo, it will translate text. It will, you know, basically do anything like a normal chatbot would do, but it’s inside of X’s app. And, so some of these content creators said, Put me in a bikini. They were asking for this, and Grok did it. And a bunch of trolls essentially took notice of this and then started prompting Grok to put tons of different people in these compromising situations. On communities and different forums across the internet, people are trying to game the chatbot to try to get it to push the boundaries further and further and further. They’re prompting it to do things like edit an image of a woman to, quote, “Show a cellophane bikini with white donut glaze.” Really absolutely horrific and disgusting things that are these workarounds to get it to create sexualized images.
This has been happening for a long time online. There’s always been, since these AI tools have come out, problems with nonconsensual imagery being generated. There are lots of so-called “nudify” apps, right, that take regular dressed photos of people and undress them. And there are communities that share these as revenge porn and use them to harass and intimidate women and all kinds of vulnerable people. And this has been a problem.
People are trying to figure out the right ways to put guardrails up to stop this—to make sure that these communities get shut down, that they don’t continue to prompt these bots to do this, trying to get these tools to stop doing this. And a lot of this has been happening in these small backwater parts of the internet, and it does bubble up to the surface. But what’s changed here with X and Grok is that Grok is, as I said earlier: It’s baked into the platform. And so what has essentially happened is that X—xAI, Elon Musk—they have created a distribution method, and linked it with a creation method, and basically allowed for the viral distribution of these nonconsensual sexual images. And it has become, in the way that it does in places like 4chan and other backwater parts of the internet, it’s become a meme in this community. And people have decided that they are going to intimidate people and generate these images out in public.
And so what you have is publications posting photos of celebrities, and then a bunch of people, you know, in the comments saying: “@Grok undress this person.” “@Grok, put them into bikini.” “@Grok, put them in a swastika bikini.” “@Grok, put them in a swastika bikini doing a Roman salute.” And then you have a photo of a celebrity, undressed without their consent, in a Nazi uniform, giving a Nazi salute.
This is stuff that I have seen all across the platform. Not going into strange backwater areas of it—just looking directly at it. So this is out there. Something I noticed earlier this week—we’re recording this on Wednesday—was there was a photo of the Swedish deputy prime minister at a podium, giving a talk. And a bunch of people were asking Grok, prompting Grok to put her in a bikini, et cetera.
X and the people who work there have issued a statement saying that they’re working on the guardrails for this system. This is against their community standards, and they will punish the people who are involved here. But that doesn’t really seem to be happening. Just yesterday I was looking around, and people who are asking Grok to put women in compromising photos have blue checks next to their name, which means they asked the company for a verified badge. Those people are still on the platform as of this time when I’m talking to you.
So I reached out to Nikita Bier on his personal email. He’s the head of product at X. And I asked, as a journalist, as a human: How can someone in good conscience work for a company that’s willing to tolerate this type of thing? Like, what’s the rationale? Who’s being served? How can you tolerate your product doing this? Do you imagine you’ll be able to get this under control with the appropriate guardrails? And if not, how can you sign your name to this stuff? How is this allowed to be in the world? They did not respond. They forwarded me to their comms lead, and I asked the same questions of them, and they never responded back to me. I have also asked Apple and Google similar questions. How can they allow an app like this on their app store? And they also have not gotten back to me.
The lack of response to this from the people who are the stewards of this platform, and the people who can exert pressure on this—including X employees or investors, or Elon Musk himself, who has made jokes about the @Grok bikini-photo stuff on the platform over the past week.
The lack of apologizing. The lack of urgency in trying to fix this. The lack of really seeming, from my perspective, to care about this, I think, feels a bit like crossing some kind of Rubicon. This is not a standard content-moderation issue. This is not a bunch of people trying to scold for something that is a part of some kind of ideology. This is basic human decency: That we shouldn’t have tools that can very easily create viral content of women and children being undressed against their will. Feels like the lowest possible bar, and yet the silence is—it just speaks volumes about what these platforms have become and what their stewards seem to think.
I would just ask of truly anyone who works at these platforms: How do you sleep at night with this? The silence from X, from employees there who we’ve tried to contact just to get some basic understanding of what they’re doing and how this can be allowed. And what’s happening on the platform, because the platform is not taking enough action to stop this, because it’s still allowing this undressing meme to go forward. What’s happened is: A culture has evolved here. And that culture is one of harassment and intimidation. And it feels like the people who are doing this know that no one’s going to stop them. They’re doing this out in the open. They’re doing it proudly. They’re doing it gleefully.
Something has to change here. I’ve been covering these platforms for 15-plus years, and I’ve watched different people in these platforms struggle with moderation issues in good faith, in bad faith. I’ve watched it devolve into this idea of politics and ideology. I’ve watched people pledge to do things, and then give up on those things.
It ebbs, it flows. The internet is chaos. I get it. But this is just different. This is a standard of human decency and social fabric and civic integrity that you can’t—you can’t punt on it. You either choose to have rules and order of some kind at a very base level, or you just—it does become full anarchy and chaos. And it seems to be that’s the direction where they want to go.
So if you work at X, if you’re an investor, if you’re somebody who can exert any influence in this situation, I would a) love to hear from you. And also I would ask: Is this okay? Is this what you want the legacy to be? Sorry for getting on a soapbox there, but I think it’s a massive, massive story. And one that, again—I think if this is allowed to just be the way that the internet is, then we lose something pretty fundamental.
So anyhow, it’s a tough way to segue there, but today’s conversation is actually the opposite of all of this. I do a lot of tech criticism. Do a lot of really sort of, you know, aggressive reporting, trying to hold tech companies to account. And that means looking at a lot of awful things and talking about a lot of awful things. But today’s podcast is about something great, something that’s actually hopeful that’s being built. It’s about a group of technologists who’ve come together with a different vision for the internet: a positive vision for the internet, something that they are trying to build that can sort of lead to positive outcomes and people living their best lives.
And so this project is called the “Resonant Computing Manifesto.” Basic top-line idea of it is that technology should bring out the best in humanity. It should be something that allows people to flourish. And they have five core principles here, that are essentially meant to combat the hyperscalers and extraction of what we know as the current algorithmic internet that we all live on.
And to talk about that, I’ve brought on Zoe Weinberg, Mike Masnick, Alex Komoroske. They are three of the writers of the “Resonant Computing Manifesto.” And I had them on to talk about why they came up with all this, and what, if anything, we can do to change the internet in 2026.
Warzel: All right: Zoe, Alex, Mike. Welcome to Galaxy Brain.
Mike Masnick: Thanks for having us.
Warzel: You all put forward something that I actually came across very recently. Often my timeline is a mess of the horrors of the world. The terrible things, the doomscroll. And this kind of stopped me in my tracks, because frankly, it wasn’t doomscrolly at all.
And when I clicked on it, I began to feel this very strange emotion I’m not used to feeling, which is hope. And/or, I agree. And I agree, and it doesn’t make me furious. And so what you guys have done in part, with a group of other people, is come up with something called the “Resonant Computing Manifesto.”
And it is based off of this idea of resonance. And when you guys put this out—and I want you guys to describe all of this—but when you put it out, you said that you were hoping this was going to be the beginning of a conversation. A process about getting people to realize technology should work for us, and not just for the people at the very top, the people behind [Donald] Trump on the inauguration dais, that sort of thing.
And so, in this world of mergers and acquisitions and also artificial intelligence and all that jazz, I wanted to start the conversation off with a definition of what resonant technology is and what it means. And I’ll bring that up to either all of you or one of you.
But what is resonant technology? What does it mean to you?
Alex Komoroske: So to me, resonant computing is about the difference between things that are hollow—that leave you feeling regret—and things that are resonant—that leave you feeling nourished. And they’re superficially very similar in the moment. And it’s not until afterwards, or until you think through it, or let it kind of diffuse through you, that you realize the difference between the two.
And I think that technology amplifies whatever you apply it to. And now with large language models that are taking what tech can do and making it go even further than before, it’s more important than ever before to make sure the stuff that we’re applying technology and computing to is resonant.
And I think we are so used to not having a word for this. And we can tell that something is off in slop or things that are just outrage bait or what have you on social networks, but we don’t know how to describe it. And just having a term for that: the kind of stuff that you like.
And then also the more that you think about it, the closer you look, the more you like it. Does that capture it?
Masnick: Yeah, pretty much. I mean, we spent a lot of time trying to come up with the term.
Komoroske: And you wanted something that was ownable, that was distinctive, that wasn’t just a thing that would fade into nothing.
Zoe Weinberg: There’s a lot of terms out there that now have a lot of baggage. Even something that sounds kind of innocuous—responsible tech—I think now comes laden for a lot of people with a bunch of associations or different movements of people, whether it’s corporate or grassroots or otherwise.
And so, you know, we were trying to move beyond that a little bit in the choice of the word resonance.
Warzel: Yeah. There is also like—there’s an onomatopoeia to it. There’s sort of, this is what it sounds like. You have resonance there. And also there is something a little bit, the word that comes to mind is almost monkish.
Like, a monastery type. There’s something that’s very, it’s … resonance is not, like, a capitalistic word. It is a word that signifies something much different to me. Like sort of sacred. You know?
Komoroske: Yeah. It’s balance, pureness. There’s something about it that feels very whole, maybe.
Warzel: And at the top of the manifesto, there’s this line that is sort of offset there. A pull quote, if you will. Says: “There’s a feeling you get in the presence of beautiful buildings, bustling courtyards. A sense that these spaces are inviting you to slow down, deepen your attention, and be a bit more human. What if our software could do the same?”
That was the thing that struck me there. When did you guys see a sort of architectural element to this? Like, an inspiration from things that we see and experience in meatspace, so to speak, in the world?
Komoroske: We had the word resonance—I think it actually came before we … So, I’m a big fan of Christopher Alexander. He lived a few blocks away from me. And, you know, a big fan of The Timeless Way of Building and a few other books.
And so we had various formulations of it, that try to key off of that frame or idea. I don’t think he ever calls it resonance in the book, in his actual book. But, you know, it’s a word that other people—maybe he might offer it as one of the potential names. He calls it aliveness and wholeness and other things.
But so, it was always in the mix of the kind of vibe that we were trying to capture. And then we decided to lean into resonance and introduce it via this architectural lens. And actually, that addition at the top was a late addition, because it starts off talking about resonance kind of indirectly and it pivots into this architectural frame.
And someone was like, What? I thought you were talking about technology. We said, Okay, let’s put a little teaser about the architectural connection up at the top, to help connect with the way the middle of it is going, so you don’t get confused.
Weinberg: I think there’s something also powerful about writing and thinking about software, which exists in a digital plane—that is, not a physical space—that feels like it’s kind of in the ether and a little bit untouchable. And then trying to ground that in a very human reality, which is in fact tied to place and space and where we spend time.
And maybe drawing some insights from those physical realities into the way in which we build digital spaces.
Komoroske: Christopher Alexander, when you read some of his work—we all know that feeling; we can all imagine the situations that we’ve been in, the environments where we feel that resonance. And there’s something very—I don’t think we ever think about it in the digital world. Because you have to be in it in the physical world. It’s impossible to ignore it when you’re in it.
And there’s a point there. Let’s—why don’t we ask the question: Why do digital experiences not feel the same way? They absolutely could. You know.
Weinberg: And I think, you know, what is the feng shui for software? That’s maybe a way of thinking about it. But I think that goes much deeper than UX and UI-design principles.
It’s much more about: What is the experience as a user, and as a human, interacting with a tool over repeated periods of time?
Warzel: Well, and I think too, a lot of—at least what I reach for in my work, a lot of which is critiques of, you know, big-tech platforms and such. A long time ago, I found the word architecture—the “architecture of these platforms”—to be extremely helpful in communicating some of this stuff.
I think there is a way for people who, you know, are just using these platforms to get from A to B. Or, you know, on the toilet at a moment of just, I’ve just gotta get away from the kids, or whatever it is. If you’re not thinking with the critical lens—which, there’s no judgment there—about these platforms, you might just sort of think this is a neutral thing. Or this is a thing that just does a thing, and, you know, whatever. And I think that, you know, architecture—this idea that there are designs, there is an intentionality to this algorithm or this layout or whatever choice that a platform has made that leads to these outcomes—that leads you to post more incendiary things, or whatnot. And I think that architecture there is so helpful to let people see like: no, no, no. In the same way that, you know, these arches are the way they are. This stained-glass window does this, to give this vibe. So is putting the “What are you thinking?” bar right here, or whatever. The poke icon wherever.
Komoroske: So I think that’s also where the connection to architecture is even stronger. I think, traditionally, architecture is, like, this designed-top-down cathedral. Like, the designer’s intent. And one of the things that Christopher Alexander later did was this bottoms-up, emergent view of: How is this space actually used and modified? How does it come alive?
And I think that’s one of the reasons architecture, in his sense, I think really nails it. Because a lot of these experiences—like, the people who built Facebook 10 years ago were trying to connect the world. That’s a prosocial outcome. It’s prosocial in the first order. The second-order implications, it turns out, are actually not prosocial.
And so you get these emergent characteristics that are not what anyone intended going in, necessarily. And still, and yet, they emerge out of the actual usage of how different people react off each other, and how the incentives kind of bounce off each other. And so I think architecture hits that emergent case too.
Warzel: Mm-hmm. So, Mike, I’ll throw this to you. How did this come about? What is the behind-the-scenes process here? I’ve heard, you know, “We’re using these words, and we’re taking ’em for a spin in the world for two weeks.” This does not sound like something that you guys wrote last weekend and put up on the thing.
There’s a lot of people behind it who aren’t on this call here. Or this podcast here, I should say, not a call. How did this come about?
Masnick: Yeah, I mean a lot of this is Alex, and so I’m curious about his version of this. But in my case: I mean, I met Alex about a year ago. Almost exactly a year ago at some event.
And we got to talking, and it was a good conversation. It was a resonant conversation, where I sort of came out of it saying, Oh wow, there are people thinking through these things and having interesting conversations. And then we kept talking and he said, “You know, I’ve been having this same conversation with a group of different people. And I thought I might just pull them all together, and we’ll get into a Signal chat, and we’ll have a Google Meet call every couple weeks. And we will try to figure out what do we all—we’re all having this feeling, what do we do about it?”
And then we did that for almost a year. I mean, it’s kind of incredible. And where we would just sort of be chatting in the group chat and occasionally having a call and sort of talking through these ideas, and working on it. And trying to figure out even what we were going to do with it.
Weinberg: I definitely think the manifesto emerged very organically.
Masnick: Yes.
Weinberg: To the point that I would say in the first couple months of us meeting, Charlie, like I was like: Okay. It’s really fun chit-chatting with these interesting people that Alex has brought together, but let’s get to brass tacks. Is this going anywhere? And I have to say, there was a part of me that wanted to end those calls being: Okay guys, what’s our agenda? Where are we going? What are the outputs? How are they met? Whatever. And I actually think, Alex, you did a really great job of kind of keeping people from jumping to that sort of action-item mode too early.
And so, from my perspective, we did not get together to write a manifesto. We got together to talk about these issues. And then, very naturally, you know, out of those conversations came a set of ideas and principles, and sort of theses. That then felt like we should put them out in the world.
Warzel: Did this feel like—the choice of the word manifesto and the choice to just do this—does this feel a little bit, too, like a response to we’re in a manifesto-heavy moment here? It feels like there are a lot. Whether we’re talking like the Marc Andreessens of the world or, if you pay taxes in San Francisco, you need to write a manifesto to get your garbage picked up or something.
But is this a response in the same way? Or is it meant to be seen as, in some senses, in dialogue with some of these other things that are coming out there?
Komoroske: I think to some degree, I don’t know, actually. I can’t remember how we ever discussed if it should be a manifesto.
We just knew that there should be something that we could point people at, that kind of distilled some of the conversations and ideas that we were having. And I think I’ve seen a bunch of manifestos in the tech industry, that sometimes I look at and go, Oh my God, is that the tech industry that I’m a part of?
That doesn’t seem at all like—that seems so cynical or so close-minded about the sort of broader humanistic impacts that technology might have. And so, I think the choice of doing something that other people have, you know … this manifesto was deliberately kind of humble. It says: We don’t have all the answers; just here’s a few questions that seem relevant to us.
That was a very important stylistic choice. Manifestos are not typically humble. But we aimed for that because we wanted to almost counter-position it against some of the ones that say, This is definitely the right way. And everyone should think about it this way.
Masnick: Yeah, I almost think I’ve been using that as a joke to other people. Where it’s, This is the most humble manifesto you’ll ever see.
Which is not something—you know, you don’t normally see those two words together. You don’t think of a manifesto as being humble. But, I mean, this was definitely a part of the conversation that we had. Which is: We want to be explicit that we don’t have all the answers, and that this is the start of a conversation. Not, you know, putting an exclamation point on a philosophy or something.
Weinberg: I do think, Charlie, you’re touching on something noteworthy here. Which is—and I’ll speak only for myself—but in the last couple years it has felt to me like the ideological landscape of the discussion in Silicon Valley has been really defined by these extremes.
And on one end, it’s like the accelerationist kind of techno-optimism way of seeing the world. And on the other side, on the other kind of far extreme, it is like existential and catastrophic risk and ways that, you know, we must prevent that. And I know a lot of people who don’t feel like they really belong in either of those camps, and actually don’t even really think that the optimist/pessimist spectrum is like the right way to think about it.
And so from my own perspective, part of what I have hoped that the “Resonant Computing Manifesto” will accomplish is, like, helping to establish some values and some north stars that are kind of on a different plane from that conversation. That also feels like there can be both. You can both be optimistic about the ways things might develop, and also concerned about the places we’ve come from. And that those things can coexist, and that is like the beauty and complexity of the technological moment we’re in.
Masnick: Yeah, totally. Because, you know, I had written something in response to Andreessen’s manifesto, and I never really thought of this as like a response.
Warzel: Is it the “build one” or the “techno-optimist” manifesto?
Masnick: There’s been many. Yeah, that’s true. Fair enough. But, you know, I’ve always considered myself, and I’ve been accused of being, a techno-optimist. Like, to a fault. And like, I am optimistic about technology. But to me, his manifesto really, you know, rubbed me the wrong way. Because I was like, This isn’t optimism. What he was presenting was not an optimistic viewpoint.
It was a very dystopian, very scary viewpoint. And so soon after it came out, I had written a response, like, “That’s not optimism that you’re talking about.” And if you really believe in this—this vision of like a good, better world from technology—then you should also be willing to recognize the challenges that come with that.
Because if you don’t acknowledge that, and don’t seek to—if we’re building these new technologies—understand what kinds of damages and harms they might create, then the end result is inevitably going to be worse. Because something terrible is going to happen. And then, you know, the politicians will come in and make everything else that you want to do impossible.
It’s just like: Think this through. Like a couple steps ahead.
Komoroske: And so technology is powerful. Like, we should be careful with that power, and we should use it for good. And I think it is incumbent—you know, it’s a good thing for people to do, to use technology for good. Like, you shouldn’t sit there and not use it.
You should use it, and you should be aware of the second-order implications and the third-order implications. And not say, “Well, who could have seen this inevitable outcome?” You know, so much in the tech industry is about optimizing. It’s about driving the number up, not necessarily thinking about second-order implications.
I, at some point, had somebody tell me, You know, anything that can’t be understood via computer science is either unknowable or unimportant. Which is an idea that, you know, pervades some parts of Silicon Valley. And I think this combination of the humanistic side and the technology side into a synthesis is where a lot of value for society is created. And you have to have them in balance. They have to be in conversation with each other.
Warzel: Well, that’s definitely speaking my language, for sure. That’s like Charlie bait right here. But I want to define a little of this. I want to actually define it, but first I want to define it via its opposite.
What’s the opposite of resonance here? How would you describe the current software dynamic? I’ll let anyone who wants to take that. But maybe all of you, honestly.
Komoroske: To me, I think most of the technology—the tech experience in the consumer world—is hollow. In that you wake up the next day and go, God, why did I do that?
Or you use the thing. To me, if you use a tool and then after you are sober, after you’ve sort of come down from it, because sometimes you’ll be really hopped up on the thing. So maybe a week later, or the next day, would you proudly recommend it to somebody you care about? And if not, then it’s probably not resonant.
And you know, at some point, somebody—I was having this debate with somebody at Meta many years ago—they said, Oh, Alex, despite what people say, our numbers are very clear. People love doomscrolling. It’s like, that’s not love. Right? Like, that’s a … what are you talking about?
So I think trying to just make number go up, and increase engagement or what have you, is what creates hollow experiences. And that tends to happen when you have hypercentralized, hyperscale products. One of the reasons that happens inevitably is if you have five hyperscale products that are all consumer products, all trying to get as many minutes of your waking day—there’s only so many waking minutes of people’s time in a given day. And so you naturally kind of have to marginally push. You know, try to figure out the thing that’s going to be more engaging than the other thing. And that emerges, I think, fundamentally when you have these hyperscale products—which is what emerges when you have massive centralization.
And all these things are of a piece, and lead to these hollow experiences. Yeah.
Masnick: I think there’s a concept that has come up a few times in the conversations, in the various meetings that we had. And I don’t remember if it originated from you, Alex, or from someone else. But like, the difference between what you want and what you want to want—which may take a second to think through, and then you begin to go, Oh, right.
Like, there is this belief within certain companies that revealed preference is law. “If people love doomscrolling, ’cause they keep doing it, then we’re just giving them what they want.” Like, shut up. Like, you know, anyone who complains about that is just wrong. But then, as Alex said, it leaves you feeling terrible.
You have a hangover from it later. Whereas, if there’s this intentionality—of like, No, this is what I really want; I get nourishment out of it; I get value out of it in a real way—that lives on. That stays with me; that lingers. That’s different. And there’s that intentionality. As opposed to like, the problem with Oh, people love to doomscroll.
It’s, yeah. Because you’re sort of manipulating people into it. And people feel that; they might not be able to explain it clearly. But like, it just feels like someone’s twisting the knobs behind the scenes, and I have no control over it. Right. And I think that feeling is what pervades; it’s the opposite of resonant computing.
Weinberg: I also think the opposite can be defined as any technology that’s ultimately undermining human agency. And so that can be things that are, you know, attention- and engagement-maximizing. And so it removes your agency in that sense. ’Cause you’re not actually able to express what you really want.
But also all the kind of micro ways in which we end up feeling deeply surveilled by the technology that we use. And I think all of us have probably had moments where we feel deeply creeped out by our tools. And I think, to me, that is the opposite of resonance also. So part of it’s about attention and engagement. And then part of it also is about, you know, having some individual autonomy in how you make decisions, where your data lives, who has access to it. And all of that we’ve tried to kind of embed into this piece.
Warzel: So you all write in the manifesto—and I’m going to quote you guys here, back to you at length. Hopefully it’s not cringey because it is written, you know, with a committee of people; I hate when people read my own stuff back to me.
But you all say: “For decades, technology has required standardized solutions to complex human problems. In order to scale software, you have to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander”—mentioned by you guys before—“has spent his career pushing back against. This is where AI provides a missing puzzle piece. Software can respond fluidly to the context and particularity of each human at scale. One size fits all is no longer a technological or economic necessity.”
This is the one part where I was tripped up while reading, and not in the “I am reflexively against AI” kind of way. But because personalization, I feel, in my own experience a lot of times, can be discordant with that idea of resonance.
I think personalization can be great. I think it’s actually, you know, underutilized or -realized in the tech space. But when I look around at the algorithmic world that we’re living in, sometimes it can feel like optimization. Which was, you know, the word there—like personalization and optimization comingle together.
Yeah. To become part of the problem and not the solution. So I was curious how you all would respond or think about that.
Komoroske: I agree with that. I think the key thing there is: What is the angle of the thing that is personalizing itself for you? Is the tool, like, trying to figure out how to fit exactly into the crevices of your brain? To get you to do something that is … you know, to click the ads or whatever?
Or does it feel like an outgrowth of your agency? Like, one way I talked about it is: Large language models can write infinite software. They can write little bits of software on demand, which has the potential to revolutionize what software can do for humanity. Today, software feels like a thing.
You go to the big-box store, and you pick which one of the three beige boxes, all of which suck, are you going to purchase. And instead, what if software felt like something that grew in your own personal garden? It was something that nourished you and felt like it aligned with your interest naturally and intrinsically, because it was an extension of your agency and intention?
And I think that kind of personalization—where it doesn’t feel like something else manipulating you, but it feels like something that is an extension of you and your agency and intention—I think is a very different kind of thing. We’re just not familiar with that kind, because it doesn’t exist currently.
Warzel: I was going to ask Alex just to push—not push back on it, but further follow up on that. Is there anything that exists like that, you think? A piece of software that feels garden-grown versus a big-box store?
Komoroske: The one that keeps me coming back in my history is like—I think looking back at the early days of the web, actually, is where you had a bunch of these interesting bottoms-up kind of things.
HyperCard is my favorite one from many, many, many years ago. Have you heard of HyperCard? It’s like this thing that allowed you to make little stacks of cards. And you could have images on them; you could click between them, and you could program them to be like slideshows. Or like stacks of different things. And interlink.
The original game Myst, that was really popular, was actually implemented as a HyperCard stack back in the day. And so HyperCard, to me, is an example of one of these tools that allows you a freeform thing, that allows you to create this situated, very personalized software. You could argue that spreadsheets also have this kind of dynamic, because it’s an open substrate that allows you to express lots of different logic and build up very complex worlds inside of itself.
It’s pretty intimidating, but it is something that gives you that kind of ability to create meaning and behavior, inside of that substrate.
Masnick: Yeah. The thing I’ll say, to that point—and you’re not the only one who has sort of stopped on that line. And a few people have called it out and raised questions about it. And I think it’s because the idea of personalization, to date, has generally really been optimization. And it’s been optimization for the company’s interest, as opposed to the user’s interest. I think the real personalization is when it’s directly in your interest—and it’s doing something for you and not the company.
In the end, it has to be the user who has the agency, who has the control. Who says, This is what I want; this is what I want to see. And having it match that.
Komoroske: Charlie, I’ve also made a bunch of little tools. You know, a bunch of—if you’re technical, you can build these little bespoke bits of software now that fit perfectly to your workflow with large language models.
And that’s the kind of thing that a few of us can see a glimpse of today, who are at the forefront and able to use Claude Code in the terminal to make these things. And I think in the not-too-distant future, large language models, put on the proper substrate, will allow basically everyone on Earth to have that same kind of experience, that feels like an extension of their agency.
And I think that’s what some of us are seeing. And that’s why it’s in that essay. And that people who haven’t seen that yet are like, Excuse me, what? Like, you know, because they haven’t experienced it yet, they can’t see what’s coming.
Weinberg: Yeah; I do think that that sentence itself in many ways is a little bit forward-looking. And so, as Alex said, there’s glimpses of it.
But I think the urgency and feeling like we needed to write about this is that it feels, I think to many of us, like the introduction of AI into all of our workflows gives us this kind of amazing opportunity. And crossroads. To either build along the lines of the paradigm of big tech and platforms and everything we’ve seen in the last, you know, couple decades—or we can try to shift into this new paradigm that is about personalization that, as Mike said, is not extrinsic from a third party, but something that you are building intrinsically yourself.
Warzel: I want to go through, actually, some of these starting principles. You all have five of them that are these guiding lights. And I’d love to just sort of rapid-fire go through them, have whoever wants to explain just a little bit about how you’re thinking of them. Or how they, you know, might work to give a framework or a set of ethics or values to whatever is going to come out of this manifesto.
Right. And how they could be incorporated. And so the first one here is “private.” Which it says: In the era of AI, whoever controls the context holds the power. Data often involves multiple stakeholders, and people should remain the stewards of their own context, determining how it’s used.
We’ve talked a little around that. What “private” makes me think of, in a world of AI, is like: Our consumer-AI tools look the way that they do now because they’re built by the people who have spent—not totally, but when you think about like X, Google, Meta—the people who have spent the last, you know, 10, 15, 20 years collecting information on people.
So you are going to build a product that makes having that information more valuable to the end user. That’s part of the architecture there. But talk to me about how you see that first principle. Yeah. Zoe, do you want to take that one?
Weinberg: We debated this word a lot, and even the concept of privacy.
Komoroske: Yeah. We debated all these words.
Weinberg: Yeah, that’s true. But, you know, I think this one in particular is tricky, because we really went back and forth on—is it privacy that we feel like is the key value here? Or is it really about control, and putting the user in the driver’s seat?
And so it’s about, you know, consent. Rather than it is about just, like—and I think I speak for all of us. Like, I don’t think any of us are privacy maximalists. There are lots of, you know, amazing, wonderful prosocial reasons that you don’t always want to keep information private. And actually sharing information can be very helpful. And all those things.
And so, I guess, there’s a different way that we could have framed this that was a little bit more about control, or about agency, or whatever. But I think there is something meaningful about privacy as a value, and the notion that the point of having privacy in the digital world is to be able to have a rich interior life. And that is, in many ways, very central to the experience of being human. And that’s why privacy is an individual value. It’s also a societal value. And I think that that was sort of important to capture in the mix here.
Komoroske: What we tried to do with all these words is have the word itself communicate on its own. And, if anything, go a little bit too hard in the direction it’s going. Because we actually soften the statement a fair bit about data stewardship. Because, you know, various thoughtful people pointed out that, well, actually data is owned, co-owned by the different parties. And in some cases you do want to give it up for an advantage, and whatever.
Mm-hmm. But we wanted the word to be private. Like, we wanted it to be obvious when you have these five words. Like you could apply it to a product and say, “Does this fit, or does this not?” And not have little, like, soft, nuanced words for some of this. So we try to add the nuance in the sentence after the key word.
Warzel: Well, to that point, Alex: “dedicated.” You guys define it as: “Software should work exclusively for you, ensuring contextual integrity where data use aligns with expectations. You must be able to trust that there are no hidden agendas, conflicting interests.” Why’d you use the word dedicated? Like what do you mean exactly?
Komoroske: I wanted something that was, again, about: It’s an extension of your agency. It is not a conflict of interest, because it is in your interest. And “contextual integrity” actually is a meaningful phrase, because this is Helen Nissenbaum’s concept of contextual integrity. Which is, to my mind, the gold standard of what people mean when they think of privacy.
And it means: Your data is being used in line with your interests and expectations. So it’s aligned. It’s not being used against you, and it’s being used in ways that you understand or could be—or would not be surprised by if you were to understand it. And so we wanted to get the words contextual integrity in there to get across this alignment with your interests and expectations.
Masnick: I think that’s a really important concept. You know, one of the discussions that comes up when talking about privacy is this idea that privacy is like a thing. And to me it’s always been a set of trade-offs. And the thing that really seems to upset people is when their data is being used in ways that they don’t understand, for purposes that they don’t understand.
And that is the world that we often live in, in the digital context. It’s like we know we’re giving up some data for some benefit, and neither side of that is fully understood by the users. We don’t know quite how much data we’re giving up. And we’re not quite sure for what purpose. And we’re getting some benefit, but we can’t judge whether or not that trade-off is worth it.
Warzel: I think about this all the time in terms of the “terms of service” agreement. I try to tell people, with that: Imagine that on the other side of the button that you were about to click is the most expensive-looking boardroom that you’ve ever seen in your life. With a whole bunch of people who make more in a week than you do in a year.
All in fancy suits. You know, like perfectly coiffed. And they’re just standing there, being like you versus them, you know? That’s what that is. It’s not a fair fight. You are agreeing to things. Yeah. Anyway, I want to keep running through this, though, because I want to get to ask a couple more questions here.
But the third of the five principles is “plural.” Which is: No single entity should control it—distributed power, interoperability. That seems relatively obvious. But, I mean, is this the idea of the decentralized, Bluesky sort of, you know, protocol-type thing? Being able to port your information to that, just being like a central tenet?
Masnick: Obviously that’s a big tenet for me.
Warzel: Yeah, I was going to say—you’ve spent a lot of time on this. And you were involved with Bluesky, correct?
Masnick: Yes, yes. I’m on the board of Bluesky. But I also wrote the “Protocols, Not Platforms” paper that sort of was part of the inspiration for Bluesky. So, that kind of thinking—I’ve spent a lot of time thinking about that. But I do think it’s important, not just in the social context—it’s important across the board. And this idea of, you know, why I’ve always thought that Bluesky or just a protocol-decentralized system is so important is this idea that we want to avoid giant centralized systems that will continually manipulate things. And so making sure that we don’t go down that path with, you know, the AI systems, I think, is really important. And just putting out there the idea that now, at this stage of the development of AI, we should be thinking about that. Rather than what we’re doing with social. Which is having to go back, you know, a decade: Oh crap. Like, Oh wait, we shouldn’t have done that.
And it’s funny to talk to the early Twitter people, who were like, Yeah, you know, we kind of thought that’s what we were doing. And we just lost track of it.
Warzel: Well, and it’s also like the biggest form of competition, actually. Like, if you have a place where you can just say—I mean, I feel like I’m seeing this. I’ve seen this so much with the newsletter game.
Yes. Like, you have a lot of people who came to a company like Substack, just because, okay, yeah, this works really well. Great recommendation system. I can grow this audience; I can do this, I can link it to my, you know, paid. Boom. Like it just works. And then some of those people have problems with the leadership, the direction of the company, whatever.
And because of the way that, you know, newsletter lists work, and things like that. And the portability via, you know, different payment companies. You can just, you know, pop it over, and it’s relatively seamless. And then, of course, you have companies trying, in various ways, to, you know, lock people in.
But this idea of interoperability: is that like competition? It allows Ghost or Beehiiv to, yeah.
Komoroske: Plurality is one of the things that leads to it. Also, it’s important to make sure you don’t have that undue influence of one particular voice. It’s important also to have competition and adaptability. A healthy system has multiple options in opposition, trying and competing to be the best version of it. And if we all used a single model, for example, and we didn’t realize what its bias was, or what else it could do, that would be bad. And that’s one of the reasons that having most people using just a single chatbot, ChatGPT, which obviously only works with OpenAI models, is not nearly as good of a future as one where people can use different models in different contexts and try them out and switch between them.
Warzel: The fourth principle here is “adaptable.” Anyone can take it. It does seem relatively understandable.
Komoroske: The way I think about that one is: It lifts you up. It doesn’t box you in. ’Cause a lot of products are like, a product manager said, These are the five actions you’re allowed to do in this context. I want a system that’s open-ended, that I can use to build whatever I want to do. As opposed to something that kind of limits me into a particular subset of things that I can do.
Warzel: And the last one is like—this is my music, man. “Prosocial.” Technology should enable connection, coordination; help us become better neighbors, collaborators, stewards of shared spaces, online and off.
This dovetails—we can talk about that all day. I’d love to hear what you guys think about it. I went through some of the comments of people who are, you know, who are seeing this. Who want to be either signatories or contributors or just help out with the process. A lot of really interesting comments.
A lot of people writing their own thoughts. One of them kind of hit me a little—I guess resonated with me a little bit. And it was about the culture. I’ll quote them: “The cultural backlash against attention extraction is coming. Technologies that respect and protect human attention will, in time, win the marketplace.”
To this idea of, like, the prosocial: I think it’s pretty obvious that these tools are having antisocial effects. Not always; not in every context. But there are, you know, ways they’re trapping us, keeping us from living the lives we want to live. In some contexts, making us feel just bad, or adding to problems with mental health that people may be having.
I’m curious about this idea of the cultural backlash, though. Zoe, I’d love for you to begin on this one. But do you all feel like this is happening? To me, it feels very much like people are waking up to the idea that, like, This stuff makes me feel bad, and I don’t know how much longer I really want to feel bad in this context.
Weinberg: You know, it’s funny; I exist in this world of tech and start-ups and VC, where everybody is really excited about AI and thinks it’s really positive. But if you take even a half a step outside that bubble, I think it is very clear, at least to me, that the AI backlash is coming, or it is already at our doorstep. Or it’s already here. And that there is a lot of hate and vitriol, and I get it. Because, I think, Charlie, you nailed it. Like fundamentally, I think what people are reacting to is that AI, in many ways, has been profoundly antisocial.
Even in the ways that social media itself was bad, it’s almost like it’s gotten worse. Like, I’ll give you an example. Like, we used to worry about people falling down these, you know, these sort of disinformation rabbit holes, because they’re in these echo chambers on social media. Now, you can fall down a disinformation rabbit hole alone with a chatbot. You know, it’s like an echo chamber of one.
Warzel: Made it real simple.
Weinberg: It’s the way that I think about it. And that’s even more antisocial than the previous version, which, you know, was itself very problematic and very, very harmful. And so I think that’s part of what people are reacting against.
And, look, I live in New York City. There was a subway campaign for a product called friend.com that elicited a ton of backlash from the city. And, you know, I’ve been observing things like that—and a few other instances along the way—that have definitely convinced me that I think for most people, whether or not they’ve used AI tools or they feel like AI is coming for their job or not, there’s just this sort of instinct of like, No, I don’t want this in my life.
Komoroske: Especially as an extension of the tech of the last decade. I mean, this industry is the one that gave us this crap and this hypercentralization. People who make these bombastic statements that are unnuanced and just don’t really seem to grapple with the amount of power and responsibility that they have.
Like, that’s not the place that you want AI to be. I also think, by the way, there’s a difference between AI tools. AI should not be your friend. If you think that AI is your friend, you are on the wrong track. AI should be a tool. It should be an extension of your agency. The fact that the first manifestation of large language models in a product happens to be a chatbot that pretends to be a human … it’s like the aliens in Contact who, you know, present themselves as her grandparents or whatever, so that she can make sense of it.
It’s like—it’s just a weird thing. Perfect crime. I think we’re going to look back on it and think of chatbots as an embarrassing party trick. You know, in five years, and be like, Oh, that was the wrong manifestation of large language models. Large language models should be in this inherent tool thing where you don’t get confused about whether this thing is your friend and you don’t get, you know, caught up in delusions of grandeur and everything.
Warzel: Well, I think too with the backlash, like—you mentioned this with this idea. It’s like, Oh, these companies are going to build, you know, the next generation of it. But I think, too, that it’s bigger than AI. Like I think you see this a lot.
This is, I think, the third time I’ve said this on this podcast now. But you can feel with younger generations that they understand very acutely how they’re being manipulated. Like they’re born into this ecosystem that a lot of people have had to take time to learn and understand.
There’s this real idea of it. And there’s sort of—though they are a part of it in a big way, also really, they don’t suffer fools in that sense. It’s like, I don’t necessarily want that. I’m feeling bad about it. It does feel like when I want to get hopeful about this stuff, I talk myself into this idea that we are sort of on the cusp of a little bit of a change.
I’ve experienced in the last year more phone-free spaces in general. Yeah, yeah. Right. Like this idea of this thing is not helping me in context, you know, outside of where I want to use it as a tool. I’m going to put it away right now. Or I need someone to create a permission structure for me to put it away.
Komoroske: Going to saunas, I’ve heard, is a big thing, because you can’t have phones in them. Like, as a social space to be in person.
Weinberg: I predict that in the next year, we’re going to start to see people creating human-only spaces and saying, like, Okay, just so you know: This gathering, whether it’s online or in person, this is a human-only space.
Like, no wearables. Like, don’t bring your AI assistant or your, you know, Copilot. I’ll go on record. That’s a prediction for 2026.
Masnick: I was gonna say, I think one of the interesting things is that society adapts to these things. And there is this belief that like, oh, you know, once we start spiraling down, we continue to go down. But, like, people and society as a whole start to figure this stuff out. And it may take a while, and there may be a lot of damage done in the interim.
Today, they’re not going on Facebook, I mean—they’ve picked other places. You know, over time, as new generations come in, they sort of look at the old stuff, and they realize, they see the problems of it. Because, you know, they’re all much more obvious. And then they look for some other space.
And so, you know, in the social world—that had been TikTok, for example, which has its own problems. But there’s going to be another generation, and there’ll be another generation, of AI tools. And there’ll be another generation of social as well. And if we’re in a position where we’re creating spaces that are welcoming and human, people will move to them eventually, as they realize how problematic the other ones are.
And that’s, like, a lot of the response that I’ve heard—at least to the manifesto as it came out—was just this, like, an exhale. Like, yes. Like I’ve been thinking that we need this, you know, vision. And I’ve been thinking about it. I didn’t realize other people were thinking it. And I think that’s part of society, you know—moving forward with these things and thinking through. Like, what is next?
What do I want? If I'm going to make a jump to new tools and new systems, you know, I want to be a little more deliberate about it. And if the people building it are also more deliberate about it, maybe we can actually have a next generation that meets these principles that we're talking about.
Warzel: To that end, there are some interesting critiques here that are made, I think, in good faith. One of them I wanted to just highlight and get your reaction to, which is: Somebody on Bluesky said, quote, “Like other cyber libertarian frameworks, they stop short of the root cause, which is politics. Liberation depends on shifting political power, because power determines which values take hold.” That’s obviously true. I think other criticisms that I have seen in general—which seem to be, you know, part of the cynicism of living in 2025, or 2026, as people listen to this—boil down to this idea that it’s like, Yeah, that sounds great, you know, in theory.
But again, you butt up against the politics of it all, the capitalism of it all, the scale of it all. Like, all of those things. Very real. Yep. By the time people hear this, Warner Brothers may be bought by, like, 450 companies.
We don’t know what that future looks like, but all of them portend some kind of strange dystopian consolidation. No, but yeah. But in general, like, how are you guys thinking about that? This is a guiding statement, to some degree. This is not meant to, you know, solve every problem that exists.
But how are you thinking about, you know, coming up against the politics of it?
Komoroske: I think the one broader point—by the way, I published another essay about optimization and how modern society just kind of optimizes everything. It’s true in the technology industry, but it’s also true in business and in politics too. I think it’s the defining characteristic of modern society—that we forgot that optimization actually does come at a cost.
It’s just an indirect and harder-to-see cost. And I think that that is true across many different dimensions. It’s part of what I think everyone is feeling in this moment. And I think I would also point out that we are part of the industry. And we are also realists. We understand the incentive structures, the things that get us stuck in these kinds of behaviors.
A couple things. One, I think a lot of this goes to a point that we made earlier in the conversation. Some of it is totally structural, and it’s, you know, the person at the top making these kinds of decisions to optimize for Wall Street or something. Other parts of it are just emergent: They’re just local product managers on a given team making a decision, Okay, we know that number is supposed to go up.
And they’re not thinking about what the downside of the number going up is. And actually, if they think about it in terms of resonance, they might make a better product that actually creates more value for the shareholders, too. It doesn’t have to be in tension. So little things, like if everybody can say, Hey, is this resonant?
Just having people be able to have that terminology and ask that question. If lots of different people are asking that throughout the industry, that could have an impact. And second, myself and a number of others who worked on this manifesto are working on things that are structural changes to the kinds of distribution structures and power structures that create technology.
I’m working on an alternate security model that’s open and decentralized and allows getting rid of some of these silos that lead to aggregation, while still being fully aligned with people’s private interests. And so, we are not just saying: “Oh, what if everyone just said, Hey, let’s be nice today?”
You know, there’s some of that that actually could be somewhat effective. And also, we are realists about the emergent factors that cause some of these things. And working to modify or tweak or do what we can to help, you know, the right kinds of things emerge.
Masnick: There are many reasons to be cynical right now.
I completely understand where all that is coming from. And I think some of the job that we’re hoping to do—or at least I’m hoping; I shouldn’t speak for anyone else on this—is, like, the more that we can paint this picture and show people. And yes, like, maybe some of us are a few steps into the future on this stuff. But if we can start to bring that back, begin to show people there are real things behind this, then we can all start to make decisions in this direction. And hopefully we can start to thaw out some of that cynicism and show that there’s something real here.
And each one of those steps is important. We’re not going to, you know, flip the entire structure of the world right now. But we can take these little steps and really make a difference over time.
Weinberg: The only thing I would add is that I think that there’s already been a lot of ink spilled doing the diagnosis. And I think capitalism is part of it. I think our political system is part of it. I think optimization culture is part of it. I think it’s a confluence of different factors. But I think part of what we were trying to do, at least in this piece, is move beyond just the diagnosis of the problem and try to craft a positive vision for where we should go.
But absolutely: A totally valid critique might be that you need to spend more time unpacking some of those underlying drivers. And we are all, I think, very aware of the ways in which those shape, you know, the current reality.
Warzel: So I want to land this plane with: People are going to be listening to this at the beginning of the year.
I think this is a hopeful vision of a future. Or at least it’s telling people: What if you planted the seed of a hopeful vision in your brain while you’re constructing these things? What is giving you all hope about what’s coming next this year in this space?
You’ve gone through this. You are clearly hopeful people, in some sense, to have put this together—no matter how, you know, beaten down and cynical anyone who exists online is these days. But yeah. What is keeping you guys going forward on this?
Weinberg: I think what gives me hope on this vision is that I am seeing this whole new generation of founders and technologists, many of whom are, like, contemporaries of mine, who grew up kind of under big tech and are just questioning all of the assumptions that underlie the way that we built things. And they are trying to think about building things in new ways, and I think they are very subscribed to the types of values and vision that we lay out in the manifesto.
And so I think that’s what gives me hope. I feel like the tide is really turning. And the fact that there’s been a ton of interest and momentum in the manifesto itself, I think, suggests to me, like, you know, that there’s a critical mass here who feels this way. And that’s kind of all you need to, like, nudge it in the right direction, I think.
Masnick: Yeah, I was going to say, like, I guess I’m the old man of the crew. I think that I’ve been alive slightly longer than the others, and I remember the early days when people were thrilled with new technology, and it was exciting—before it all seemed to turn. And to me, there is this element of going back to that. You know, there are mistakes that were made, but being able to go back to that time while recognizing the mistakes and doing a better job this time, I think, is actually really important.
And some of the criticism I’ve seen—because I talked about this concept of going back—was like, No; it was always terrible. And it’s like, No, I lived that time. And I remember when using new technology on the internet was enjoyable and exciting. And we can bring that back.
There’s nothing that says we have to keep the awful parts of the internet working the way that they currently work—really against our own interests. And so I’m very optimistic; when you put these things out in the world, you know, people are gravitating to it. And that’s the first step toward pretty massive change over time.
Komoroske: I think for me—I think people have felt so cynical, like they can’t do anything, and like maybe they’re the only one who wants to push back against some of these optimization pressures. And seeing the response that people have had to this has been really inspiring to me. Because to some degree, I was thinking that we were saying this thing that no one’s going to care about.
Everyone’s going to think it’s kind of dumb. And instead people are like: Yeah, how can I participate? I’m like, Oh my gosh; wow. Okay. I mean, I’m into it too, but it feels very encouraging to me to see people feel that agency and want to sort of change the world in this way. And again, I work with a bunch of folks who are at the cutting edge of using large language models in interesting ways to create infinite bits of situated software—you know, personalized software.
And, like, it’s exciting what you can do with some of these things. And again, I think chatbots—if you’re looking at chatbots, it’s like this is going to be social media, but worse, and just kind of the same old story of centralization. My hope is that we will be beyond that relatively soon, as people start waking up to all the other things that you can do that are now possible—and democratized and available to just about anyone, to aid and empower them.
It’s really cool. And so I’m just extremely excited about what we as a society are going to do with some of these technologies.
Warzel: All right, with that, let’s go forth into 2026 and make it suck less than it did before. No, I appreciate everyone’s time. Zoe, Alex, Mike: Thank you for coming on Galaxy Brain and offering an unusual dose of positivity and hope.
Masnick: Excellent. Well, thanks for having us. Thanks.
Warzel: Thank you again to Zoe Weinberg, Mike Masnick, and Alex Komoroske. I wanted to have this conversation because back in November, at this panel discussion that I participated in, in Bozeman, Montana, we had this long conversation about the generative-AI moment. And so much of it was focused on the economic issues, the fears of artificial general intelligence, the ways in which this is all being abused. The conversation—as it tends to with new technologies that are consequential—got very negative, and very reactive, and very focused on all the scary externalities of a new technology.
And at the very end of the conversation, one of the panelists, Sarah Myers West, who does a lot of work in AI policy, ended with something that was very—to borrow the term—resonant to me. And that was that she was really tired of talking about all the bad stuff and all the stuff that AI shouldn’t be—you know, the future that is being brought to the world that we need to fear—and wanted to think about ways to put forward a positive vision. To stop being on the defensive all the time and to think about: What is the future you want to build? If this technology is here, if it’s not going away, how do we harness it to do something that will be productive and helpful to human flourishing? And that just stuck with me, especially as someone who’s always focused on these negatives. And so a couple days later, when I saw this manifesto, I just thought to myself, Some of this stuff is probably idealistic. Some of this stuff is gonna be really hard to enact.
From a political standpoint, from a fundraising standpoint, it’s gonna be a challenge. It’s always a challenge to build something that resists scale in general. But that doesn’t mean that we shouldn’t try. We shouldn’t be so rational about all of this that we talk ourselves out of building something that matters, that helps, that actually aligns with the goals of being a good human living a good life. And so I found the conversation—in that sense more than anything—to just be motivating, to be something to think about as we continue to do episodes here, as I continue to do my reporting, as you all continue to live your lives out there among this technology: what it is you want, what it is we should be building, to come up with positive visions of how this stuff should work, instead of constantly just defending against it.
So I hope this conversation gave you some of those ideas, some of those tools. It certainly did for me. And it’s something we’re gonna be continuing to explore throughout the year. So thank you once again. If you liked what you saw here, new episodes of Galaxy Brain are dropping every Friday.
And you can subscribe to The Atlantic’s YouTube channel, or you can go on Apple or Spotify or wherever you get your podcasts. Please leave a five-star review if you would. And just remember, if you also enjoyed this, you can support this work and the work of all of my colleagues at The Atlantic by subscribing to the publication at TheAtlantic.com/Listener. That’s TheAtlantic.com/Listener. Thank you so much for listening, and I’ll see you on the internet.