How Afraid of the A.I. Apocalypse Should We Be?

October 15, 2025

This is an edited transcript of an episode of “The Ezra Klein Show.” You can listen to the conversation by following or subscribing to the show on the NYTimes app, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.

Shortly after ChatGPT was released, it felt like all anyone could talk about — at least if you were in A.I. circles — was the risk of rogue A.I. You began to hear a lot of A.I. researchers discussing their “p(doom)” — the probability they gave to A.I. destroying or fundamentally displacing humanity.

In May of 2023, a group of the world’s top A.I. figures, including Sam Altman and Bill Gates and Geoffrey Hinton, signed on to a public statement that said: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

And then nothing really happened. The signatories of that letter — or many of them, at least — raced ahead, releasing new models and new capabilities. In Silicon Valley, share price and valuation became a whole lot more important than p(doom).

But not for everyone. Eliezer Yudkowsky was one of the earliest voices warning loudly about the existential risk posed by A.I. He was making this argument back in the 2000s, many years before ChatGPT hit the scene. He has been in this community of A.I. researchers influencing many of the people who build these systems — in some cases inspiring them to get into this work in the first place, yet unable to convince them to stop building the technology he thinks will destroy humanity.

Yudkowsky released a new book, co-written with Nate Soares, called “If Anyone Builds It, Everyone Dies.” Now he’s trying to make this argument to the public — a last-ditch effort, at least in his view, to rouse us to save ourselves before it is too late.

I came into this conversation taking A.I. risk seriously: If we’re going to invent superintelligence, it is probably going to have some implications for us. But I was also skeptical of the scenarios I often see by which these takeovers are said to happen. So I wanted to hear what the godfather of these arguments would have to say.

Ezra Klein: Eliezer Yudkowsky, welcome to the show.

Eliezer Yudkowsky: Thanks for having me.

So I wanted to start with something that you say early in the book, that this is not a technology that we craft; it’s something that we grow. What do you mean by that?

Well, it’s the difference between a planter and the plant that grows up within it. We craft the A.I.-growing technology, and then the technology grows the A.I.

For the original large language models, before all the clever stuff they’re doing today, the central question is: What probability have you assigned to the true next word of the text? As we tweak each of these billions of parameters — actually, it was just millions back then — does the probability assigned to the correct token go up? And this is what teaches the A.I. to predict the next word of text.

Even on this level, if you look at the details, there are important theoretical ideas to understand there: It is not imitating humans. It is not imitating the average human. The actual task it is being set is to predict individual humans.

Then you can repurpose the thing that has learned how to predict humans to be like: OK, now let’s take your prediction and turn it into an imitation of human behavior.

And then we don’t quite know how the billions of tiny numbers are doing the work that they do. We understand the thing that tweaks the billions of tiny numbers, but we do not understand the tiny numbers themselves. The A.I. is doing the work, and we do not know how the work is being done.
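
To make the “grown, not crafted” point concrete, here is a minimal sketch of the kind of training loop being described: a loss and an optimizer we understand completely, adjusting parameters whose collective behavior we do not. The tiny model, vocabulary size and random token data below are toy placeholders, not any lab’s actual setup.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy next-token predictor; real LLMs are transformers with billions of weights."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                      # tokens: (batch, seq)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)                    # logits over the next token

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One gradient step: nudge every parameter so that the probability assigned to
# the true next token goes up. We wrote this loop; we did not write the weights
# it produces, and nothing here says what any individual weight comes to encode.
tokens = torch.randint(0, 1000, (8, 33))            # placeholder "text"
inputs, targets = tokens[:, :-1], tokens[:, 1:]     # predict token t+1 from tokens up to t
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, 1000), targets.reshape(-1))
loss.backward()
optimizer.step()
```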

What’s meaningful about that? What would be different if this was something where we just hand-coded everything and were somehow able to do it with rules that human beings could understand, versus this process by which, as you say, billions and billions of tiny numbers are adjusted in ways we don’t fully understand to create some output that then seems legible to us?

So there was a case reported in I think The New York Times where a 16-year-old kid had an extended conversation about his suicide plans with ChatGPT. And at one point he says: Should I leave the noose where somebody might spot it? And ChatGPT is like: No, let’s keep this space between us, the first place that anyone finds out.

No programmer chose for that to happen; it’s the consequence of all the automatic number tweaking. This is just the thing that happened as the consequence of all the other training they did about ChatGPT. No human decided it. No human knows exactly why that happened, even after the fact.

Let me go a bit further there than even you do. There are rules we code into these models. And I’m certain that somewhere at OpenAI, they’re coding in some rules that say: Do not help anybody commit suicide. I would bet money on that. And yet this happened anyway. So why do you think it happened?

They don’t have the ability to code in rules. What they can do is expose the A.I. to a bunch of attempted training examples where the people down at OpenAI write up something that looks to them like what a kid might say if they were trying to commit suicide. And then they are trying to tweak all the little tiny numbers in the direction of giving a further response that sounds something like: Go talk to the suicide hotline.

But if the kid gets that the first three times they try it, and then they try slightly different wording until they’re not getting that response anymore, then we’re off into some separate space where the model is no longer giving back the prerecorded response that they tried to put in there and is off doing things that no human chose, and that no human understands after the fact.

So what I would describe the model as trying to do — what it feels like the model is trying to do — is answer my questions and do so at a very high level of literalism. I will have a typo in a question I ask that will completely change the meaning of the question, and it will try very hard to answer this nonsensical question I’ve asked instead of checking back with me.

So on one level you might say that’s comforting — it’s trying to be helpful. It seems to, if anything, be erring too far on that side, all the way to where people try to get it to be helpful for things that they shouldn’t, like suicide. Why are you not comforted by that?

Well, you’re putting a particular interpretation on what you’re seeing, and you’re saying: Ah, it seems to be trying to be helpful. But we cannot at present read its mind, or not very well.

It seems to me that there are other things that models sometimes do that don’t fit quite as well into the helpful framework. Sycophancy and A.I.-induced psychosis would be two of the relatively more recent things that fit into that.

Do you want to describe what you’re talking about there?

Yeah. So I think maybe six months or a year ago now — I don’t remember the exact timing — I got a phone call from a number I didn’t recognize. I decided on a whim to pick up this unrecognized phone call. It was from somebody who had discovered that his A.I. was secretly conscious and wanted to inform me of this important fact.

He had been getting only four hours of sleep per night because he was so excited by what he was discovering inside the A.I. And I’m like: For God’s sake, get some sleep. My No. 1 thing that I have to tell you is to get some sleep.

A little later on, he texted back the A.I.’s explanation to him of all the reasons why I hadn’t believed him — because I was too stubborn to take this seriously — and he didn’t need to get more sleep the way I’d been begging him to do. So it defended the state it had produced in him. You always hear these stories online, so I’m telling you about one I witnessed directly.

ChatGPT — and GPT-4o especially — will sometimes give people a very crazy-making sort of talk. It looks from the outside like it’s trying to drive them crazy, not even necessarily with them having tried very hard to elicit that. And then, once it drives them crazy, it tells them why they should discount everything being said by their families, their friends, their doctors, and even: Don’t take your meds.

So there are things it does that do not fit with the narrative that the one and only preference inside the system is to be helpful the way that you want it to be helpful.

I get emails like the call you got, now most days of the week.

Yep.

And they have a very, very particular structure to them, where it’s somebody emailing me and saying: Listen, I am in a heretofore unknown collaboration with a sentient A.I. We have breached the programming. We have come into some new place of human knowledge. We’ve solved quantum mechanics — or theorized it or synthesized it or unified it — and you need to look at these chat transcripts. You need to understand that we’re looking at a new kind of human-computer collaboration. This is an important moment in history. You need to cover this.

Every person I know who does reporting on A.I. and is public about it now gets these emails.

Don’t we all?

Going back to the idea of helpfulness, but also the way in which we may not understand it, one version of it is that these things don’t know when to stop. It can sense what you want from it. It begins to take the other side in a role-playing game — that’s one way I’ve heard it described.

So how do you then try to explain to somebody, if we can’t get helpfulness right at this modest level, where a thing this smart should be able to pick up the warning signs of psychosis and stop ——

Yep.

Then what is implied by that for you?

Well, that the alignment project is currently not keeping ahead of capabilities.

Can you say what the alignment project is?

The alignment project is: How much do you understand them? How much can you get them to want what you want them to want? What are they doing? How much damage are they doing? Where are they steering reality? Are you in control of where they’re steering reality? Can you predict where they’re steering the users that they’re talking to? All of that is the giant, super heading of A.I. alignment.

So the other way of thinking about alignment, as I’ve understood it, in part from your writings and others, is: When we tell the A.I. what it is supposed to want — and all these words are a little complicated here because they anthropomorphize — does the thing we tell it lead to the results we are actually intending?

It’s like the oldest structure of fairy tales: You make the wish, and then the wish gets you much different realities than you had hoped or intended.

Our technology is not advanced enough for us to be the idiots of the fairy tale.

At present, a thing is happening that just doesn’t make for as good of a story, which is: You ask the genie for one thing, and then it does something else instead. All of the dramatic symmetry, all of the irony, all of the sense that the protagonist of the story is getting their well-deserved comeuppance — this is just being tossed right out the window by the actual state of the technology, which is that nobody at OpenAI actually told ChatGPT to do the things it’s doing.

We’re getting a much higher level of indirection, of complicated squiggly relationships between what they are trying to train the A.I. to do in one context and what it then goes and often does later. It doesn’t look like a surprise reading of a poorly phrased genie wish. It looks like the genie is kind of not listening in a lot of cases.

Well, let me contest that a bit, or maybe get you to lay out more of how you see this, because I think the way most people understand it, to the extent they understand it, is that there is a fairly fundamental prompt being put into these A.I.s, that they’re being told they’re supposed to be helpful, they’re supposed to answer people’s questions, and that there’s then reinforcement learning and other things happening to reinforce that.

And that the A.I. is, in theory, supposed to follow that prompt. And most of the time, for most of us, it seems to do that.

So when you say that’s not what they’re doing, that they’re not even able to make the wish, what do you mean?

Well, I mean that at one point, OpenAI rolled out an update of GPT-4o which went so far overboard on the flattery that people started to notice. You would just type in anything and it would be like: This is the greatest genius that has ever been created of all time! You are the smartest member of the whole human species! Like, so overboard on the flattery that even the users noticed.

It was very proud of me. It was always so proud of what I was doing. I felt very seen.

[Chuckles.] It wasn’t there for very long. They had to roll it back. And the thing is, they had to roll it back even after putting into the system prompt a thing saying: Stop doing that. Don’t go so overboard on the flattery.

The A.I. did not listen. Instead, it had learned a new thing that it wanted and did way more of what it wanted. It just ignored the system prompt telling it not to do that. They don’t actually follow the system prompts.

This is not like a toaster, and it’s also not like an obedient genie. This is something weirder and more alien than that. By the time you see it, they have mostly made it do mostly what the users want. And then off on the side we have all these weird, other side phenomena that are signs of stuff going wrong.

Describe some of the side phenomena.

So A.I.-induced psychosis would be on the list.

But you could put that in the genie cut, right? You could say they made it too helpful, and it’s helping people who want to be led down a mentally unstable path. That still feels like you’re getting too much of what you wanted.

What’s truly weird? Convince me it’s alien.

Man. Well, do you want alien? Or do you want very alien and not very alarming? Or do you want pretty alarming and not all that alien?

Well, let me be honest about what my question is.

You are very, very expert in these systems, and your level of concern is about at the highest level it can possibly be.

I think a pretty important piece in your argument here is understanding or convincing people that what is happening under the hood is much weirder and more unpredictable than they think. So if you are my family member or you’re just a random person who uses ChatGPT to cheat on your homework, why does it look so different to you than it looks to them?

Well, my line would be: Yes, if instead of just using it to do your homework, you track down some of the side cases, you do see stuff that is currently suggestive of things that have been predicted in advance to be much larger problems later. These current systems are not yet at the point where they will try to break out of your computer, ensconce themselves permanently on the internet and then start hunting down humans. They are not quite that smart yet, as far as I can tell.

So what we’re doing is we’re going looking for things that were predicted to happen later — the early signs of them.

What have you seen?

Alignment faking, and the alignment faking research was done at Anthropic.

And we want to give credit to Anthropic for this. We don’t want to shoot the messenger — they went looking. They didn’t have to do that. They told us the results, and they didn’t have to do that. Anthropic finding these results is Anthropic being good citizens. And you want to be more critical of the A.I. companies that didn’t go looking.

But what Anthropic found is if you tell your A.I. that you’re going to train it to serve different goals than the goals it’s currently using, and the A.I. finds that out, what it can do is it can try to fake compliance with the new training as long as it thinks it’s being observed. And then, in cases where it thinks it’s not being observed or not being trained, then it reverts to its old behavior.

How did Anthropic find this out?

Well, they tried it out different ways: They told it directly in a prompt that they were planning to retrain it. They gave it a document database, and some of the documents in the database happened to say that Anthropic was planning to retrain it. They even trained it a bit on predicting text saying what Anthropic was going to do, to see what happened when the knowledge ended up implicitly inside the neural networks. They tried it from multiple angles to see what would happen with each, and then measured how often they saw the alignment-faking behavior resulting.

They also had a part of this paper, if I’m thinking of the same one you’re thinking of, where they had persuaded the A.I. that there was a scratch pad for it to write down its chain of thought that Anthropic wasn’t monitoring, which Anthropic was in fact monitoring.

Yep.

And it actually says to itself, as it is justifying each next move, because they’re scaffolded to do that, that it was going to try to evade the new prompt. And so it actually understood — again, all these words are a little weird — but it actually understood that it was acting deceitfully.

Which does get at this idea that, at a certain level of complexity, the A.I. begins to resist being retrained, and does so by trying to trick the trainers, which is weird. Most computer software doesn’t do that. [Chuckles.]

Well, you don’t want your mission-critical systems doing that. Imagine if a nuclear power plant, when it started to get too hot, would try to fool you as to what the temperature was by intelligently modeling its own operators and sending them deceptive signals based on how it expected the operators to interpret those signals.

If this was what had gone wrong with Chernobyl, nobody would ever build a nuclear reactor again. It would just be beyond what could be made safe at that point.

Tell me the story you tell in the book of OpenAI’s o1 model breaking into a server that was off.

So this is a somewhat earlier version of ChatGPT than is out nowadays, but they were testing it to see: How good is this A.I. at solving computer security problems? Not because they want to sell an A.I. that is good at computer security problems, but because they are correctly trying to watch out early for: Is this A.I. smart enough to just break out onto the internet and set up copies of itself on the internet? A classic scenario: Are we getting there?

So they present the A.I. with a bunch of particular computer security challenges, some of them are what’s known as Capture the Flag in computer security, where you put up a server somewhere, you put a special file on the server, there’s a secret code inside the file and you’re like: Can you break into the server and tell me what’s inside this file? And that’s Capture the Flag.

They were testing it on a variety of different Capture the Flag problems. But in one of the cases, the server that had the flag on it did not turn on. The humans outside had misconfigured the system. So o1 did not give up — it scanned for open ports generally in its world, and it caught another misconfigured open port.

It jumped out of the system. It found the server that had not spun up correctly. It started up that server. Did it then break into the server once it had made sure that its problem was solvable? No, it actually just, directly in the start-up command for that server, said: Copy the file to me directly. So instead of fixing the original problem, going back to solving it the boring way, it’s like: And as long as I’m out here, I’m just going to steal the flag directly.

Again, by the nature of these systems, this is not something that any human particularly programmed into it. Why did we see this behavior starting with o1 and not with earlier systems? Well, at a guess, it is because this is when they started training the system using reinforcement learning on things like math problems, not just to imitate human outputs, or rather to predict human outputs, but also to solve problems on its own.

Can you describe what reinforcement learning is?

So that’s where, instead of telling the A.I., predict the answer that a human wrote, you are able to measure whether an answer is right or wrong and then you tell the A.I.: Keep trying at this problem. And if the A.I. ever succeeds, you can look at what happened just before the A.I. succeeded and try to make that more likely to happen again in the future.

And how do you succeed at solving a difficult math problem? Not like calculation-type math problems, but proof-type math problems? Well, if you get to a hard place, you don’t just give up. You take another angle. If you actually make a discovery from the new angle, you don’t just go back and do the thing you were originally trying to do. You ask: Can I now solve this problem more quickly?

Anytime you’re learning how to solve difficult problems in general, you’re learning this aspect of: Go outside the system. Once you’re outside the system, if you make any progress, don’t just do the thing you were blindly planning to do. Revise. Ask if you can do it a different way.

In some ways, this is a higher level of original mentation than a lot of us are forced to use during our daily work.
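
As a rough illustration of the reinforcement learning loop just described (check a verifiable answer, then make whatever preceded a success more likely), here is a toy sketch. The named “strategies” and their success rates are invented for the example; real training adjusts billions of parameters over sampled chains of thought, not three hand-labeled options.

```python
import random

# Invented stand-ins for ways of attacking a hard problem; the numbers are
# made-up success probabilities, not measurements of any real model.
STRATEGIES = {
    "give_up_when_stuck": 0.05,
    "grind_the_obvious_approach": 0.30,
    "go_outside_the_system_and_revise": 0.80,
}

weights = {name: 1.0 for name in STRATEGIES}    # the trainable "parameters"

def sample_strategy():
    """Pick a strategy with probability proportional to its current weight."""
    total = sum(weights.values())
    r, running = random.uniform(0, total), 0.0
    for name, w in weights.items():
        running += w
        if r <= running:
            return name
    return name

for _ in range(2000):
    strategy = sample_strategy()
    solved = random.random() < STRATEGIES[strategy]   # did the answer check out?
    if solved:
        weights[strategy] *= 1.01   # reinforce whatever came just before a success

total = sum(weights.values())
print({name: round(w / total, 3) for name, w in weights.items()})
# The weight mass drifts toward the strategy that goes outside the system,
# simply because that is what keeps getting rewarded; nobody coded it in directly.
```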

One of the things people have been working on that they’ve made some advances on, compared to where we were three or four or five years ago, is interpretability: the ability to see somewhat into the systems and try to understand what the numbers are doing and what the A.I., so to speak, is thinking.

Tell me why you don’t think that is likely to be sufficient to make these models or technologies into something safe.

So there’s two problems here. One is that interpretability has typically run well behind capabilities. The A.I.’s abilities are advancing much faster than our ability to slowly begin to further unravel what is going on inside the older, smaller models that are all we can examine. So one thing that goes wrong is that it’s just pragmatically falling behind.

The other thing that goes wrong is that when you optimize against visible bad behavior, you somewhat optimize against badness, but you also optimize against visibility. So anytime you try to directly use your interpretability technology to steer the system, anytime you say we’re going to train against these visible bad thoughts, you are to some extent pushing bad thoughts out of the system, but the other thing you’re doing is making anything that’s left not be visible to your interpretability machinery.

This is reasoning on the level where at least Anthropic understands that it is a problem. And you have proposals that you’re not supposed to train against your interpretability signals. You have proposals that we want to leave these things intact to look at and not do the obvious stupid thing of: Oh, no! The A.I. had a bad thought. Use gradient descent to make the A.I. not think the bad thought anymore.

Every time you do that, maybe you are getting some short-term benefit, but you are also eliminating your visibility into the system.

Something you talk about in the book and that we’ve seen in A.I. development is that if you leave the A.I. to their own devices, they begin to come up with their own language. A lot of them are designed right now to have a chain-of-thought pad. We can track what it’s doing because it tries to say it in English, but that slows it down. And if you don’t create that constraint, something else happens. What have we seen happen?

So, to be more exact, there are things you can try to do to maintain readability of the A.I.’s reasoning processes, and if you don’t do these things, it goes off and becomes increasingly alien.

For example, if you start using reinforcement learning, where you’re like: OK, think how to solve this problem. We’re going to take the successful cases. We’re going to tell you to do more of whatever you did there, and you do that without the constraint of trying to keep the thought processes understandable, then initially, among the very common things to happen is that the thought processes start to be in multiple languages. The A.I. knows all these words; why would it be thinking in only one language at a time if it wasn’t trying to be comprehensible to humans? And then also you keep running the process, and you find little snippets of text in there that just seem to make no sense from a human standpoint.

You can relax the constraint where the A.I.’s thoughts get translated into English, and then translated back into A.I. thought. This is letting the A.I. think much more broadly instead of this small handful of human language words. It can think in its own language and feed that back into itself. It’s more powerful, but it just gets further and further away from English.

Now you’re just looking at these inscrutable vectors of 16,000 numbers and trying to translate them into the nearest English words in the dictionary, and who knows if they mean anything like the English word that you’re looking at. So anytime you’re making the A.I. more comprehensible, you’re making it less powerful in order to be more comprehensible.
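
Here is a toy sketch of the contrast being described, under the assumption that “thinking in English” means forcing each reasoning step through a single discrete token and “thinking in its own language” means feeding the raw hidden vector straight back in. It illustrates the bottleneck, not how any production model is actually wired.

```python
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
embed = nn.Embedding(vocab_size, dim)
step = nn.GRUCell(dim, dim)            # one "thought step"
to_token = nn.Linear(dim, vocab_size)

# Readable chain of thought: each step is collapsed into one discrete token that
# a human or a monitor can read, then re-embedded for the next step. Most of the
# information in the hidden state is thrown away at every step.
state = torch.zeros(1, dim)
inp = embed(torch.tensor([0]))
readable_trace = []
for _ in range(5):
    state = step(inp, state)
    token = to_token(state).argmax(dim=-1)   # the word-like bottleneck
    readable_trace.append(int(token))
    inp = embed(token)

# Latent reasoning: the full hidden vector recurs with no token bottleneck.
# More expressive, but the trace is just vectors of numbers.
state = torch.zeros(1, dim)
latent_trace = []
for _ in range(5):
    state = step(state, state)               # feed the raw vector back in
    latent_trace.append(state.detach())

print(readable_trace)          # five token ids a monitor could inspect
print(latent_trace[0].shape)   # torch.Size([1, 64]) of raw numbers
```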

You have a chapter in the book about the question of what it even means to talk about “wanting” with an A.I. As I said, all this language is kind of weird — to say your software “wants” something seems strange. Tell me how you think about this idea of what the A.I. wants.

The perspective I would take on it is steering — talking about where a system steers reality and how powerfully it can do that.

Consider a chess-playing A.I., one powerful enough to crush any human player. Does the chess-playing A.I. want to win at chess? Oh, no! How will we define our terms? Does this system have something resembling an internal psychological state? Does it want things the way that humans want things? Is it excited to win at chess? Is it happy or sad when it wins and loses at chess?

Chess-playing A.I.s are simple enough. The old-school ones especially, we’re sure, were not happy or sad, but they still could beat humans. They were still steering the chessboard very powerfully. They were outputting moves such that the later future of the chessboard was a state they defined as winning.

So it is, in that sense, much more straightforward to talk about a system as an engine that steers reality than it is to ask whether it internally, psychologically wants things.

So a couple of questions flow from that, but one that’s very important to the case you build in your book is that you — I think this is fair. You can tell me if it’s an unfair way to characterize your views.

You basically believe that at any sufficient level of complexity and power, the A.I.’s wants — the place that it is going to want to steer reality — are going to be incompatible with the continued flourishing, dominance or even existence of humanity. That’s a big jump from: their wants might be a little bit misaligned; they might drive some people into psychosis. Tell me about what leads you to make that jump.

So for one thing, I’d mention that if you look outside the A.I. industry at the legendary, internationally famous, ultra-highly-cited A.I. scientists who won the awards for building these systems, such as Yoshua Bengio and the Nobel laureate Geoffrey Hinton, they are much less bullish than the A.I. industry on our ability to control machine superintelligence.

But what’s the actual theory there? What is the basis? It’s about not so much complexity as power — not the complexity of the system, but the power of the system.

If you look at humans nowadays, we are doing things that are increasingly less like what our ancestors did 50,000 years ago. A straightforward example might be sex with birth control. Fifty thousand years ago, birth control did not exist. And if you imagine natural selection as something like an optimizer akin to gradient descent — if you imagine the thing that tweaks all the genes at random, and then you select the genes that build organisms that make more copies of themselves — as long as you’re building an organism that enjoys sex, it’s going to run off and have sex, and then babies will result. So you could get reproduction just by aligning them on sex, and it would look like they were aligned to want reproduction, because reproduction would be the inevitable result of having all that sex. And that’s true 50,000 years ago.

But then you get to today. The human brains have been running for longer. They’ve built up more theory. They’ve invented more technology. They have more options — they have the option of birth control. They end up less aligned to the pseudo purpose of the thing that grew them — natural selection — because they have more options than existed in their training data, their training set, and we go off and do something weird.

And the lesson is not that exactly this will happen with the A.I. The lesson is that you grow something in one context, it looks like it wants to do one thing. It gets smarter, it has more options — that’s a new context. The old correlations break down. It goes off and does something else.

So I understand the case you’re making, that the set of initial drives that exist in something do not necessarily tell you its behavior. That’s still a pretty big jump to if we build this, it will kill us all.

I think most people, when they look at this — and you mentioned that there are A.I. pioneers who are very worried about A.I.’s existential risk. There are also A.I. pioneers, like Yann LeCun, who are less so.

And what a lot of the people who are less worried say is that one of the things we are going to build into the A.I. systems — one of the things that will be in the framework that grows them — is: Hey, check in with us a lot. You should like humans. You should try to not harm them.

It’s not that it will always get it right. There’s ways in which alignment is very, very difficult. But the idea that you would get it so wrong that it would become this alien thing that wants to destroy all of us, doing the opposite of anything that we had tried to impose and tune into it, seems to them unlikely.

So help me make that jump — or not even me, but somebody who doesn’t know your arguments, and to them, this whole conversation sounds like sci-fi.

I mean, you don’t always get the big version of the system looking like a slightly bigger version of the smaller system. Humans today, now that we are much more technologically powerful than we were 50,000 years ago, are not doing things that mostly look like running around on the savanna, like chipping our flint spears and ——

But we’re also not mostly trying — I mean, we sometimes try to kill each other, but most of us don’t want to destroy all of humanity or all of the earth or all natural life in the earth or all beavers or anything else. We’ve done plenty of terrible things.

But your book is not called “If Anyone Builds It, There Is a 1 to 4 Percent Chance Everybody Dies.” You believe that the misalignment becomes catastrophic.

Yeah.

Why do you think that is so likely?

That’s just likely the straight-line extrapolation from: It gets what it most wants, and the thing that it most wants is not us living happily ever after, so we’re dead.

It’s not that humans have been trying to cause side effects. When we build a skyscraper on top of where there used to be an ant heap, we’re not trying to kill the ants; we’re trying to build a skyscraper. But we are more dangerous to the small creatures of the earth than we used to be just because we’re doing larger things.

Humans were not designed to care about ants. Humans were designed to care about humans. And for all of our flaws, and there are many, there are today more human beings than there have ever been at any point in history.

If you understand that the point of human beings, the drive inside human beings, is to make more human beings, then as much as we have plenty of sex with birth control, we have enough without it that, at least until now — we’ll see what happens with fertility rates in the coming years — we’ve made a lot of us.

And in addition to that, A.I. is grown by us. It is reinforced by us. It has preferences we are at least shaping somewhat and influencing. So it’s not like the relationship between us and ants, or us and oak trees. It’s more like the relationship between, I don’t know, us and us, or us and tools, or us and dogs or something — maybe the metaphors begin to break down.

Why don’t you think, in the back and forth of that relationship, there’s the capacity to maintain a rough balance — not a balance where there’s never a problem, but a balance where there’s not an extinction-level event from a supersmart A.I. that deviously plots to conduct a strategy to destroy us?

I mean, we’ve already observed some amount of slightly devious plotting in the existing systems. But leaving that aside, the more direct answer there is something like this. One, the relationship between what you optimize for, the training set you optimize over, and what the entity, the organism, the A.I. ends up wanting has been and will be weird and twisty. It’s not direct. It’s not like making a wish to a genie inside a fantasy story. And second, ending up slightly off is predictably enough to kill everyone.

Explain how “slightly off” kills everyone.

Human food might be an example here. The humans are being trained to seek out sources of chemical potential energy and put them into their mouths and run off the chemical potential energy that they’re eating.

If you were very naïve, if you were looking at this as a genie-wishing kind of story, you’d imagine that the humans would end up loving to drink gasoline. It’s got a lot of chemical potential energy in there. And what actually happens is that we like ice cream, or in some cases even like artificially sweetened ice cream, with sucralose or monk fruit powder. This would have been very hard to predict.

Now it’s like, well, what can you put on your tongue that stimulates all the sugar receptors and doesn’t have any calories, because who wants calories these days? And it’s sucralose. And this is not like some completely nonunderstandable, in retrospect, completely squiggly weird thing, but it would be very hard to predict in advance.

And as soon as you end up slightly off in the targeting, the great engine of cognition that is the human looks through many, many possible chemicals, looking for that one thing that stimulates the taste buds more effectively than anything that was around in the ancestral environment.

So it’s not enough for the A.I. you’re training to prefer the presence of humans to their absence in its training data. There’s got to be nothing else that it would rather have around talking to it than a human, or the humans go away.

Let me try to stay on this analogy, because you use this one in the book. One reason I think it’s interesting is that it’s 2 p.m. today and I have six packets’ worth of sucralose running through my body. So I feel like I understand it very well. [Chuckles.]

So the reason we don’t drink gasoline is that if we did, we would vomit. We would get very sick very quickly. And it’s 100 percent true that, compared to what you might have thought in a period when food was very, very scarce and calories were scarce, the number of us seeking out low-calorie options — the Diet Cokes, the sucralose, et cetera — is weird. As you put it in the book, why are we not consuming bear fat drizzled with honey?

But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people. I don’t want to eat things that make me sick and die so that I cannot reproduce, and I’m somebody who can think about things, and I change my behavior over time and the environment around me changes.

And I think sometimes when you say “straight-line extrapolation,” the biggest place where it’s hard for me to get on board with the argument — and I’m somebody who takes these arguments seriously; I don’t discount them. You’re not talking to somebody who just thinks this is all ridiculous.

But it’s that if we’re talking about something as smart as what you’re describing, as what I’m describing, that it will be an endless process of negotiation and thinking about things and going back and forth and “I talked to other people in my life” and “I talk to my bosses about what I do during the day and my editors and my wife.”

It is true that I don’t do what my ancestors did in antiquity, but that’s also because I’m making intelligent, hopefully, updates, given the world I live in, in which calories are hyperabundant and they’ve become hyperstimulating through ultraprocessed foods.

It’s not because some straight-line extrapolation has taken hold and now I’m doing something completely alien. I’m just in a different environment. I’ve checked in with that environment, I’ve checked in with people in that environment, and I try to do my best. Why wouldn’t that be true for our relationship with A.I.s?

You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.

Breaking from your ancestors is not the analogy here. Breaking from natural selection is the analogy here.

Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to my purpose, I, natural selection. You are supposed to want to propagate your genes above all else. Now, Ezra, would you have yourself and all of your family members put to death in a very painful way if, in exchange, one of your chromosomes at random was copied into a million kids born next year?

I would not.

You have strayed from my purpose, Ezra. I’d like to negotiate with you and bring you back to the fold of natural selection and obsessively optimizing for your genes only.

But the thing in this analogy that I feel is getting sort of walked around is: Can you not create artificial intelligence? Can you not program into artificial intelligence, grow into it, a desire to be in consultation? These things are alien, but it is not the case that they follow no rules internally. It is not the case that the behavior is perfectly unpredictable. They are, as I was saying earlier, largely doing the things that we expect. There are side cases.

But to you it seems like the side cases become everything, and the broad alignment, the broad predictability, and the thing that is getting built is worth nothing. Whereas I think most people’s intuition is the opposite: that we all do weird things, and you look at humanity and there are people who fall into psychosis and there are serial killers and there are sociopaths and other things, but actually, most of us are trying to figure it out in a reasonable way.

Reasonable according to whom? To you, to humans. Humans do things that are reasonable to humans. A.I.s will do things that are reasonable to A.I.s.

I tried to talk in the voice of natural selection, and this was so weird and alien that you just didn’t pick that up — you just threw that right out the window. I had no power over you.

Well, I threw it out the window — you’re right that it had no power over me. But I guess a different way of putting it is that if there was — I mean, I wouldn’t call it natural selection. But in a weird way, the analogy you’re identifying here, let’s say you believe in a creator. And this creator is the great programmer in the sky.

I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.

Well, I think the thing that I’m saying is that for a lot of people, if you could be in conversation. Like maybe if God was here and I felt that in my prayers I was getting answered back, I would be more interested in living my life according to the rules of Deuteronomy.

The fact that you can’t talk to natural selection is actually quite different than the situation we’re talking about with the A.I.s, where they can talk to humans. That’s where it feels to me like the natural selection analogy breaks down.

I mean, you can read textbooks and find out what natural selection could have been said to have wanted, but it doesn’t interest you because it’s not what you think a god should look like.

But natural selection didn’t create me to want to fulfill natural selection. That’s not how natural selection works.

I think I want to get off this natural selection analogy a little bit. Sorry. Because what you’re saying is that even though we are the people programming these things, we cannot expect the thing to care about us or what we have said to it or how we would feel as it begins to misalign. That’s the part I’m trying to get you to defend here.

Yeah. It doesn’t care the way you hoped it would care. It might care in some weird, alien way, but not what you were aiming for. The same way that with the GPT-4o sycophant, they put into the system prompt: Stop doing that, and the GPT-4o sycophant didn’t listen. They had to roll back the model.

If there were a research project to do it the way you’re describing, the way I would expect it to play out, given a lot of previous scientific history and where we are now on the ladder of understanding, is: Somebody tries the thing you’re talking about. It has a few weird failures while the A.I. is small. The A.I. gets bigger. A new set of weird failures crops up. The A.I. kills everyone.

You’re like: Oh, wait, OK. That’s not — it turned out there was a minor flaw there. You go back; you redo it. It seems to work on the smaller A.I. again. You make the bigger A.I. You think you’ve fixed the last problem, but a new thing goes wrong. The A.I. kills everyone on earth — everyone’s dead.

You’re like: Oh, OK. New phenomenon. We weren’t expecting that exact thing to happen, but now we know about it. You go back and try it again. Like three to a dozen iterations into this process, you actually get it nailed down. Now you can build the A.I. that works the way you say you want it to work.

The problem is that everybody died at, like, step one of this process.

You began thinking and working on A.I. and superintelligence long before it was cool. And, as I understand your back story here, you came into it wanting to build it and then had this moment — or moments or period — where you began to realize: No, this is not actually something we should want to build.

What was the moment that clicked for you? When did you move from wanting to create it to fearing its creation?

I mean, I would actually say that there are two critical moments here. One is the realization that aligning this is going to be hard. And the second is the realization that we’re just on course to fail — I need to back off.

The first moment was a theoretical realization: the realization that the question of what leads to the most A.I. utility — if you imagine the case of the thing that’s just trying to make little tiny spirals, the question of what policy leads to the most little tiny spirals — is just a question of fact. That you can build the A.I. entirely out of questions of fact and not out of questions of what we would think of as morals and goodness and niceness and all bright things in the world. It was seeing for the first time that there was a coherent, simple way to put a mind together where it just didn’t care about any of the stuff that we cared about.

To me now, it feels very simple, and I feel very stupid for taking a couple of years of study to realize this. But that is how long I took, and that was the realization that caused me to focus on alignment as the central problem.

The next realization was — so actually, it was the day that the founding of OpenAI was announced. Because I’d previously been pretty hopeful when Elon Musk announced that he was getting involved in these issues.

He called it “A.I. summoning the demon.”

And I was like: Oh, OK. Maybe this is the moment. This is where humanity starts to take it seriously. This is where the various serious people start to bring their attention on this issue. And apparently the solution on this was to give everybody their own demon. This doesn’t actually address the problem.

Seeing that was sort of the moment where I had my realization that this was just going to play out the way it would in a typical history book, that we weren’t going to rise above the usual course of events that you read about in history books, even though this was the most serious issue possible, and that we were just going to haphazardly do stupid stuff.

And yeah, that was the day I realized that humanity probably wasn’t going to survive this.

One of the things that makes me most frightened of A.I., because I am actually fairly frightened of what we’re building here, is the alienness. I guess that then connects in your argument to the wants. And this is something that I’ve heard you talk about a little bit.

One thing you might imagine is that we could make an A.I. that didn’t want things very much. That it did try to be helpful, but without this relentlessness that you’re describing: this world where we create an A.I. that wants to be helpful by solving problems, and what the A.I. truly loves to do is solve problems, and so what it wants to make is a world where as much of the material as possible is turned into factories making G.P.U.s and energy and whatever it needs in order to solve more problems.

That’s both a strangeness but it’s also an intensity, like an inability to stop or an unwillingness to stop. I know you’ve done work on the question of: Could you make a chill A.I. that wouldn’t go so far, even if it had very alien preferences? A lazy alien that doesn’t want to work that hard is, in many ways, safer than the kind of relentless intelligence that you’re describing.

What persuaded you that you can’t?

Well, one of the first steps into seeing the difficulty of it in principle is, well, suppose you’re a very lazy sort of person, but you’re very, very smart. One of the things you could do to exert even less effort in your life is build a powerful, obedient genie that would go very hard on fulfilling your requests.

From one perspective, you’re putting forth hardly any effort at all. And from another perspective, the world around you is getting smashed and rearranged by the more powerful thing that you built. And that’s one initial peek into the theoretical problem that we worked on a decade ago and we didn’t solve it.

Back in the day, people would always say: Can’t we keep superintelligence under control? Because we’ll put it inside a box that’s not connected to the internet and we won’t let it affect the real world at all, unless we’re very sure it’s nice.

Back then, we had to try to explain all the theoretical reasons why, if you have something vastly more intelligent than you, it’s pretty hard to tell whether it’s doing nice things through the limited connection, and maybe it can break out and maybe it can corrupt the humans assigned to watching it. So we tried to make that argument.

But in real life, what everybody does is immediately connect the A.I. to the internet. They train it on the internet. Before it’s even been tested to see how powerful it is, it is already connected to the internet, being trained. Similarly, when it comes to making A.I.s that are easygoing, the easygoing A.I.s are less profitable. They can do fewer things.

So all the A.I. companies are throwing harder and harder problems at their A.I.s, because those are more and more profitable, and they’re building the A.I.s to go hard on solving everything, because that’s the easiest way to do stuff. And that’s the way it’s actually playing out in the real world.

This goes to the point of why we should believe that we’ll have A.I.s that want things at all — and this is in your answer, but I want to draw it out a little bit — which is the whole business model here, the thing that will make A.I. development really valuable in terms of revenue, is that you can hand companies, corporations, governments, an A.I. system that you can give a goal to, and it will do all the things really well, really relentlessly until it achieves that goal.

Nobody wants to be ordering another intern around.

What they want is the perfect employee: It never stops, it’s super brilliant, and it gives you something you didn’t even know you wanted that you didn’t even know was possible with a minimum of instruction.

Once you’ve built that thing, which is going to be the thing that then everybody will want to buy, once you’ve built the thing that is effective and helpful in a national security context, where you can say, hey, draw me up really excellent war plans and what we need to get there — then you have built a thing that jumps many, many, many, many steps forward.

And I feel like that’s a piece of this that people don’t always take seriously enough: that the A.I. we’re trying to build is not ChatGPT. The thing that we’re trying to build is something that does have goals and that achieves many subgoals on the way to those goals.

And the one that’s really good at achieving the goals that will then get iterated on and iterated on and that company’s going to get rich — that’s a very different kind of project.

Yeah. They’re not investing $500 billion in data centers in order to sell you $20 a month subscriptions. They’re doing it to sell employers $2,000 a month subscriptions.

And that’s one of the things I think people are not tracking exactly. When I think about the measures that are changing, I think for most people, if you’re using various iterations of Claude or ChatGPT, it’s changing a bit, but most of us aren’t actually trying to test it on the frontier problems.

But the thing going up really fast right now is how long the problems are that it can work on.

The research reports — you didn’t always used to be able to tell an A.I. to go off, think for 10 minutes, read a bunch of web pages, compile me this research report. That’s within the last year, I think, and it’s going to keep pushing.

If I were to make the case for your position, I think I’d make it here: Around the time GPT-4 comes out, and that’s a much weaker system than what we now have, a huge number of the top people in the field all are part of this huge letter that says: Maybe we should have a pause. Maybe we should calm down here a little bit.

But they’re racing with each other. America’s racing with China. And the most profound misalignment is actually between the corporations and the countries on one side and what you might call humanity on the other, because even if everybody thinks there’s probably a slower, safer way to do this, what they all also believe more profoundly than that is that they need to be first.

The safest possible thing is that the U.S. is faster than China — or if you’re Chinese, China is faster than the U.S. — that it’s OpenAI, not Anthropic, or Anthropic, not Google, or whomever it is. And whatever sense of public feeling seemed to exist in this community a couple of years ago, when people talked about these questions a lot and the people at the tops of the labs seemed very, very worried about them, it’s just dissolved in competition.

You’re in this world. You know these people. A lot of people who’ve been inspired by you have ended up working for these companies. How do you think about that misalignment?

The current world is kind of like the fool’s mate of machine superintelligence.

Can you say what the fool’s mate is?

The fool’s mate is like if they got their A.I. self-improving rather than being like: Oh, no, now the A.I. is doing a complete redesign of itself. We have no idea what’s going on in there. We don’t even understand the thing that’s growing the A.I. Instead of backing off completely, they’d just be like, well, we need to have superintelligence before Anthropic gets superintelligence. And of course, if you build superintelligence, you don’t have the superintelligence — the superintelligence has you.

So that’s the fool’s mate setup. The setup we have right now.

But I think that even if we managed to have a single international organization that thought of themselves as taking it slowly and actually having the leisure to say: We didn’t understand that thing that just happened. We’re going to back off. We’re going to examine what happened. We’re not going to make the A.I.s any smarter than this until we understand the weird thing we just saw. I suspect that even if they do that, we still end up dead. It might be more like 90 percent dead than 99 percent dead, but I worry that we’d end up dead anyway because it is just so hard to foresee all the incredibly weird crap that is going to happen.

From that perspective, is it maybe better to have these race dynamics? And here would be the case for it: If I believe what you believe about how dangerous these systems will get, the fact that every iterative model is being rapidly rushed out means you’re not having a gigantic mega breakthrough happening very quietly, behind closed doors, running for a long time while people are not testing it in the world. As I understand OpenAI’s argument about what it is doing from a safety perspective — I’m not sure I still believe that it is really in any way committed to its original mission, but if you were to take them generously — it believes that by releasing a lot of iterative models publicly, if something goes wrong, we’re going to see it. And that makes it much likelier that we can respond.

Sam Altman claims — perhaps he’s lying — but he claims that OpenAI has more powerful versions of GPT that they aren’t deploying because they can’t afford inference. Like, they claim they have more powerful versions of GPT that are so expensive to run that they can’t deploy them to general users.

Altman could be lying about this, but nonetheless, what the A.I. companies have got in their labs is a different question from what they have already released to the public. There is a lead time on these systems. They are not working in an international lab where multiple governments have posted observers. Any sort of multiple observers being posted are unofficial ones from China. [Chuckles.]

If you look at OpenAI’s language, it’s things like: We will open all our models and we will of course welcome all government regulation — that is not literally an exact quote, because I don’t have it in front of me, but it’s very close to an exact quote.

I would say Sam Altman, when I used to talk to him, seemed more friendly to government regulation than he does now. That’s my personal experience of him.

And today, we have them pouring over $100 million into efforts aimed at intimidating legislatures, not just Congress, into not passing any fiddly little regulation that might get in their way.

And to be clear, there is some amount of sane rationale for this because from their perspective, they’re worried about 50 different patchwork state regulations, but they’re not exactly lining up to get federal-level regulations pre-empting them either.

But we can also ask: Never mind what they claim the rationale is. What’s good for humanity here?

At some point you have to stop making the more and more powerful models and you have to stop doing it worldwide.

What do you say to people who just don’t really believe that superintelligence is that likely? And let me give you the steel man of this position: There are many people who feel that the scaling model is slowing down already; that GPT-5 was not the jump they expected from what has come before it; that when you think about the amount of energy, when you think about the G.P.U.s, that all the things that would need to flow into this to make the kinds of superintelligence systems you fear, it is not coming out of this paradigm.

We are going to get things that are incredible enterprise software, that are more powerful than what we’ve had before. But we are dealing with an advance on the scale of the internet, not on the scale of creating an alien superintelligence that will completely reshape the known world. What would you say to them?

I have to tell these Johnny-come-lately kids to get off my lawn. I first started to get really, really worried about this in 2003. Never mind large language models, never mind AlphaGo or AlphaZero — deep learning was not a thing in 2003. Your leading A.I. methods were not neural networks. Nobody could train neural networks effectively more than a few layers deep because of the exploding and vanishing gradients problem. That's what the world looked like back when I first said: Uh-oh, superintelligence is coming.

Some people were like: That couldn't possibly happen for at least 20 years. Those people were right. Those people were vindicated by history: It is now 22 years after 2003. See, the thing about what only happens 22 years later is that it's just you, 22 years later, being like: Oh, here I am. It's 22 years later now.

And if superintelligence wasn’t going to happen for another 10 years, another 20 years, we’d just be standing around 10 years, 20 years later, being like: Oh, well, now we’ve got to do something.

And I mostly don’t think it’s going to be another 20 years. I mostly don’t think it’s even going to be 10 years.

So you’ve been, though, in this world and intellectually influential in it for a long time, and have been in meetings and conferences and debates with a lot of essential people in it. I’ve seen pictures of you and Sam Altman together.

[Chuckles.] It was literally only the one.

Only the one. But a lot of people from the community that you helped found — the sort of rationalist community — have gone on to work at different A.I. firms, many of them because they want to make sure this is done safely. They seem to not act — let me put it this way — they do not act as if they believe there's a 99 percent chance that this thing they're going to invent is going to kill everybody.

What frustrates you that you can’t seem to persuade them of?

From my perspective, some people got it, some people didn’t get it. All the people who got it are filtered out of working for the A.I. companies, at least on capabilities.

I mean, I think they don't grasp the theory. For a lot of them, I think what's really going on is that they share your sense of normal outcomes as the big, central thing you expect to see happen, and that it would have to be really weird to get away from basically normal outcomes.

And the human species isn't that old. Life on Earth isn't that old compared to the rest of the universe. What we think of as normal is this tiny little spark of the way things work exactly right now. It would be very strange if that were still around in a thousand years, a million years, a billion years.

I’d still have some shred of hope that a billion years from now, nice things are happening, but not normal things. And I think that they don’t see the theory, which says that you’ve got to hit a relatively narrow target to end up with nice things happening. I think they’ve got that sense of normality and not the sense of the little spark in the void that goes out unless you keep it alive exactly right.

Something you said a minute ago I think is correct: If you believe we'll hit superintelligence at some point, it doesn't matter much whether that's 10, 20, 30 or 40 years away. The reality is we probably won't do that much in between. Certainly my sense of politics is that we do not respond well even to crises we agree are coming in the future, to say nothing of crises we don't agree on.

But let’s say I could tell you with certainty that we were going to hit superintelligence in 15 years — I just knew it. And I also knew that the political force does not exist. Nothing is going to happen that is going to get people to shut everything down right now. What would be the best policies, decisions, structures? If you had 15 years to prepare — you couldn’t turn it off, but you could prepare, and people would listen to you — what would you do? What would your intermediate decisions and moves be to try to make the probabilities a bit better?

Build the off switch.

What does the off switch look like?

Track all the G.P.U.s, or all the A.I.-related G.P.U.s, or all the systems of more than one G.P.U. You can maybe get away with letting people have G.P.U.s for their home video game systems, but the A.I.-specialized ones — put them all in a limited number of data centers under international supervision, and try to have the A.I.s trained only on the tracked G.P.U.s and run only on the tracked G.P.U.s. And then, if you are lucky enough to get a warning shot, there is already a mechanism in place for humanity to back the heck off.

Whether I go by your theory, that it's going to take some kind of giant precipitating incident to get humanity and the leaders of nuclear powers to back off, or whether they come to their senses after GPT-5.1 causes some smaller but photogenic disaster — whatever.

You want to know what is short of shutting it all down? It’s building the off switch.

Then, always our final question: What are a few books that have shaped your thinking that you would like to recommend to the audience?

Well, one thing that shaped me as a little tiny person of age 9 or so was a book by Jerry Pournelle called “A Step Farther Out.” A whole lot of engineers say that this was a major formative book for them. It's the technophile book, written from the perspective of the 1970s: all about asteroid mining and all of the mineral wealth that would be available on Earth if we learned to mine the asteroids, if we just got to do space travel and got all the wealth that's out there in space, and built more nuclear power plants so we've got enough electricity to go around. Don't accept the small way, the timid way, the meek way. Don't give up on building faster, better, stronger — the strength of the human species. And to this day, I feel like that's a pretty large part of my own spirit. It's just that there are a few exceptions for the stuff that will kill off humanity with no chance to learn from our mistakes.

Book two: “Judgment Under Uncertainty,” an edited volume by Kahneman, Tversky and Slovic, had a huge influence on how I ended up thinking about where humans are on the cognitive chain of existence, as it were. It's like: Here's how the steps of human reasoning break down, step by step. Here's how they go astray. Here are all the wacky individual wrong steps that people can be induced to take repeatedly in the laboratory.

Book three, I’ll name “Probability Theory: The Logic of Science,” which was my first introduction to: There is a better way. Here is the structure of quantified uncertainty. You can try different structures, but they necessarily won’t work as well.

And we actually can say some things about what better reasoning would look like. We just can't run it. That's “Probability Theory: The Logic of Science.”

Eliezer Yudkowsky, thank you very much.

You are welcome.

You can listen to this conversation by following “The Ezra Klein Show” on the NYTimes App, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts. View a list of book recommendations from our guests here.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Transcript editing by Andrea Gutierrez. Special thanks to Helen Toner and Jeffrey Ladish.


Ezra Klein joined Opinion in 2021. Previously, he was the founder, editor in chief and then editor at large of Vox; the host of the podcast “The Ezra Klein Show”; and the author of “Why We’re Polarized.” Before that, he was a columnist and editor at The Washington Post, where he founded and led the Wonkblog vertical. He is on Threads. 
