DNYUZ

Why Are We Still Driving?

April 30, 2026

It feels as if we’ve been hearing about self-driving cars for a long time, but now they’re really here: ferrying people to work, school and nightlife from Los Angeles to Nashville, and poised to spread to just about every big city in America.

My guest this week is very optimistic about a future where the cars take over. He writes about self-driving automobiles and transportation policy on his Substack, “Changing Lanes,” and he’s a co-author of a recent book with the stark title “The End of Driving.”

We talked about the potential benefits of this transformation, and as someone who loves the open road, I pressed him on what’s lost — in freedom and mastery — if we don’t have to be in the driver’s seat anymore.

Below is an edited transcript of an episode of “Interesting Times.” We recommend listening to it in its original form for the full effect. You can do so using the player above or on the NYTimes app, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.

Ross Douthat: Andrew Miller, welcome to “Interesting Times.”

Andrew Miller: Thank you. It’s a pleasure to be here.

Douthat: I want you to start by giving me a sales pitch for self-driving cars. Explain why people might welcome them. What would be good about a self-driving future?

Miller: So we can approach this from the micro or the macro level. At the macro level, 40,000 Americans die every year in road incidents, and that counts only those who die. It excludes those who suffer life-altering injuries. None of those need to happen. And the vast majority of those are caused by driver error.

So at scale, the more automated driving there is, the safer the roads are, the safer Americans are, the safer anyone who uses the roads is. But at a micro level — not just safety — driving is an immense consumer of people’s attention. They have to give, or they should give, their full attention to the road.

Douthat: In theory, yes. That’s the goal. That’s the ambition.

Miller: If they don’t, we get more of those road incidents that I was describing. But what it allows you to do is it unlocks vast reservoirs of attention. Hundreds of millions of hours every year that Americans would get back for other things.

And as a good liberal, I don’t prescribe a vision of the good life, whether they want to play Candy Crush or whether they want to read The New York Times. There’s any number of things that they could do, but they can’t right now because they must pay attention to the road. It will be a huge liberation of time and attention, which can lead to so many good things.

Douthat: When would you expect, on the current trajectory, self-driving cars, automated driving to become a normal part of life in lots and lots of North American cities?

Miller: I like to — it’s not so much a joke, it’s a wry observation, that around this time last year, I could name every city that Waymo was operating in from memory because there were so few.

Sometime late last summer that stopped being true. I believe they have announced plans to be in more than 15 cities. Their footprint in each of those cities is small, but they’re going to grow quickly. So it really depends on how fast Waymo can scale and how fast their two big competitors, Zoox and Tesla, can scale.

I’m always wary of making predictions because this field is so rife with hucksters and charlatans who make predictions. But if I ——

Douthat: It’s an occupational hazard of podcasting, though, so a general prediction.

Miller: 10 years is a good anchoring thing, 2035.

Douthat: By 2035, then, the normal North American city will have a large fleet of self-driving taxis? Most likely, they’ll be mostly taxis in this scenario?

Miller: Yes.

Douthat: OK. Why is this accelerating and taking off now? We’ve been hearing about self-driving cars for as long as I’ve been an adult. Is it connected just directly to the A.I. revolution? What’s the big push at this moment?

Miller: It is partially connected to the A.I. revolution.

The A.I. revolution is making some of the problems that were associated with iterating the technology easier to solve. But Google’s been working on this since the first decade of the century. The reason that Google and others have been working on it, and the reason that Elon Musk thinks self-driving is the future, is that, much like generative A.I., teaching a car how to drive is very expensive initially, but once you know how to do it, it is very cheap to copy.

And then because it is a shared vehicle as opposed to a privately owned vehicle, a robotaxi can be used for as many hours of the day as you can keep it clean and charged. Then, it can just spit out money for you endlessly, every hour, every day, every week. So, from a business point of view, it’s a wonderful business to be in if you can spend enough money to get to the point where you have a safe and reliable product.

Douthat: How much of an obstacle is serious bad weather to this kind of technology right now?

Miller: One way to look at it is if humans can drive in bad weather, a machine can. The question of how they do it depends on which technology stack you’re thinking of. The Waymo approach relies on the consensus of the field that for a self-driving car to “know” where it is, it has to rely on a variety of senses.

So you, Ross, you can see, but you can also smell, you can also taste. The Waymo view is that a self-driving car should be able to see with its cameras, it should see with its radar, it should see with its lidar.

Lidar: Think of it like radar, but with light. It shoots out lasers and measures how long the light takes to bounce back. So it can know with great fidelity where everything is in space around the vehicle, out to tens of meters.

So if you have a car that’s got all of these modes, then rain might occlude a sensor, snow might confuse the lidar, but the radar works. So the more sensing modes you have, the more expensive your car is, the harder it is to scale up your operations because every car costs so much, but the more reliable it is in a variety of conditions.
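
The time-of-flight arithmetic behind lidar is simple enough to sketch. This is a toy illustration, not any vendor's code, and the 667-nanosecond figure is just an example value:

```python
# Toy time-of-flight calculation for a lidar pulse (illustrative only).
C = 299_792_458.0  # speed of light in a vacuum, meters per second

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to a reflector: the light travels out and back, so halve it."""
    return C * round_trip_s / 2.0

# A pulse that returns after roughly 667 nanoseconds puts the object
# about 100 meters away.
print(round(lidar_range_m(667e-9), 1))  # 100.0
```

A real sensor fires hundreds of thousands of such pulses per second in a sweep, which is how it builds the dense 3-D point cloud the planner consumes.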

Tesla is making a big bet that you don’t need any of that. Tesla thinks you can do it all with cameras, and if they’re right, that gives them a huge advantage because cameras are very, very cheap.

So Tesla, once they start rolling out their cybercab, they will be able to produce vehicles in vast amounts and reach scale very quickly. But it’s not clear that that approach is as safe because it doesn’t have the same sensors, and it’s not clear that they have got the same skill of programming behind them that Waymo does.

So it’s very much an open contest between these two — which is going to win?

Douthat: So the limiting factor on Tesla potentially right now is safety, and the limiting factor on Waymo is cost. And then the presumption is that, essentially, in the same way that Uber lost tons and tons of money for an extended period of time, that was OK because everyone assumed they would make money eventually.

This is the same kind of arc, right?

Miller: Yeah. It took Waymo a big investment to get this far, but they are so far ahead and they’ve got such a great record. They’re going to be very difficult to catch. I wish Tesla all the best during this contest. I think they’re going to need it.

Douthat: So you’ve got the mid 2030s as a zone where it’s as normal to hail a self-driving car in an American city as it is to hail an Uber right now. At what point does this become part of people’s transportation reality outside cities? Whether as a kind of suburban phenomenon the way Uber is right now, or is there a self-driving future in the near term for rural America?

Miller: The rural case is easy to answer: no. Just like Uber isn’t a big thing in rural America now.

My take is that the American suburb is actually a good bet for robotaxis. If you can get robotaxis cheap enough, there’s enough demand in the suburbs to make it work, particularly because of the way that we’ve designed the North American suburbs since Levittown. It is really hard to retrofit those for public transit, whereas with robotaxis, it is entirely possible that the suburbs get them, with your local suburb paying some sort of stipend to a robotaxi company to offset the cost of doing business and make the economics profitable.

So I can absolutely see this being something that would work in the American suburbs, but it may require us to put aside 20th century ideas of what a public transit agency is.

Douthat: In that scenario, people in the suburbs are using them for commuting? Is there car-pooling? What does the culture of self-driving car use look like in that scenario?

Miller: Well, now you get into an interesting question because there’s two schools of thought.

There is the transport planning professionals school, and then there’s everybody else’s school, or the average American school. The transport planning professional says roads are fixed, finite spaces, there’s only so many cars that can fit on them. This is an asset we have to use efficiently, therefore, we should have shared vehicles.

Just like we get 20 people on a bus, we should have multiple people in every robotaxi or shuttle bus. You’ll get more use and everyone will have more efficient trips.

And then the average American says: Go pound sand. I like being alone. I like my privacy. I don’t want to share my space with strangers. I’m going to be in a robotaxi alone, and if you won’t let me do that, then I will buy my own car and it can drive me around.

So the question is how we thread that needle between planning a future of efficient use and the overwhelming revealed preference.

Douthat: In this extremely hypothetical and contingent timeline, when is it normal for people to have their own self-driving car available for purchase? It’s not part of a taxi fleet. You’re just like, I’m going to buy a car, and of course it’s going to be a self-driving car because why wouldn’t I want that capacity?

Miller: The trick there is liability.

Tesla’s going all in on complete self-driving, but the conventional automakers — your VWs and your Fords, and particularly your GMs — they would love for you to have driving assist that gets more and more sophisticated every year. The steering wheel never goes away, but it can handle more and more of your daily driving in 10 or 12 years. If we solve the liability issue, it can be doing your driving almost all the time.

There’s no reason a privately owned vehicle, if you’re willing to pay for it, can’t have all of these sensor systems to make it work. If Waymo leads the charge and makes lidar rigs incredibly cheap, everyone’s going to pile on that.

Douthat: What level of self-driving is available in Teslas right now?

Miller: So I drive a Tesla personally. You hear a lot about these levels — Level 3, Level 4, Level 5.

I think that sort of language is misleading. All you need to understand about self-driving is: Does it require a human to be actively monitoring the situation or does it not?

Douthat: Right. You get in the back seat and it goes.

Miller: But if I turn on autopilot in my privately owned Tesla, I need to be keeping my foot on the brake and my hands on the wheel and my eyes on the road at all times.

The car can handle most situations, but some it can’t, and it’s my responsibility to intervene in those cases. At the most sophisticated level of a Tesla, you can plug in your destination and it will take you there. It’ll take you at the speed limit, or more than the speed limit if you tell it to, and it’ll keep you in the center of the lane. It’ll make turns, it’ll stop. It’ll even change lanes for you.

Douthat: And when you say you have to keep your hands and feet active while it’s doing all this, what are you doing with them? Are you just hovering over the brake and the steering wheel until a large bison stampedes across the road?

Miller: Exactly. You don’t have to do anything, but as they said on “The Simpsons” once, “maintain yourself in a state of catlike readiness” in case something happens. There was a time when I was using my autopilot, traveling in a part of my town I didn’t know very well, and it wanted to take me down a private road, which was sealed off by a chain hung between two posts. It took me toward it at full speed, and I was curious, so I was willing to wait to see how close it would get. I braked before it did. I had to slam on the brakes before we hit the chain, but it was a near-run thing.

Douthat: So we don’t know basically how good Tesla self-driving is going to be. You can’t generalize from what the cars can do right now. We are essentially waiting to see what their emergent taxi fleet looks like?

Miller: Well, they are operating in Austin right now, and they have been operating in Austin for more than half a year now. And we have some safety data, and how you feel about what Tesla’s been reporting will depend on what standard you’re holding it to. Most of the time, it works just fine. But where Waymos have no safety operators in them, no human in the vehicle controlling it, Tesla does.

Douthat: In Austin?

Miller: In Austin. And those safety drivers have to intervene an awful lot. So far, the safety record of Tesla is not nearly what Waymo’s was when it was at this stage of its journey, but I mean, it’s always tough in early days. Will they be able to get better? I hope so, but they’ve got to do it quickly.

Douthat: How autonomous are these cars really? You already mentioned that Tesla has these interventions. It’s like you’re assessing the car’s safety or reliability depending on how often a human sitting in it has to intervene. Waymo doesn’t have humans sitting in them, but there are still interventions for Waymos, right?

Miller: There are.

Douthat: What does that look like?

Miller: We learned about this because Waymo was called to the Senate to testify. So we got an inside look at this. Waymo says that what they have is remote assistance. So what that means is that it is not like someone playing a video game where they’ve got a fake steering wheel in front of them and they jack into the car and then drive it and then jack out, and the car computer takes over.

It’s more like laying digital bread crumbs. The car isn’t sure what to do. It encounters a situation that is confusing to it because there are a bunch of traffic cones, but a few of them are knocked over, and that’s sufficiently unusual that the car is uncertain. So it calls a human remote assistant who looks at it and says, “Oh, it’s safe to proceed, just don’t knock over that cone.” Or the assistant might even go so far as to say, “I can see on your map, now go to Point A, then go to Point B, then go to Point C. At Point C, you will no longer be confused.”

That’s what they call remote assistance. So is that driving? People have differences of opinions on this. I say it’s not; I say that the remote assistance is what it says it is. It’s a human providing additional input to the computer to make its decisions. But yeah, there are cases where the computer cannot figure it out on its own and it does need help.
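
The “bread crumbs” model Miller describes can be caricatured in a few lines of code. Every name below is hypothetical, invented for illustration; it says nothing about Waymo’s actual interface, only that the human returns guidance (waypoints and advice) rather than control inputs:

```python
# Hypothetical sketch of remote assistance as guidance, not teleoperation.
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    lat: float
    lon: float

@dataclass
class AssistanceReply:
    proceed: bool
    waypoints: list = field(default_factory=list)  # ordered "bread crumbs"
    note: str = ""

def resolve_confusion(scene: str) -> AssistanceReply:
    """A human reviews the scene and replies with hints; the car keeps driving itself."""
    if scene == "knocked-over cones":
        return AssistanceReply(
            proceed=True,
            waypoints=[Waypoint(37.7749, -122.4194), Waypoint(37.7751, -122.4190)],
            note="Safe to proceed; keep clear of the fallen cone.",
        )
    # Anything the assistant can't resolve: hold the safe position.
    return AssistanceReply(proceed=False, note="Hold safe position.")
```

The design point is the return type: the reply contains no steering or throttle commands, which is why Miller argues this is advice to the computer, not driving.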

Douthat: And just to make the case that this is something more like driving, the human in that situation has the capacity to direct the car?

Miller: Yes, it’s giving an instruction to the computer.

Douthat: What is the passenger’s capacity to affect what the self-driving car does? Once you’ve bought your fare, it’s taking you to Fisherman’s Wharf or something, and you think it’s doing something wrong as the passenger, is there anything you can do? Can you stop the car?

Miller: What you can do is press a button and speak to — it’s not one of those remote operators — but you can speak to a concierge, if I can use that term, and explain what the situation is. That there’s an emergency, or there’s something of concern, and then the remote operator is able to send messages to the car.

The typical thing that we want a self-driving car to do in any situation is, if it’s genuinely uncertain or there’s a problem, to reach a safe position, which normally means to pull over to the side of the road, come to a full and complete stop, and then wait for further directions.

There are situations where you can imagine that would be a bad thing, like if there’s an earthquake. But, under normal circumstances, that’s what it does. So you’ve got limited ability. You can’t override, but you can talk to a human who has some capacity to override.
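
The fallback behavior described above (pull over, come to a complete stop, wait) is essentially a small decision ladder. This is a hypothetical sketch of that logic, not any vendor’s implementation:

```python
# Toy version of the "reach a safe position" fallback for an uncertain car.
def fallback_action(confident: bool, pulled_over: bool, stopped: bool) -> str:
    """Return the next step for a car that may have lost confidence."""
    if confident:
        return "continue route"
    if not pulled_over:
        return "pull over to the side of the road"
    if not stopped:
        return "come to a full and complete stop"
    return "wait for further directions"
```

The earthquake caveat in the transcript is exactly the weakness of such a fixed ladder: a rule that is safe under normal conditions can be the wrong move in a rare one.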

Douthat: But presumably the human-owned self-driving car of 2035 would be sold with essentially a human override, right? It would be unlikely that people would be buying self-driving cars that didn’t promise that you can take control of this thing.

Miller: You would think so.

Douthat: I would assume so. I’m just trying to envision how this plays out.

Miller: But Mr. Musk has said there is an absolute market for people to buy a car that is entirely self-driving and doesn’t have a human interface.

So, is he right? If what he says comes to pass, we’ll be able to test your hypothesis within months.

Douthat: Interesting. That definitely cuts against my own intuitions. Let’s talk about liability. Which you’ve already mentioned as a bigger issue than cost in terms of making personal sales commercially viable. Would you say that?

Miller: I would say it is the single issue that is most in need of clarity that we need to solve because it’s what’s going to hold back this sector if we don’t.

Douthat: OK. Why is it such a hard issue if, as you suggested at the outset of the conversation, these cars will be so much safer?

Miller: Well, from my point of view, it shouldn’t be. We should take manufacturers at their word, and we should say to them in classic American fashion: Put up or shut up.

If you think that this is so safe, you assume 100 percent of the liability. If there is an incident while the — what we call the A.D.S. — the automated driving system is in control, and it is later shown that the A.D.S. is at fault, you’ve got to take on the liability. I think that is a clear, bright line. I think it’s very easy to argue for, and it would be easy to implement, and if we had that, we would be able to move forward very clearly. The problem is: There is reluctance among the carmakers to live up to that standard, and that’s a problem.

Douthat: What is Waymo’s liability right now? If you get hit by a Waymo taxi in L.A., who is liable?

Miller: Waymo is.

Douthat: So they’ve accepted it for their current fleet.

Miller: Yes. Waymo has done so. Tesla, I think to their discredit, has suggested that they might not want to. Certainly with regard to their driver assist systems, they’ve been reluctant to assert that responsibility because I think the potential for lawsuits is so vast. They are trying to protect themselves.

And what I think regulators need to do is say, “You need to have the courage of your convictions. So we’re going to hold you to that standard. We’re going to insist upon it.”

Douthat: But this is a pretty radically different setup than the entire liability setup we have right now.

Miller: Yeah, liability is tricky. American liability is based on the idea that no consumer can hope to stand up to a big company. So we put all of the weight in legal proceedings on the customer side, and that’s led to a jurisprudential culture, if I can use that word, where the cost of getting anything wrong from the manufacturer’s side is vast. It’s existentially vast.

So I told you earlier that there were three big companies in this space. There’s Waymo, there’s Zoox and there’s Tesla. There used to be a fourth. It was called Cruise, and it was an arm of General Motors. It was involved in an accident a few years ago where a human-driven car hit someone who was jaywalking and threw them into the path of a Cruise vehicle, which ran them over. And then, because the Cruise vehicle didn’t know what to do, it moved to the safe position. It pulled over to a stop, dragging that poor, unfortunate soul with it. They weren’t killed, but they were severely injured.

Douthat: So their injury was much worse because the car did the extra thing.

Miller: Yeah. A human driver would never have made that mistake.

Douthat: A human driver might have hit the person, but wouldn’t have dragged them.

Miller: A responsible human driver, I think, would have hit them, but would’ve known there was a human under the car and would have stayed put. The car didn’t have a sensor underneath, and by dragging that person, it exacerbated their injuries.

That incident ended up killing the company. Not just because of the lawsuit, but they were a bit squirrelly with the regulators who removed their license to operate, and General Motors said, “We can’t fund this anymore.” So it all got shut down with one incident.

So I understand why the firms are being very gun-shy of assuming liability here, but we need to insist upon it.

Douthat: But does that mean, essentially, you have to achieve not just a higher level of safety than a human driver, but some extraordinarily higher level because you will be liable in the way that a normal auto manufacturer wouldn’t be?

Miller: Because this is a new technology, regulators are absolutely holding a self-driving car to a much higher standard than a human-piloted or human-operated car. Some people find that obnoxious: As soon as it’s better than the average driver, they say, let it rip, because you’d be saving lives on net.

That’s not how lawmakers think. They don’t think about how to get the best outcomes on net; they want a situation where no one can be blamed. So they insist that it’s got to be as safe as reasonably possible: what an engineer calls six nines, or 99.9999 percent reliability.
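
The “six nines” figure can be made concrete with quick arithmetic. This is a rough sketch, not any regulator’s actual formula, and the 99.9-percent comparison rate below is an invented example, not a measured human-driver statistic:

```python
# Rough arithmetic for "N nines" reliability (illustrative only).
def expected_failures(events: int, nines: int) -> float:
    """Expected failures among `events` at a reliability of `nines` nines."""
    return events * 10.0 ** (-nines)

# At six nines (99.9999 percent), a million trips expect about 1 failure.
# At a hypothetical 99.9 percent (three nines), the same million trips
# would expect about 1,000.
million = 1_000_000
print(expected_failures(million, 6))  # about 1
print(expected_failures(million, 3))  # about 1000
```

Each added nine cuts the expected failure count by a factor of ten, which is why the gap between “better than average” and “as safe as reasonably possible” is so large.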

I don’t think that’s an unreasonable standard. Sure, it’s going to slow down reaching scale with these things. But there is so much distrust of big tech and self-driving cars generally that the strategy of going slow, being safe and showing that you’re not harmful and not cavalier is so important if we’re going to get the good outcomes that I think this technology can give us.

Douthat: So in practice, how many people could a self-driving fleet kill to be viable, would you say? Is it like one?

Miller: Well, it’s important to note that to date, the only severe incident was that one Cruise incident. And that was a severe injury, it wasn’t a death.

Douthat: Right. But there are very few self-driving cars on the road. I mean, they’re in many cities, they’re coming, et cetera. But we’re not talking about millions and millions of cars or hundreds of thousands of cars. We’re talking about a small number.

Miller: Yep. But what we have, courtesy of the state of California, are very strong transparency requirements, and I hope that’s something the federal government adopts. So we know about every incident that a Waymo has been involved in, and we’ve combed through them. And we know that Waymo is safer than human drivers already. You could argue the denominator isn’t there yet, compared to the hundreds of millions of miles that humans drive in the United States every year versus the relatively small fleet, so we can’t know for certain.

But looking at where that data is coming from, San Francisco is not an easy city to drive in. Like, it is a complex environment. If it’s achieving safety there, I find it hard to believe that it would find Topeka to be a much more difficult place to work.

Douthat: But I just want to stay with the weirdness factor for a minute.

Because I think that’s an important hurdle here for people. Again, in the example that you gave of the Cruise disaster, it was the car doing a weird, inhuman thing after it hit someone, right? And there have been other examples in Florida where Teslas in autopilot mode were involved in similar accidents: They collided with the sides of white tractor-trailers crossing highways because their cameras, as I understand it, just couldn’t see the white against the sky. Again, it’s not the kind of accident that human beings are used to getting into. And I just wonder, isn’t that part of the hurdle that people will have to get over to accept these cars?

It’s not just the number of accidents; it’s that when they do happen, they will feel weirder and more random, maybe, than just, like, a guy running a red light and hitting someone.

Miller: Well, I was writing about this for my newsletter: Waymo had an incident a few months back where one of its cars killed a bodega cat in San Francisco.

You’re right. Would a human have made that mistake? I’m not sure, but every time one of these vehicles makes a mistake, we notice it, and because it’s an inhuman thing, where we’re used to only having human activity, it does weird us out. It does make us nervous.

Regulators, I think, are responding to that, and to Waymo’s credit and Zoox’s credit, they’re moving slowly and carefully to avoid sparking concern that we’ve unleashed robots on our streets that are unaccountable. They don’t want us to think about it that way.

Douthat: Right. And there was a case in Santa Monica where a child was hit, not killed.

Miller: Yeah.

Douthat: And in that case, I think Waymo said, “Well, a human driver would’ve been much more likely to hit her at a higher speed.” And the Waymo car successfully braked at a speed that a human driver wouldn’t have.

But you could imagine a scenario where a Waymo enters a crowded area and drives faster than a normal human would because it isn’t picking up on weirder things going on in that area. Like, maybe there’s a fire in a building and everyone is slowing down to rubberneck. And the Waymo doesn’t see it, but then it successfully slams on the brakes.

But it’s a different kind of thing on the road, I think. It’s like a different way of seeing the road.

Miller: So the thing to say about that is just like other kinds of sophisticated A.I. systems, data is what it needs. I can only speculate that the Santa Monica incident happened because it was insufficiently aware that at this particular time of day near a school, it should be behaving even more cautiously than normal.

Well, it “knows” that now. And so, we’ll have fewer incidents like this every month that passes. The data sets of all these companies get richer. These sorts of incidents should get fewer, which is another reason I approve of the strategy of going slow and being humble and being safe because that’s how we win. That’s how we thread this needle.

Douthat: Is there a self-driving car equivalent of like a ChatGPT hallucination? Are there scenarios where the car just does something and you don’t know why it did it?

Miller: Oh, absolutely. You can find videos on YouTube, if you’ve got the stomach for it, of, again, Teslas — because they’ve got the most sophisticated driver assist systems — where it’s just moving along in the lane, then does a hard left, goes right off, through opposing traffic, off the road, and you struggle in vain to know what possibly encouraged it to do that.

So it does happen. Just like hallucinations with ChatGPT, they’re getting better all the time, but it’s not perfect. So again, if I was a regulator, I would say, given this scenario, if you’re going to operate in public spaces, you had certainly better stand 100 percent behind it, because otherwise it would be irresponsible.

Douthat: And what are the political obstacles to universal Waymo?

Miller: It’s interesting because it does scramble traditional Democrat, Republican, right-to-left lines. On the one side, you’ve got labor interests and you’ve got Democratic lawmakers who are sensitive to labor concerns, wanting to go slow.

But you’ve also got Democratic lawmakers who are sensitive to the plight of the most vulnerable, and they identify Uber drivers as one of those classes that is worthy of protection. But on the other hand, you also have people who are concerned about spying. As we’ve already talked about, the nature of a modern vehicle and certainly a self-driving car has sensors going all the time. It’s collecting data of everywhere it goes all the time. Who has access to that data? Certainly the operator of the vehicle, the Waymos or the Teslas or the Zoox of this world, do, and that means that a sufficiently motivated bad actor could get them as well.

Or, General Motors, just with conventional vehicles, was selling all the data of everyone driving a GM car to third parties, arguing that, “Well, we collected this data, it’s ours now. We can sell it.”

With Waymo or a self-driving car, it’s so much richer. There’s so much more potential for data capture, and so civil libertarians and people with national security concerns have got questions.

Douthat: And in terms of security, like fears of terrorism, for instance, right? Like someone who used superintelligent A.I. to hack into Waymo’s system would presumably have the capacity to take over hundreds or thousands of cars at once, right?

Is that, just in terms of scenarios that people are reasonably afraid of …?

Miller: So in that scenario, certainly the advent of L.L.M.s means that we’ve unleashed superhacking. The two points to make are: One, you couldn’t control every car. You’d have to hack into every one. And as previously mentioned, the car’s driving itself. So you’d need to find a very sophisticated way to confuse the car about its environment.

I’m no technical expert; I think it could be done. But I think it’d be really hard to do, which means the second point, which is in the language of security: Waymo is a hard target. They’ve got all this cybersecurity behind them. If I was a bad actor, America’s power grids, America’s utilities, there are so many softer targets out there where you can do more havoc with less effort. I’m not going to say more given ——

Douthat: No, that’s true. We don’t want to sketch out terrorist plans on this show. But I do think there’s a connection to these psychological elements that I’m interested in where I feel that the idea of having the automobile you’re in taken over is, because it’s unfamiliar and novel and tied to personal privacy and personal control in a way, that just seems like a more terrorizing act than a blackout. And people have lived through blackouts before.

Miller: The opening of the new “Naked Gun” movie features a murder committed with a self-driving car as the weapon.

There’s a long history of this in our popular culture, like this is an obvious place for our fears to go. So you’re onto something that this is weird and strange, but in a way that sort of triggers us to be afraid.

Douthat: So then how does the sale happen? When we started this conversation, you made a very strong case that there are these huge benefits in terms of just a much, much safer road.

Miller: Yeah.

Douthat: But that accumulates slowly and in patchwork, and you don’t have the data for a while or a long time. Most people don’t get into car accidents as a regular thing.

As many car accidents as there are in the U.S., most people go through a year or five years without getting in one. So, how do you, as an advocate for this technology or some version of this technology, see it getting over the hump of different forms of public resistance?

Miller: So in the first season of “Mad Men,” there’s an elevator operator who takes you up from the lobby to the Sterling Cooper offices. A few seasons later, there’s no elevator operator, because elevator operators were on their way out by the mid-60s.

I am sure the first time someone rode in an automatic elevator, where they just pressed a button and it whisked them to their floor without a human there to intervene, it felt strange. But I imagine the fifth time it happened, it didn’t feel strange at all. That’s certainly everyone’s reported experience with Waymos and similar self-driving cars.

The first time you do it, it’s either eerie or magical. The second time you do it, you don’t notice. You pull out your phone and you’re doing whatever it is that you’re doing on that. And it’s just like someone is driving; you pay no attention to it any more than you pay attention to your Uber. So again, I don’t know if this is their strategy, but from what I can tell, one of the advantages of Waymo introducing very small fleets into many cities is to inoculate us against this idea that it is strange. The more people who get to ride even once, the sooner the spell is broken. And we’ll come to see that driving is something a machine should be good at. Why shouldn’t I have a machine do it?

And that’s a world as you’ve alluded to, which will be safer. But it requires us to be comfortable with it. So I hope that everyone listening to this podcast the next time they are — perhaps they’re traveling for business or pleasure — in a city where Waymo or Zoox or Tesla is operating, tries it out.

And I think they will see that this is like they say about other A.I., just another technology, a normal, boring technology.

Douthat: Right, normal and boring.

Go forward from that. Give me the good timeline. Because you’re an optimist about this tech, but you have a couple of different scenarios for the future. One of which is better than the other.

So give me the good scenario for 2035 and beyond the way this technology gets adopted and how the world changes.

Miller: So the good scenario would be Waymo and Zoox and Tesla have all — despite their different approaches — reached scale. So there’s healthy competition in the robotaxi market, and everyone in every major metro is using them.

It’s 40 to 50 percent less costly, which means that you travel more or you’ve got more discretionary income to spend on other things. People are giving up their cars. Every household that used to own two cars in an urban environment now owns one. Every household that owned one car now owns none. They use robotaxis to fill the role of one of those cars.

Consequently, we’ve got less need for parking. All the parking infrastructure and parking space can be returned to other uses, higher and better uses than just vehicle storage. And people are safer, fewer people are dying in road incidents. They get a certain number of hours back every week that they can put to whatever purposes they want. They are richer, but they’re also freer in the sense they can exercise those different parts of themselves more.

Douthat: And there’s less pollution or lower energy costs. We haven’t talked about energy and climate change much, but that’s part of the story too, right?

Miller: Every automated vehicle in development that I’m aware of is electric. So to the extent that you want to see a transition away from internal combustion engine cars, which I do, then that’s a better world too.

Yes, there’s going to be more demand for electricity, but it seems that that’s going to happen because of A.I. no matter what happens in this sector. So we’ll have to solve that problem anyway.

Douthat: And in your good scenario, people own fewer cars, right? Everything is more efficient.

People get more accustomed, maybe, to sharing cars and so on. So there might even be less electricity used.

Miller: Could be. I think the Jevons paradox suggests that we’ll just use more of it.

Douthat: We’ll just use more. Yes, that’s true. The car, if it’s cheaper, we’ll use more of it.

OK. Well, that’s a good bridge to what’s the bad scenario? Again, where self-driving cars spread and become ubiquitous, but the outcome isn’t as happy for society.

Miller: Congestion is much worse. Trip times get longer. If you’re sitting there playing Candy Crush, maybe you don’t notice, but pity the poor soul who doesn’t have access to this and has to drive and their driving gets worse all the time. It’s easy to imagine a world where we have enough Waymos to really increase congestion, but not enough to really put a dent in private car ownership so it isn’t rational on the margin to get rid of a lot of parking. So we have more congestion, but we don’t get to reclaim space.

But worse than that, public transit goes into a death spiral. In a world where robotaxis make ride-hail half the cost it is now, you get so many people defecting to robotaxis that public transit gets worse. At the same time, it costs more money to operate, and more and more cities can’t afford it.

So they pull back, leading to a greater defection to robotaxis. So people that can’t afford even cheaper robotaxi fares now have a worse transit experience or no transit experience. So they experience less mobility. That’s a bad world. In many ways, it’s worse than the one we live in now.

Douthat: So what is the fundamental place where the fork happens?

Miller: I would say there are two inflection points, and they’re related to one another. The good scenario depends on Waymo being available quickly and cheaply to everyone. If there’s a hard cap on the number of Waymos, you don’t get there. So regulators need to be willing to say, “No, a future where every other car is a robotaxi is a good thing,” and they don’t try to prevent that outcome.

And I say it’s related because the other side of it is: What do public transit agencies do? Do they see robotaxis as the enemy that has to be kept out? Or do you go with what they call the 20th-century soft embrace and say, “We’re going to bring these in.” We don’t run long-feeder buses anymore that come twice an hour and take 35 minutes to get to the nearest hub.

Instead, we replace that with — we own some robotaxis, or we license some robotaxis, and anyone can get a robotaxi trip that takes them to or from the nearest higher-order stations. So we begin to bring automated driving into our transit.

Our buses ——

Douthat: Buses would be robobuses, right?

Miller: Yeah. That’s a really hard row to hoe, because public transit agencies are some of the most unionized environments in this country. They’re going to see this as a threat to their livelihoods, which it is. What I hope we can do, then, is not just throw them out en masse. I’m a transit advocate. I want there to be good transit systems, but I also want transit to benefit from the best technology available. If that means doing a big one-time buyout package, then we should do that. We should take that deal. But it might be a hard sell in an era of limited budgets.

I don’t know. I think there’s going to be so much money to be made on the robotaxi side that there’s got to be some sort of deal to make whole some of the people who are going to lose out.

Douthat: So those obstacles to the better future that you’ve just sketched are kind of left-coded. There are obstacles associated with regulatory environments in big cities with how mass transit works, things like that. I’m also interested in obstacles to your happy future, though that might be sort of right-coded. And above all, the willingness of people in a country like the United States to actually own substantially fewer cars.

Because it seems like your good future depends on that too, right? It’s not just people are willing to take robotaxis, Waymo, and so on. It’s also that as they get willing to do that, they just decide they don’t need to have their own car available and that does, I think, pretty clearly cut against cultural and behavioral norms in a place like the United States, right?

Miller: Oh, yeah. We’ve seen how in urban spaces, because owning a car in a place like Manhattan is such a pain in the neck, more and more younger people are choosing to forgo a car. They’re not even getting driver’s licenses. There are always going to be people who want to own their own car. I think young parents will always want their own car to move their kids around.

Workers who need to carry tools to and from the job are going to want their own vehicle to do that. The objective is not a world where no one owns a car; it’s just one where you don’t need to own as many as you do now.

Douthat: How is it sustainable, though, to have that kind of persistent private car ownership if self-driving is so much safer than regular driving? Like we talked earlier about the challenge of liability and how figuring out liability is how you figure this out, but isn’t there a certain point where that issue flips and everyone looks around and says, “My God, a Waymo is a thousand times safer than Ross Douthat behind the wheel of a Toyota Sienna, terror of greater New Haven.”

And therefore, my insurance premiums for owning a Toyota Sienna that I need to fill with gear for my oversized family go up and up and up, and effectively non-self-driving starts getting priced out.

Isn’t that a plausible corollary of your optimistic vision for a self-driving future?

Miller: I think it is a plausible corollary. I don’t think it’s in the near or even the medium term. But this century, assuming we don’t have some sort of catastrophe, could that happen? Absolutely, it could. But I think it would be so gradual because, Tesla’s ambitions aside, I think private cars are going to have steering wheels for decades to come.

They’re just going to have sophisticated driver assist systems or even self-driving, but only on the highway or only during the day. I think what will happen is that you will be expected to use such systems when you can. And if you choose not to and you get in an accident, your insurance might say, “Well, our policy says that you have to rely on the systems in situations where it’s appropriate.”

So it’s not going to go away overnight; it’ll be incremental. And I still think that’s true to the good as those systems get better and better. Once it reaches a point where it can drive better than us in all scenarios, why wouldn’t we want that?

Douthat: Let’s talk about that. Do you like to drive?

Miller: I cannot say that I do.

Douthat: OK. I like to drive. I’m not a car person. I’ve never bought an old car and tinkered with it, and I’m not any kind of car brand fanatic. I drive, as I said, minivans right now, but I have always enjoyed driving. It was a pretty big deal to me learning to drive in the middle of my teens as both an assertion of independence, separation from parents, and also just as a way of understanding and mastering the world, like a kind of step into adulthood. And it is distinctively American in certain ways, but it’s American in a way that fits our geography. We’re a big country where there’s lots of places where mass transit doesn’t work and driving has always made sense.

It makes sense that we have this kind of culture and this form of adult being in the world. Isn’t something lost if that is all given up?

Miller: Well, some of what is lost is what you’ve just described. It is a very American thing, the romance of the road, freedom, independence, the ability to go where you want and be in control of it.

There’s another angle to it. We don’t have many rites of passage for young people in contemporary liberal America anymore. One of them used to be learning to drive. It was a sign that you are an adult: We trust you with this very dangerous piece of machinery. And when you can do it, you know that you’ve arrived. It’s also what I suppose a philosopher would call embodied knowledge, right?

You aren’t just a brain, you’re also moving this thing. And so you have to pay attention. You have to have good reflexes. These are valuable things. And yeah, we are on track to see them — probably not in our lifetimes, but sometime in this century — we’re on track to see them disappear or become very minor.

Douthat: The driver’s license as a rite of passage phenomenon has already weakened in parts of the United States. And it’s a famous part of the larger story of American teenagers being more risk-averse and going around less in the age of the iPhone. Teens are more likely to postpone getting their license. That’s already diminished to some degree. So you can fold this story into the larger story of the kind of safety-focused screenification of American youth.

Miller: And bigger than that. Like the death of embodied knowledge where it’s not just screenification. It’s — I’m a writer, which means I spend most of my time looking at a screen and writing. I’m not working with my hands. That’s the trend not just of youth, but that’s the trend of American life, right?

So we need to solve this somehow, but it shouldn’t be regarded as the special burden of our cars to solve it for us. We need rites of passage. We need more opportunities to live in our bodies and learn embodied skills. But let’s not say that we’re going to draw the line at driving cars. That seems the wrong place to draw it when they can offer us so many offsetting benefits.

Douthat: But what is the right place to draw it?

It just seems like people are going to say that about every step along the road to disembodied existence, right? Because at every stage you’re going to say, “Well, this new situation is much more efficient. It’s much safer.”

You don’t want your kid to die in a car accident. Obviously, I don’t want my kid to die in a car accident. But that sales pitch is going to be true for any form of embodied knowledge, right?

Miller: Mm-hmm.

Douthat: Doesn’t embodied knowledge, by its nature, contain risk and peril? Isn’t that what embodiment is all about?

Miller: It absolutely is. And all I can say is: If we expect driving to give us full and healthy relationships to the world and to ourselves, then I think we’re asking too much of driving. You asked me where we should draw the line. I have to say, I’m not a minister and I’m not a philosopher, so I can’t tell you that.

All I can tell you is that if we have a tool that can save lives while also giving people their time back, I think we would be fools not to pick it up and then invest the time and money we save into solving this problem.

Douthat: Well, be a political prophet then, though, just for a minute.

If the scenario you’re describing comes to pass, wouldn’t you expect this to be potentially just a vast culture war issue too? Where you have blue states in the United States, liberal states having one set of insurance rules for driving your own car, and red states having another set. And you cross over into the free state of Montana, and it’s much easier to get a driver’s license, or it’s much easier to own a car. It seems like what you’re describing is a potential political cultural fault line that could actually define American politics in an interesting way.

Miller: Oh, yes. I mean, there’s nothing Americans can’t turn into a culture war battle if they try.

Douthat: Well, that’s because we care so much, Andrew.

Miller: But the interesting thing about it is that right now it goes the other way.

Right now, Texas and Tennessee are much more open to self-driving than blue states. California is a big exception because it’s the home of the industry. But Washington and Massachusetts and right here in New York State, there’s much more friction for the arrival of self-driving cars. So it seems like, it’s ——

Douthat: No, that’s the fascinating thing.

The libertarian states are building the gallows on which human agency and independence will eventually be hanged. That seems like a total possibility.

Miller: Yeah. History will surprise you. The ironies run deep.

Douthat: No, that’s a really good point. You live in Toronto. Have you ever driven to Vancouver?

Miller: Oh, no.

Douthat: No, never?

Miller: No. I’ve driven to Montreal several times. I’ve driven as far out as Halifax. It’s a several-day drive.

Douthat: A several-day drive, OK.

I drove across the country with my family a few years ago. Whenever you do something in life that you come to with a set of philosophical priors, the experience obviously tends to confirm them.

But I left that experience feeling very grateful that I have the right and the freedom to get behind the wheel of a car and steer it over giant, vast mountain ranges and so on. So really, my takeaway from the end of this conversation is I want to get The New York Times to pay you to rent a large American automobile and drive it from Toronto to Vancouver and see if it makes you any more inclined to defend one’s God-given right to drive a car.

Miller: I’d be happy to run that experiment.

Douthat: All right, we’ll talk about it off camera. Andrew Miller, thank you so much for joining me.

Miller: Thank you very much. It’s been a pleasure to be here.

Thoughts? Email us at [email protected].

This episode of “Interesting Times” was produced by Sophia Alvarez Boyd, Victoria Chamberlin and Emily Holzknecht. It was edited by Jordana Hochman. Mixing and engineering by Pat McCusker, Sophia Lanman and Efim Shapiro. Cinematography by Marina King. Video editing by Dani Dillon and Julian Hackney. The supervising editor is Jan Kobal. The postproduction manager is Mike Puretz. Original music by Isaac Jones, Sonia Herrero, Pat McCusker and Aman Sahota. Fact-checking by Kate Sinclair, Mary Marge Locker and Kelsey Lannin. Audience strategy by Shannon Busta, Emma Kehlbeck and Andrea Betanzos. The executive producer is Jordana Hochman. The director of Opinion Video is Jonah M. Kessel. The deputy director of Opinion Shows is Alison Bruzek. The director of Opinion Shows is Annie-Rose Strasser. The head of Opinion is Kathleen Kingsbury.

