This is an edited transcript of “The Ezra Klein Show.” You can listen to the episode wherever you get your podcasts.
If you are living in New York’s 12th Congressional District, you may have seen these endless attacks on Alex Bores, one of the Democrats running there.
Archival clip of political attack ad: He made hundreds of thousands of dollars building and selling the tech for ICE, enabling ICE and powering their deportations while making bank. Now he’s running from his past. ICE is powered by Bores’s tech.
Yikes. Bores did work for Palantir. The rest of that attack is not what you might call true, but what interests me is who is paying for it: the super PAC Leading the Future and its subsidiary Think Big.
Who funds the super PAC Leading the Future? Well, among its largest donors are the venture firm Andreessen Horowitz and co-founders of OpenAI and — wait for it — Palantir.
So why is a co-founder of Palantir — Joe Lonsdale, in this case — funding a super PAC to try to destroy a candidate on the grounds that he once worked for Palantir? The reason is that Leading the Future is a super PAC dedicated to destroying anyone who might regulate the tech industry, in general, or A.I., specifically, in a way these funders don’t like.
And Bores is a member of the New York State Assembly. He co-wrote and passed the RAISE Act, one of the first pieces of A.I. regulation passed in any major state.
Sam Altman, a co-founder of OpenAI — who, it should be said, has been horribly targeted in recent violent attacks by anti-A.I. individuals — was trying to cool down temperatures here. He wrote: “It is important that the democratic process remains more powerful than companies.”
Altman is right.
There is a principle here that is much more important than any single congressional seat. You’ll hear it, honestly, if you just listen to A.I. founders talk; they say they believe in it.
But his co-founder Greg Brockman, one of the major donors to Leading the Future, is trying to make sure the democratic process is subordinate to the companies. He is trying to do it by funding a super PAC that can unleash enough money to crush any legislator who crosses them.
Bores, in general, has been a pretty effective legislator. In just over three years in the New York State Assembly, he has passed 30 bills and has been recognized by the Center for Effective Lawmaking as one of the most effective freshman legislators.
But it’s his ideas on regulating A.I. that particularly interest me, in part because I think they make sense and are worth discussing — things like an A.I. dividend — but in part because I just really do not want to live in the world that Leading the Future is trying to create: a world where, if the A.I. industry hoovers up enough money, it can then destroy anyone who might try to regulate it.
What’s funny about all this is: Alex Bores is not an anti-A.I. kind of guy. I think he gets A.I. pretty well. I think he’s trying to balance its risks and its possibilities.
But if you’re looking for a pure A.I. backlash candidate, he’s not it. And I think that tells you something: that what Leading the Future — and super PACs and groups like it that might emerge — are actually trying to do is stop anyone from legislating on A.I.
If the democratic process is actually going to mean something here, ideas are going to have to speak louder than this kind of money. So I wanted to hear what Alex Bores would actually do if given the chance.
Ezra Klein: Alex Bores, welcome to the show.
Alex Bores: Thanks for having me.
I want to begin with your early political memories. How did your politics begin?
Well, it began with something that I wouldn’t necessarily call politics — only in retrospect would I put that phrase on it. But it was with my parents in union fights.
When I was in second grade, my dad and his colleagues were fighting Disney for better health care. The contract dispute dragged on for over a year, and Disney wouldn’t budge.
Finally, the workers went on strike. In response, Disney locked them out for three months and cut off their health care benefits, including those of my dad’s friend, who was about to start chemotherapy.
Thankfully, the union stepped in, and they paid for the treatment, and he survived. But my dad would pick me up from second grade and bring me to the picket line, and that was my first experience of people working together for change.
He would put me in front of the Disney Store. We’ve all seen people walk past picket lines — it’s not hard to do. It’s a lot harder to walk past an 8-year-old with a sign that says: Disney is mean to my dad.
So that was my first lesson — that health care needs to be universal, but also that the way we win is by working together.
That if you’re one worker, you’re one person, you’re one anything advocating, it’s easy to get crushed. But if you have a union, you have an organization, you have a campaign, you have a movement — well, then you stand a chance.
What did your dad do for Disney?
My dad worked for “Monday Night Football” at the time. He did graphics and videotape and instant replay. He worked in the trucks and eventually became a technical director. He was one of the people actually sending out the signal before it hit your TV.
So you then studied industrial and labor relations at Cornell and got a computer science degree. I’m curious about what those two very different disciplines taught you.
Well, they sound very different, but every day they seem to be more and more intertwined. At the School of Industrial and Labor Relations, I learned economic theory. I learned collective bargaining. I learned how to run campaigns and organizations in ways that actually can change power and win things.
And I learned to stand up for working people and to view a lot of interactions in the world through that lens.
Wait, be specific about that. What did you learn about how to stand up for working people?
Well, my freshman year, we ran a campaign against Nike. Cornell’s athletic teams were sponsored by Nike.
I was part of a group called Cornell Students Against Sweatshops. It was affiliated with USAS, United Students Against Sweatshops.
They taught us how to build a campaign over time. We learned how to be strategic. You start with a clear demand.
In this case, Nike had laid off 1,800 workers in Honduras without giving them the legally mandated severance pay. We argued that the Cornell code of conduct required Nike to be responsible for its subcontractors’ actions and to make the workers whole.
So we put that into the demand. Then you build up over a period of education. We’d have teach-ins; we’d have sort of ridiculous actions to grab attention.
We did a “working out for workers’ rights,” where we were in the quad and just playing ’80s music and getting people to ask: Hey, what’s going on? And we’d say: Let me talk to you about what’s going on in Honduras.
Then you build up to more aggressive actions that require a reaction from the administration. We ended up being successful in that campaign. Cornell decided it was going to cut its contracts.
I think something like three weeks after Cornell made that announcement, Nike about-faced, paid the workers all the money they were owed and gave them job training and health care for a year.
You’re telling me about how you learned to do activism in college, which is interesting.
But I want to go a level deeper than that. You’re doing industrial and labor relations. What is the deeper theory or thesis of the relationship between workers and corporations, between labor and capital, that you came out of that with?
There’s so much that’s in contention between workers and capital.
But in the best world, you’re actually working together to grow the economy. Workers are not out there to bankrupt any company. They want the company to grow. There are fights over how you distribute the pie, but theoretically, both want to grow that pie.
Then there are really interesting relationships internationally. One of the things I discovered was that in so many of the countries where we thought labor conditions were awful, the laws on the books were actually quite good. The problem was enforcement. If those countries actually tried to enforce their laws, the factories would just up and leave and go somewhere else.
The lever where you could maybe change that is in the countries buying most of the goods. So we would apply pressure in the U.S. to hold those countries to the standards they had already set up for their workers.
I feel like you’re describing the education of a young radical here. You’re walking picket lines at 8 years old; you’re studying industrial and labor relations; you’re running campaigns against corporate malfeasance; you’re skeptical of globalization.
How do you end up at Palantir?
I really wanted to be a lawyer, but every lawyer I spoke to told me not to be a lawyer.
That was my experience, too.
[Both laugh.]
They were like: Take time off in between. Make sure that’s what you want to do.
I went to an economic litigation consulting firm called Cornerstone Research, where we were preparing expert witnesses for trial. We were doing economic modeling and playing with data. But I was interacting with lawyers all the time.
So I was building a skill set but could see what they were doing. I found I really enjoyed the economic modeling. I really enjoyed playing with data.
Add to that ideology that, as I’m growing up, I’m a Democrat. I believe that government can and should be a force for good. But that also means we take on the burden of proving it.
I was a young believer in — I probably wouldn’t put it in these terms back then — expanding government capacity and making sure government is actually delivering.
Palantir in 2014, during the Obama administration, was about how we could expand government capacity while protecting privacy and civil liberties. So at the time, it felt very much like the natural fit.
I want to stay in this 2014 moment, because this is a period when there is a lot of optimism that technology is going to solve some very fundamental problems of democracy.
We’re going to have all the civic tech; the interface between citizens and the government is going to be much smoother, much better; these companies are fundamentally good.
Google doesn’t want to be evil. Facebook wants to connect the world. Palantir wants to make your data comprehensible.
I think there’s also an underlying view that the answers to our problems are out there somewhere in these masses of data. And if you can just make the whole thing legible, you could get the answers.
Something sours pretty quickly after 2014. That feels like a very different ideological moment from the one we’re in now.
Entirely.
What was wrong about that? Or what would you add or change to my rendition of that optimism?
A lot of that is true. The Palantir story that was told to prospective employees — and Alex Karp would do this a lot — was that he most feared fascism. He had just finished being a German philosophy student, and he was most afraid of fascism developing.
Fascism happens when a government fails to provide for its citizens, and they start blaming someone else for it, and people then feed that hunger and that hatred. He couldn’t do anything about the latter, but he could do something about the government failing to deliver.
The reason that he wanted to do Palantir was, after Sept. 11, after this real rise in a feeling of being unsafe: Could we build the systems that would allow government to make people feel safe — but build them in a way that protected privacy and civil liberties?
That was the pitch. The fundamental idea was that we were there, in many ways, to stop fascism.
How did it work?
Trump was elected in 2016. That was a weird bit.
With the aggressive support of Peter Thiel, one of the early investors in Palantir. Would you call Peter Thiel a Palantir co-founder?
I think so. I think that’s the phrase that is given.
But Alex Karp was very much fighting for Hillary at the time. And if you look at the donations from Palantir employees, they tell a story that skews heavily toward the Democrats, as well.
Yes. Silicon Valley is very Democratic in this period.
Absolutely. Absolutely.
You have a lot of Obama administration figures who can’t go to Wall Street anymore — that’s not kosher for a Democrat — but you could go to Silicon Valley.
Trump’s election in 2016, but even more so his re-election in 2024, was a real failure of that mission. To now see leaders of the company, and of Silicon Valley broadly, throwing their lot in with what I think is a fascist regime is a really disappointing switch.
So you’re at Palantir from 2014 to 2019. You start as a data scientist. By the end, you’re one of the people leading the relationship with the government.
Yes. I focused on the federal civilian side.
So what is that work?
That was work with the Department of Justice, with the C.D.C. to track epidemics, with Veterans Affairs to better staff their hospitals and give veterans the care they deserve and need. It was helping a lot of the federal civilian agencies.
How much is what we now think of as A.I. and generative A.I. starting to come into the work you all are doing then?
Not at all. And here’s what I mean by that. Palantir was aggressively anti-A.I. in that period. It believed that data integration was the true source of value and that A.I. was a magic layer applied on top — that it was all marketing, and we were doing the real work of getting data to come together.
Can you describe the differences in those two views? What is data integration versus whatever they thought A.I. was?
A.I., in a very naïve sense — we talk about it in other ways now, and this was before agentic models and all of that — is doing analysis of data. And before you can analyze that data, it needs to be organized in a way that A.I. can make sense of.
But the actual thing that’s difficult is organizing all your data together. That requires hard work, and there’s no magic for it yet. The software, plus engineers going on site and doing a lot of the hard work of the manual hookups, was always going to be the true source of value.
So you’re at Palantir through the Obama administration and into the first Trump administration.
Yes.
Now, Palantir working with the government is a different animal depending on which government it’s working with.
Very much so.
How does that change?
I was leading the work at the Loretta Lynch and Barack Obama Department of Justice and then, all of a sudden, the Jeff Sessions and Donald Trump D.O.J. Priorities changed pretty drastically.
The work with the banks was probably wrapping up anyway, just because of time. But clearly, there was no more interest in that work.
Our contract had us choose three mutually agreed-upon case types. So I met with the new leadership after the transition — this is early 2017 — and said: What do you want to prioritize? What do you want to work on? And they said: the opioid epidemic. We said: Great, we definitely want to do that work.
They said: violent crime. We said: Cool, as long as it’s not a dog whistle, we’d love to work on that.
And then they said: civil immigration. And I said: We’re not touching that. That’s not the work that we are building this for.
I was empowered as the lead of the project to do that. The contract allowed me to, because it specified three mutually agreed-upon case types. While I was there and on the D.O.J. project, we didn’t do any of that work. That’s not how the decision went at every customer or in every project.
So Palantir during this period does begin working on immigration with the Trump administration.
I never worked on any of those projects, so I was never cleared on them. But to the best of my understanding, during that time, Palantir was not stopping the Trump administration from using its software for immigration.
I don’t think there was a building of features specifically for deportations, but I could be wrong about that. But even the fact that they weren’t going to stop it from being used in that way got a number of employees — myself included — quite upset.
You leave Palantir in 2019. Why?
Separately from me, on a project that I never worked on, Palantir had signed a contract with a department within ICE called H.S.I., Homeland Security Investigations. During the Obama administration, it was focused on anti-human trafficking, anti-drug trafficking, sometimes counterfeiting — things that are not controversial and that everyone would support.
Then, when Trump comes in, in 2017, they try to change the nature of that work. They try to get another part of ICE called E.R.O., Enforcement and Removal Operations — the part that everyone thinks of as ICE — to get access to the software and to use it for deportations.
There were a lot of conversations internally at Palantir about what was actually happening — as employees, we couldn’t always see that if we weren’t cleared on the project. A fundamental question came up: Why not write into the contract those same protections that we have elsewhere, where we can say: Don’t use it for deportations?
Eventually, executives made clear to us that they were not going to do that. They were going to renew the contract without putting in those guardrails.
So I made plans to quit.
There was a Bloomberg story that questioned this, clearly coming from somewhere inside Palantir. It says that there was, shortly before you left — I think it said five days before you left — a warning from H.R. about sexually explicit comments you had made to a co-worker.
And then, separately, when you did your exit interview, you said you were actually leaving because you were burned out and there was too much travel.
So I want to take these as pieces. Was there a sexual harassment claim against you at Palantir, and is that why you left?
No and no. This came out of an attack from executives at Palantir who are upset that I am pushing for A.I. regulation and that I’ve called out Palantir’s work in the past. As I told Bloomberg when they reached out, I had expressed my concerns about the work with ICE internally.
I had begun interviewing months and months before I had an offer in hand.
I then retold a story of something that had happened to me on the job. Someone who didn’t like that retelling had talked to human resources. H.R. had one conversation with me where I shared exactly what had happened, and that was the end of it.
There was no file, no letter, none of the things that are claimed in that story. They dropped the matter immediately.
You weren’t disciplined inside the company or something?
No, nothing like that.
That’s what the Bloomberg story seemed to say, but I wanted to check it. The infraction was a story you told or something you said, not something done with or toward a colleague.
Correct. The story goes into it. Can I retell it here? There was a paper-goods manufacturer that sold tissues, and its marketing department gave a presentation about the ways the tissues were being used. I retold that example from the presentation as an odd thing that had happened while working at the company.
And then the burnout and travel side of it — the argument there is that you’re making this claim that you took a moral stand against the way it’s being used, but actually, you were just kind of tired of working there.
As has been reported in multiple outlets, multiple current Palantir employees have backed me up: They heard me talk about ICE and stand up and do all of that. I have no idea what notes Bloomberg had from the exit interview.
I asked to see them. The Bloomberg reporter told me she didn’t actually have them — that this had just been told to her by the executives. So they could claim whatever they wanted on top of notes that, again, I never saw.
I know what I had said before and during, and that I had brought this up many times. A year after I left, Palantir emailed and called me, begging me to come back. It feels like if there had actually been a real thing there, they probably wouldn’t have done that.
You just heard me be fairly critical about Palantir; I had been before, as well. The executives there didn’t take kindly to that. And the super PAC that’s attacking me is against any regulation on A.I., and this is just another desperate hit by them.
I have been amused that one of the core attacks on you from the super PAC — which is partially funded by Joe Lonsdale, a Palantir co-founder — is that you worked at Palantir.
Correct.
That’s a pretty strong level of political shamelessness.
I would agree. So, I would say, is lying about a former employee’s record.
But they are very terrified. They’re very afraid of me in office, and beyond that, they’ve said publicly that they are trying to make an example out of me. They want to beat up on me so badly that when the idea of regulating A.I. comes up in the future, politicians run in the opposite direction.
They’re not primarily concerned with what is honorable or what is true. They are concerned with causing pain.
In 2022, you’re elected to the New York State Assembly. In 2025, you pass the RAISE Act, which gets us into the A.I. regulations you’re alluding to. This is one of the first major pieces of A.I. legislation passed by any state in the country.
Before we get into what it does, what was the philosophy behind it? When you were working on that bill, and I know you had co-sponsors on it, what were you all seeing and what were you all trying to achieve?
We were seeing A.I. develop extremely rapidly, and the industry itself warning about what was coming.
This is after the letter signed by so many executives saying that we should treat the risk of extinction from A.I. as equal to that of global nuclear war, with some promoting perhaps a pause. Many of them had signed voluntary commitments with the Biden White House, saying they were going to take certain safety precautions and that this was the first step toward binding federal regulation.
Then we saw no binding federal regulation come.
We’d also heard from the companies themselves that they were OK with certain safety standards. But they’re in a competitive marketplace, and if they saw their competitors starting to skimp on safety and cut corners, they would be forced to, as well.
When you hear that call, you say: OK, we should establish some baseline that people can’t go below, so that there are some established safety standards that everyone is playing by.
What’s the baseline you tried to establish?
There were a few provisions in there. One was that you had to have a safety plan that you made public and actually stuck to — one that largely followed industry best practices around how you were going to test the models for specific risks, how you were going to record those tests and what you would do with that information.
You had to report critical safety incidents, which we specifically defined in the bill, to the government. If something goes wrong in these sorts of ways — it may not have harmed anyone yet but could suggest something is coming — you have to let us know about it.
Those provisions largely survived until the end. There were two others that were in the original that ended up getting cut out.
One of them was that you can’t release a model if it fails your own safety test. It was basically designed for the way the tobacco companies operated — they were the first to know that cigarettes caused cancer but denied it publicly and continued to sell their products — or the fossil fuel companies that knew oil caused climate change but denied it. We were saying that if you knew your model was particularly risky, you would have to take action on that.
And the last provision was third-party audits, saying that you can put up whatever standard you want, you can assert that you’re going to follow it, but someone else should check your work — not the government but just a different party.
In the same way we have financial audits, the same way we have SOC 2 security audits, another party needs to look at it and say: Yes, you are following this.
Presumably, you’re working on this bill in 2024? 2025? Before it passes.
Yes.
How have your views on A.I., the risks it poses, the questions it raises, changed with the subsequent pace of model releases?
I think things have happened much faster than I thought they would. And I think our ability to pass legislation has moved much slower than I thought it would.
So that difference in speed between how A.I. is advancing and how the government reacts is wider than I was expecting when I started this process.
Have you thought about the change in public opinion? Because it looks to me like we’re seeing a pretty powerful A.I. backlash rising.
You have polls showing now that more Americans are worried about A.I. than are enthusiastic about it. There’s a lot of anti-data-center energy playing out throughout the country.
What have you made of how quickly the politics have shifted beneath A.I.?
That surprised me — both how many people have focused on it and how bipartisan it has remained.
You, of all people, know about polarization — and most issues end up polarized. This one hasn’t so far. It has resisted that longer than I thought it would.
If you talk to voters, across Republicans, Democrats and independents, you see pretty similar attitudes; across state legislators, pretty similar attitudes; even in Congress, there’s more bipartisanship than you would think.
Surveys regularly show that about 10 percent of people want to put the A.I. genie back in the bottle, to pretend it never existed. I empathize, but I don’t think that’s the way forward. Another 10 percent of people, the ones represented by the super PAC Leading the Future, want to just let it rip.
That is the super PAC that’s attacking you.
Yes. They want to just let it rip. They don’t care how many people it hurts, just how fast it moves.
Eighty percent of Americans see some benefits. But they also see a lot of risk and think it’s moving too fast and want to have some say in its development. The fact that it has stayed so bipartisan has surprised me, and also the fact that it has risen up in people’s minds so much has surprised me.
Has the pessimism around it surprised you? We were talking earlier about the period when there was a lot of optimism about tech, about software, about the internet.
I think you can really look from early computers, the early internet, all the way pretty late into the social media era.
Probably around Trump things begin to turn — Cambridge Analytica, algorithmic feeds. But that’s a long time when these systems and technologies are present for people, and there’s a fundamental optimism about them.
A.I. — ChatGPT, I think, is when this really burst into public consciousness. It’s 2023. We’re here in 2026, and the polling has already turned negative. The week before we recorded this, Sam Altman was targeted in two separate violent attacks. There was a Molotov cocktail thrown at his home.
Awful.
Two other people shot at his door.
I was a little shocked to see people celebrating these attacks online, saying: Where can we support the bail fund?
Yes.
This has moved into fury and fear and pessimism really, really quickly. Why do you think that is?
Well, there was a separate split in A.I. around capabilities. The debate used to be: Is this real, or is it stochastic parrots? Or usually, even before that: Is it just slop that is never going to actually replace a human?
Fancy autocomplete.
Exactly. Exactly. We had these debates on one dimension, which was: Is it good for people? Is it bad for people?
And then there was this other dimension: How big of an impact is it going to have? And I think that debate has collapsed. People are not skeptical of its power anymore — or some are, but fewer and fewer each day.
The intensity with which we’re having that first debate has really ramped up. But I think it has also been that we saw what happened with social media.
We saw what happened with these previous revolutions that were supposed to change everything for the better. We’ve seen platforms established with great promise, and then over time, once they get power, really turn on their users.
People are no longer willing to believe the story that is told about a technology or a platform always benefiting people. You see this argument from some of the A.I. founders. They say: Well, it will create material abundance for everyone. There will be no more poverty. Everyone will have everything.
And everyone is looking around saying: Of course, that’s not what’s going to happen. You’re a private company — you’re going to profit, you’re going to keep it all for yourself.
Sam Altman recently said it will be like a utility. But utilities are really highly regulated.
People are just not willing to believe that spin anymore, and yet they’re seeing changes in their lives really quickly.
Jasmine Sun, the A.I. writer, just wrote this interesting piece on A.I. populism. I thought the way she defined it was interesting — and a little more subtle than you normally hear.
She wrote:
I define A.I. populism as a worldview in which A.I. is viewed not only as a normal technology but as an elite political project to be resisted.
What she’s getting at there is that A.I. populism and the A.I. backlash tend to include two dimensions.
One is that this technology is being overhyped. The other, as it’s often put to me in emails, is that it is being pushed down our throats. That it’s not a thing people want, it is a thing being forced upon them.
Now there’s all this investment behind it. The investment needs to be paid off, so the companies really have to push it. If you take the technology’s power seriously, you see it in a different way — almost like any version of having A.I. in the economy is just going to be a way of paying off these huge investments. That we are not getting the technology we want; we are having a new paradigm forced upon us.
How do you think about that?
I think it’s a beautiful description. What I hear from my neighbors is very much the feeling that this is moving so quickly. That we don’t have control. And the American people, so far, have not had a say in it. So I think the first part of that definition — the skepticism about its capabilities — is shrinking as part of the dialogue, as we see A.I. do more and more.
But the fact that it is being thrown at us, and we currently don’t have control, I think is what has motivated so many people to be thinking about A.I.
It has always struck me that if you listen to the founders and leaders of the A.I. companies, they are very specific on the harms. The gains are very general sounding.
You’ll hear Dario Amodei talking about 50 percent of entry-level white-collar workers seeing their jobs automated away. There actually are Waymos on the streets now, and you can see that those could take jobs from taxi drivers and Uber drivers.
There has been all this talk about existential risk, the sense that you could build something smart enough to disempower human beings.
There’s a lot of specificity on replacing coders, and then you get these very vague explanations: It’s going to help with drug development. It’s going to solve material scarcity.
And I think if you’re a normal person being offered this technology — that might make sure your 13-year-old son has an A.I. porn bot before he has a real girlfriend, and you might lose your job, and maybe there’s some chance that the human race doesn’t maintain control over its own future — why wouldn’t you want a pause on that?
Absolutely. You’re seeing the harms day by day. Whether it’s your kid — the pedagogy at schools hasn’t been updated, and some people still think that assigning take-home essays teaches critical thinking. It doesn’t anymore.
On top of that, you see chatbots, and you see some of the truly horrific stories that have happened to teenagers. Maybe you go to your job, and your company now has a hiring freeze. They’re not laying people off yet, but they’re not doing their usual hiring, and you’re worried about what’s coming from that.
Are you all going to be necessary in the future?
Then you see your utility bill go up — and maybe a data center was built near you, maybe it wasn’t — but you’re starting to think about what’s causing that.
Then on top of that, you see people saying: Oh, yeah, and it might kill everyone. These are the news stories that are coming in, and you’re maybe not seeing that benefit.
And there are benefits, right? This is not a story of a technology that is just bad. But it’s moving really, really quickly, and a few people are controlling the direction, and many people have lost confidence in the government’s ability to steer it.
It becomes a question of whether democratic institutions can govern this technology before it governs us.
Well, I think pretty clearly, no.
Well, I’m running a campaign to change that.
I guess we’ll talk about that.
But I think being worried about how fast these systems are moving and having any awareness at all of how fast the U.S. government now moves should make one worried.
Absolutely.
One thing you do see is proposals emerging to try to slow A.I. down by functionally choking off some of the inputs.
There’s a Bernie Sanders-A.O.C. bill to impose a data center moratorium. And there’s some bipartisan interest in this: Ron DeSantis in Florida has a bill that would be very restrictive on data center construction.
Yes.
What do you think about a data center moratorium?
The Bernie Sanders-A.O.C. proposal is a moratorium until we pass real regulation that protects people. I agree with that. I think we should pass real regulation today.
Do you agree with the data center moratorium until we do?
Well, I think what they are calling for is that we need real regulation. They don’t think that bill is going to pass in this split Congress. They are setting the terms of the debate, which is: Why are we going forward with this until we’ve done the real work?
I think that’s the right question to ask. If I could wave a magic wand and pass any bill I wanted, it wouldn’t be the moratorium. It would be the regulations the moratorium is calling for. But putting that out there as a negotiating tactic, I think, meets the scale of the moment.
Bernie talks about the potential benefits of A.I. and also talks about the risks and the downside. I think he’s been the clearest communicator on it.
But you’re right, it’s a bipartisan issue. It is not one that is left or right.
In your framework for A.I. regulation, you have a somewhat different approach to data centers. You seem to see them as an opportunity. An opportunity for what?
They could be an opportunity. Again, you need the regulation first. It’s not: Oh, yeah, this will work in the future.
And given the political power of these companies, I would be very skeptical of their doing it unless we pass regulation with teeth.
But the idea is that our electric grid is outdated and badly in need of upgrades throughout the country — including here in New York.
It also slows down the renewable energy transition, because if you want to have solar on homes, you need a grid that is more responsive to generation happening in a distributed manner, and it’s not right now.
We’ve tried to upgrade the grid, and we need funds to do it. The only options on the table are that the government pays for it, which is taxpayers — you and I — or that it gets added to our utility bills, which is ratepayers — again, you and I.
Here comes an industry with, for all intents and purposes, unlimited private capital, that is really willing to pay for time. They are desperate for speed in building these out.
What I’m saying is you can set the incentives such that if you want to build a data center, you’re doing X percentage renewable — and it should be a very high percentage. And you will pay not just for the connection to the grid and all the infrastructure that’s needed for that, but also, on top of that, a fee to make the grid more resilient and help with upgrades elsewhere, so that you can truly make the grid more green and more reliable.
Well, then we’ll move you to the front of the interconnection queue. And by doing that, we’ll push your competitors to the back of the queue, and you set up an incentive to actually build things in a way that benefits us.
Is it possible to do, given the way our build-outs and infrastructure really work?
The reason I’ve developed some cynicism here is I remember being promised the smart grid of the future in the 2009 American Recovery and Reinvestment Act.
Yes.
And we didn’t quite get that.
No.
I don’t think anybody said at the end of that: Our grid is now smart.
Then we passed the Inflation Reduction Act and the bipartisan infrastructure bill, which, between the two of them, had a lot of provisions about energy generation. Those, among other things, were meant to work on the grid.
I’m not saying there were no upgrades made to the grid anywhere, but I am saying that I keep getting promised gigantic grid overhauls.
Yes.
And then being told a couple of years later —
Whoops.
Somehow our grid is still this archaic mess, where the biggest problem for getting new green energy online is that we can’t connect it.
Your cynicism is warranted. One hundred percent.
Thank you. [Laughs.]
I dare say you wrote a whole book on ways that we could make that easier to do. But maybe the difference here is that you have private capital coming up to do it, and the whole proposal is about being precise about ways we can expedite — and, by expediting, shifting the ones that are dirty and not paying their way to the back of the line.
As I understand the theory underneath the data center approach, it’s really that if all this money is going to flood into A.I., and A.I. is going to be built, at least in part, on the collective commons of the entire culture that came before it, then we should benefit.
It’s not just that Sam Altman created some magic algorithm. Sam Altman, OpenAI, Anthropic, Grok and so on inhaled the entire internet, ate up my books and the books of everybody else around, and trained these systems on them.
You have an idea in there that I think tracks this theory more closely than other things I’ve seen, which is an A.I. dividend. Talk me through that.
The A.I. dividend starts from thinking about how we can give Americans a real stake in the A.I. economy.
It starts with humility. We don’t know exactly how it’s going to go. We don’t know how disruptive it’s going to be. But right now is the time to plan for the potential outcomes that could come.
There’s always been this conversation. In my economics classes at I.L.R., the line was that every technological revolution has created more jobs than it has destroyed. Arguable, maybe.
But this is the first time someone is building a technology and stating that the goal is to replace all human labor: It is to be better than humans at everything.
And the metric by which we understand how good the technology is getting is, functionally, how well it is capable of mimicking different forms of human labor and then exceeding them.
Exactly right.
You are creating a replacement-for-human-labor machine.
Exactly. It’s the first time that it has been tried, and it doesn’t mean it will succeed, but it certainly means the government needs to take it seriously.
So the idea of the A.I. dividend is: What if we end up in that world where all human labor is replaced — or just a significant portion of it is displaced? How do you have a society that is actually functioning then?
You have to start talking about a universal basic income. The idea is to make sure that we are setting up the structures now for Americans to be protected if we end up in that future.
I have a lot of ideas about how we can prevent that future or change it, but the A.I. dividend is almost that insurance policy.
You could fund it via boring things that have been talked about, like a wealth tax. You could fund it via a token tax — putting a tax on the usage of A.I., which could be limited to commercial uses where you’re replacing human labor, or not.
Right now, the tax code rewards investing in capital over hiring labor. And that’s a fine policy, as long as investment in capital always leads to more jobs, which has been the economic theory for hundreds of years. But maybe A.I. is shifting that. And if it’s shifting that, we need to shift our tax policy to be taxing A.I. and discounting hiring humans, and a token tax starts to get at that.
But then the other funding mechanism that I talk about for the A.I. dividend is actually taking warrants in these companies. Large, out-of-the-money warrants, where you say that if the value of the A.I. companies were to go up an enormous amount, then the government would have the right to buy shares at a set price.
They basically only pay off if one or more of the companies are wildly successful — basically, if they are replacing all human labor. And if you institute that now, venture capitalists celebrate it and say you’re participating in the upside. If you try to implement it after one of them is successful, then you’re seizing the means of production and seizing wealth.
So my idea is you go down all of these paths. You start to find ways to have the revenue to actually fund universal basic income or investments in job retraining or just a broader safety net. But do it in ways that automatically scale and adjust and kick in at the speed of A.I.
Here’s a concern I’ve always had about this set of policies or this set of answers to the problem of A.I. and job displacement.
I’ve been very, very near the universal basic income debate a long time. My wife, Annie Lowrey, wrote a book on universal basic income called “Give People Money.” I used to work closely with Dylan Matthews, who did a lot of writing on universal basic income.
The trick of universal basic income, to me — which maybe you support on its own merits, which is fine — is that under any plausible scenario of A.I. job displacement, it is happening to some people and not all people.
I see you looking skeptical, but I don’t see a world in which one day we wake up, and everybody’s job is gone. It’s going to start with some people’s job.
It will start with some people’s job.
So if I thought it was going to be everybody’s job all at once, I wouldn’t worry about it, because then we would just figure out a policy to compensate everyone.
But imagine you’re a teamster. You drive a truck. You’re making $80,000 or $120,000 a year, and the autonomous truck companies put you and your fellow teamsters out of work.
And: Don’t worry — we’ve actually passed universal basic income.
No, that’s totally insufficient.
And you’re now getting $37,000 from your universal basic income.
Yes. One hundred percent.
And I’m getting $37,000 from the universal basic income.
Yes.
And I’m still here in my podcasting studio. You got screwed, I got a check.
What worries me the most is I don’t think we’re going to a world of full automation, but even if you believed we were, it’s a transition. And some people are going to really lose out, and other people are going to be unaffected or gain.
I don’t hear policy ideas that seem to know what to do with the people who are losing out along the way, the people who are actually getting displaced.
Not the world where everybody’s displaced, but the world where, if you graduate with a marketing degree, you are three times more likely to be unemployed than you were before. Or where coders are suddenly seeing a contraction in demand for their services, but some coders are making a ton of money.
How do you think about the differentials here?
Universal basic income by itself is insufficient.
And I would love to understand why you think we’re not headed to a world of full automation, because it’s tough for me to see where that stops once we start on it. But we can come back to that.
There will be a period of transition either way. I don’t think it will be all at once. The idea is not just: Oh yeah, we’re all going to have this basic income. Because you’re right, people will be screwed by that.
The idea is to do a number of things simultaneously — which include changing the tax code so that we’re actually charging for the use of A.I. and discounting the use of labor. And that’s a way to protect jobs and slow down the transition itself.
It’s investments, not just in universal basic income, but in job retraining programs and in structures that help people go into new careers.
Now, granted, they have a really bad track record.
This is my concern.
A really bad track record, but it doesn’t mean you shouldn’t still be investing in community colleges and finding ways to improve it as much as possible.
But you’re right. To say that we’re just going to give a universal basic income is not enough.
We have to think about other ways of adjusting that transition. That could include cases where people have a permit or training or a license that takes a number of years to acquire: Maybe you still require that license for the transition, for five or 10 years, so that people can turn that training into equity. That’s another way they have a stake in the A.I. economy.
We’re going to need a lot of policy solutions. That’s why the framework I put out has 43 different ideas in it.
Let’s get very specific on this. And I want to come back to the question of full automation.
Yes.
New York City is facing a near-term question here. Waymo, the autonomous vehicle company, has had permits to do the mapping and testing needed to eventually roll out in New York City, the way it has rolled out in San Francisco and Phoenix and other places. And that set of permits has expired.
Mayor Mamdani has been, I would say, very noncommittal about whether or not he wants to extend them. He said:
If a company like Waymo finds itself in New York City, what they will also find is a city government that is committed to delivering for the workers who keep the city running. Those workers also include our taxi drivers.
So here you have this very near-term question. I mean, Waymo is a technological advance. They are nice to ride in. They are safer, from all the data we have. They also will, if you roll them out en masse in the coming years, displace taxi drivers, Uber drivers, Lyft drivers. How do you balance that?
It’s a tough and ongoing question that the speed of the transition only makes worse.
There are ways. Again, maybe you require medallions for Waymos for a set amount of time, and that’s what enables some bit of transition. But then you’re only protecting the medallion owners and not the drivers. That’s maybe a piece of what that transition looks like, especially for those who have gone into a huge amount of debt to buy that medallion.
You think about job retraining and other places that can go in. You think about a broader safety net, but we don’t have a full policy solution for any sort of disruption that happens this quickly. It just hasn’t been developed. And we need people in government who are willing to take that problem seriously and look for solutions that aren’t just stop or go, because this technology is coming.
What’s your version of that solution for Waymo? Because Waymo is interesting to me — or autonomous vehicles. You can think of many different companies trying to do this.
In the public conversation around generative A.I., it is sometimes hard to see what the gains are — at least in the way people talk about it.
Yet driverless cars really do have gains. A world of driverless cars is safer. There are a lot of people who have mobility issues right now, or discrimination issues in getting picked up, all kinds of things, who could really be helped.
Autonomous cars are just a fascinating technology. You’re not going to have people falling asleep and then hitting somebody on the road.
Slowing them down has a cost, not just in the convenience people might experience, but also in safety and, potentially, in lives saved. And speeding them up has a cost in displacement.
You said we need politicians willing to take this seriously. You’re a politician, you’re looking to take this seriously. What do you do?
Well, I’ve laid out a few different options and things that we can do together — the medallions being one.
But should Waymo keep going? That’s the answer? That you’ll charge Waymo for medallions and that’s the money that goes into the coffer? Who gets that money?
I think you can specifically be focused on job retraining and on people who are displaced, and you can try to share the benefits in that way. That is a portion of the answer we have to get to.
But the real question is: Should we be investing in Waymos or in public transit? We have a great system to move people around, and we actually need an investment in improving that.
I took a Waymo for the first time in L.A. It was a light rain by New York City standards but, I think, a thunderstorm by L.A. standards. And I got in the Waymo, and it went 20 feet, pulled over to the side of the road and just said: Dialing support. It didn’t say what or why it was calling. I found out later that almost every Waymo in the city had done the same thing at the same time, because they couldn’t handle the rain.
And so support timed out. I was sitting there for 12 minutes — the first Waymo I ever rode. I had dialed an Uber or Lyft or something when finally support came through, and the person was like: Oh, yeah, it seems like you’re stuck — I’ll drive you out of there.
So I have questions about how they function in the rain in New York City. And when the backup is human drivers, it seems like another form of outsourcing, as well.
So yes, in the long term, theoretically, will autonomous vehicles be safer than humans? In most cases, yes. But to say that we are definitely there right now ——
Oh, I wouldn’t say we’re necessarily there right now. It’s only the conditions in which they’re willing to do them, which are quite limited.
There you go.
Like, you can’t take a Waymo from San Francisco to Phoenix. You can only take one inside San Francisco or Phoenix.
So all of that is to say, this hypothetical of: They’re ready to go and be safer right now — is not right.
But I think they’re safer in the places they drive. And the reason I’m pushing on this is not because I’m pro-Waymo or anti-Waymo. It’s that there is a question that public officials are facing right now about how quickly to move forward into that world.
Zohran Mamdani could extend the permits and accelerate Waymo’s coming to New York City, or he could drag his feet and keep it out of New York City. And then there are some ideas in the middle, where maybe you could have Waymo paying high prices. But even to the extent that you’re doing that, what you’re doing is pulling Waymo in.
I think people sometimes don’t quite want to face up to the fact that there is a yes or no question on some of these issues. And in the long run: Do you want to protect the jobs of taxi drivers, or do you want to have autonomous vehicles operating inside of your city? That is kind of a yes or no question.
As Keynes says: In the long run, we’re all dead.
There’s a question of speed, not yes or no. And I think most people here are, on a scale from zero to 100, somewhere between 40 and 60. And we’re being described as yes or no.
I think it’s not ready right now for the environment of New York City. It will be ready sometime in the future. And like with a lot of A.I., we need to be thoughtful on that transition, on how it benefits people and how it hurts them.
I think it is almost easier to imagine ways of handling the financial consequences of A.I. for people, even though I don’t actually think we’ve figured that out, than the consequences for their dignity, for their purpose.
People train for jobs. That job is part of their identity. And then all of a sudden, it’s getting taken from them, and you’re going to say: Hey, taxi worker, over here at the community college, you can retrain to be a home health care aide?
There’s something here that we’re going to have to balance — the economic efficiencies, the pushes forward — with the basic deal we offer people in this country and in this economy. Which is that you study for something, you learn how to do a job, you apprentice, and we value you for doing that. We’re supposed to treat that as having value.
I feel like we don’t talk about this dignity dimension enough, so I’m curious how you think about it.
For so long, humans have been defined by their jobs. That has become a piece of their dignity: In this worldview, you have purpose, you have value, because of the thing that you do. That has been ingrained in people for a while.
If we keep that mind-set, then universal basic income is an extremely disappointing answer to it. And I think, for lots of reasons, it’s not the full solution.
The world that is painted by the A.I. optimists is that we’re going to get to this post-working era, where people no longer derive their purpose from work. I’m skeptical.
We’ll be like the British gentry.
Yes. I’m skeptical.
But you believe in full automation, so then you think we’re going to dystopia?
On our current path, yes. But I think we have the chance to change it.
When you throw the ball down the field mentally, if you’re skeptical, what is the good outcome here? What is the good outcome of: We have automated away a very large percentage of the economy’s jobs — which you seem to think is very possible — and yet what we have is something better than where we’ve been or where we are?
It would have to be at the point where it’s not just that your basic material needs are met, but the standard of living is higher than it is now, where you can go about your day and be in a better place than you are right now.
This isn’t a perfect analogy — A.I. is different in all kinds of ways — but if you look back 100 years ago, the average American worked 60 hours a week and had a much lower standard of living. Now an American working 40 hours a week has a higher one.
We could get to one where we work 20 hours or 10 hours and have a higher one yet. But we were able to make that transition because workers had power, because Americans had political power — because we were able to shape that technology to work for us, either directly through legislation or indirectly by organizing unions in the workplace.
If this transition happens too quickly and we lose that political power, it doesn’t just happen.
So I want to talk about something we are already seeing the effects of — and you address it very early in your plan — which is kids. One of my theories of legislating, having covered a lot of this, is that sometimes a crucial thing in building legislative capacity is to just find the places where there’s enough consensus to legislate a bit, so that people learn about the issue and learn how to legislate on it.
There are all kinds of experiments consenting adults can run on themselves. I am pretty worried about the situation with A.I.s and kids. We really don’t know what it’s going to mean for kids to have relationships with A.I.s and to grow up where they’ve got A.I. friends and so on. What is your approach to kids and generative A.I.?
I agree with you. I think kids in some ways need more protection, and we don’t know a lot of the impact that A.I. will have.
That doesn’t mean we don’t look at places where it can benefit kids. I can imagine a world where having a personalized tutor at exactly your level in each subject and with the ability to communicate with you in exactly the way you like to learn, as a supplement to what you’re getting from teachers in the classroom and your parents, is a helpful thing.
But teachers and parents need a view into all of the interactions, and we need strong data protection.
And I think, broadly, a lot of these products — whether or not you think some teenagers should be allowed on them — need to be thoughtful about the mental health impacts.
This is a really scary period. We’ve seen the big stories about chatbots, but then we’ve also seen ChatGPT integrated into teddy bears and things that just feel really unnecessary.
So what’s in your plan on this? What do you actually want to do?
So: age verification for certain aspects of these interactions; mental health checks; updating pedagogy; making sure that teachers and parents have a view into any interaction a kid has with A.I.; and broad protections on the training use of kids’ data and on data privacy, as well. And yes, we need to prepare kids for the jobs of the future.
I don’t think you should shut off access to A.I. People should be exposed to these tools as they move through high school and college — but we should be really thoughtful about what those interactions are.
When you say updating pedagogy, how do you want to update it?
So you can still assign essays, but if you do a take-home essay, people are just putting it into ChatGPT. Everyone knows this. I’ve done a few events where high school students come up to Albany, and when the teacher leaves the room, I ask: How many of you have used ChatGPT to write an essay? And every hand goes up.
So should we be requiring essays written by hand? Should we require them to be written in Google Docs or a program like it, so you can actually watch keystrokes being entered? It’s just updating for the tools that are out there and making sure the old way of teaching is still teaching.
I’m hiring for something right now, and it has really disoriented me that cover letters are now completely useless.
I’ve been involved in hiring for hundreds of positions now, given my time at Vox, and cover letters were always quite important to me as a way of sussing out somebody whose qualifications were maybe less obvious for the role but in whose writing you could see an unusual mind at work.
And now, I’m not saying that’s completely impossible — you can still write a great cover letter — but it is getting harder and harder to know what you’re looking at. Are you looking at somebody who’s a great mind at work? Or are you looking at somebody who’s cyborging it with an A.I. system? And maybe that’s fine, because you know that’s the world, and somebody who’s very facile at using them is actually showing they have a skill that others don’t.
But on the other hand, I actually want to know how the person thinks — not how good they are at prompting. To completely knock out our ability to evaluate somebody’s writing skills ——
Can I ask — not any of your current employees, obviously, but people you’ve interviewed, have you noticed a loss of skill in writing?
I haven’t noticed it yet, but I would say I have not hired since A.I. got good enough.
I’ve definitely noticed it.
I think people underestimate this because they’re used to the quirks of poorly prompted ChatGPT writing, and it is incredibly, incredibly easy to spot.
Yes.
But if you know how to use the systems, and you’re better at it, and you’re using more advanced forms of ChatGPT or Claude or Gemini, you can’t tell.
But when you ask people to write things, I think there’s been a few years now where that skill is not being taught. And you have pointed out that writing is how many people strengthen their ideas. The work that goes into that is part of the work of thinking.
And I have noticed — again, not speaking about anyone I’ve hired — in applications and elsewhere, that there has been a decrease in people’s ability to write well, express their thoughts clearly and do the editing work.
One thing in your A.I. framework that I thought was interesting was that you want to expand the government’s capacity on A.I. What does that mean?
It means making sure that we have the expertise within government to understand this technology and help contribute in a positive way to its development.
And this has been horribly underinvested in. And so we’re not taking this technology as seriously as we need to.
This is the first major technology that has developed basically without any government role in it. Al Gore didn’t invent the internet, but the Defense Advanced Research Projects Agency did develop the ARPANET that became the internet.
Even the space race was obviously primarily government led. A.I. was developed almost completely in the private sector. There were some research grants, but it was done outside the structures of government.
We need to be hiring that expertise into government if we are going to help govern this and lead to good outcomes here.
Can we do that with the way government hires? I’ve run into this question before, talking to people inside the federal government, inside state governments. Government hiring, for very good reasons, has structured pay scales and worries about horizontal equity and a million things that make sense when you’re very worried about corruption and patronage and favoritism.
The market for top A.I. talent is insane. What Meta will pay you, what Google will pay you, what OpenAI and Anthropic will pay you — what they can pay you.
I don’t think any of them are going to pay me.
Yes. Not you specifically, but someone. There’s a question of not cutting funding for the parts of government trying to do this. But also: How do you just make sure the government has the staffing talent to keep up in a market this hot?
We absolutely should make it easier for government to hire experts and to pay more in order to compete in that way. We’ve found ways to let states pay up for the hires they care about. In most states, the highest-paid public employee is the football coach. I’d rather it be a real A.I. expert who’s working to make this future actually work for Americans.
I want to get you to expand on this a bit. We’re hearing a lot of reports about Anthropic’s Mythos, which I have not had access to, so I don’t know how good it really is at hacking every computer system on the planet. But they are saying it is very capable of that.
And I think, really quickly, if we’re going to have A.I. companies creating what are functionally cyber-superweapons, the ability of the government to actually oversee these systems becomes pretty paramount very quickly.
I think Anthropic is an interesting place and is posing a lot of governance challenges in opposite directions at the same time.
On the one hand, you can’t just have a private company creating cyber-superweapons and hope for the best.
On the other hand, we just watched, with the Anthropic and Department of Defense-Department of War controversy, when you’re dealing with the Trump administration, do you really want this quasi-nationalization of labs?
I think we’re seeing simultaneously that it is uncomfortable having these systems as private as they are. It is uncomfortable recognizing that if the government gets its hands on them, they could be used for whatever a particular government’s purposes might be. And so it has left a lot of us, I think, who care about regulation and care about governance in an awkward spot.
It is deeply uncomfortable because we are talking about such extreme power, and it’s a question of where that power lies. If you take as a given that a superintelligence will be developed, and at this point I don’t see any reason there won’t be one, then of course it’s an uncomfortable question about where that sits. Because you’re talking about something that is smarter than any human ever. That is a real power question.
This is a real question that needs to be settled by policy, that needs to be settled by law. If you’re just leaving it up to the whims of an executive branch where there are no restrictions on them, or private companies where there’s no law, both of those feel deeply uncomfortable.
This is why we need Congress to step up to the plate and actually decide how this division should happen.
In the answers you’ve given me, there are two things that have become clear in the background of the way you think about this. One, you seem to believe we’re going to go to full automation. Not necessarily tomorrow, but you reacted with a lot of skepticism when I said I didn’t think we would get there.
I think there’s a significant likelihood, and we should take it seriously.
And that superintelligence is also a real possibility, and we’re not necessarily going to stop at the human level or even a bit beyond your average worker.
Yes.
We could soon be dealing with something that I think a lot of people would hear and say: Why not stop it? Why do we want a superintelligence, the machine god, that will put us all out of work, that we have no guarantee we will know how to control?
If this is your set of views, why move forward as opposed to trying to throw your body on the train tracks?
Well, I don’t think, right now, that metaphorically throwing your body on the train tracks will make much of a difference.
I do think we should slow down the development until we’ve made a lot more progress on the alignment problem. I do think we’re getting into really risky territory.
What you need — and one of the sections of the plan is about this — is diplomacy. It’s about international action.
We should be engaging with other countries. We should be engaging with China. We should be building universal verification systems for what is happening both at the chip level, where you can look at where the chips are and how they’re being used, and in the models themselves. We should be trying to lower the temperature on there being an arms race.
So yes, I am worried. If I had a magic wand, I would slow things down until we had better guarantees about what we were stepping into and where we were going.
So now I want to flip the valence of this conversation. We’ve been talking, as I think most of the A.I. conversation does, about what I would call A.I. harm reduction: If this technology is moving forward, how do we make sure it causes as little harm as possible?
But I think for people to want this technology to move forward — for it to actually even be conceptually a good idea for this technology to move forward — I think the case has to be better than that.
We were talking earlier about, in many ways, the absence of a positive vision for A.I. These companies have to make back a lot of investment in the coming years. And as best I can tell, the business model they’ve come up with is replacing white collar workers and, to some degree, subscription fees for people asking ChatGPT to look at a mole.
What I have been wondering about for some time is all these promises of A.I. for drug development, A.I. for energy innovations. What would it look like to have a public agenda that actually tried to make that real? That actually tried to make it such that there was more A.I. development that went in those directions and that we got more out of it?
I’ve heard you talk before about your interest in A.I. drug development. I want to hear your thinking, even if it’s not a full policy agenda, on what it would mean to have a positive agenda for A.I., where the public sector is shaping this toward social good as opposed to simply private profit.
We would build out an initiative that we’ve done in New York called Empire AI, in which the state government bought a large cluster of GPUs and committed to continuing to build that out, gave our public universities access to it so they could run experiments at a much cheaper rate and made a public investment on a research front to go after lots of things, including A.I. alignment and A.I. safety. But we could be directing grants to that specific research, and we could be building the infrastructure in the government to make that cheaper.
I absolutely believe we should be trying to use A.I. for good, and New York was the first state to do this. Others are following, but the federal government has the resources to really do a deep investment here.
And yes, for a while, A.I.’s benefits have been riding on the story of AlphaFold solving protein folding, which was an incredible advance and has sped up drug discovery.
But there could be more like that out there. There are definitely more like that out there. [Chuckles.]
If there’s not, then we’ve been sold a bill of goods here.
And I think the government should be making use of this technology for good and directing research in that way.
That doesn’t, by the way, solve alignment problems. It could be that you want it to do really good things — and then actually in pursuing that, it goes off in a whole other, different direction. But yes, that is a good use of public investment.
Let’s focus in on drug development for a minute, because I think it’s in some ways the clearest case. GLP-1s, for instance, are a revolution right now. But they’re actually quite an old drug — been around for decades. And all of a sudden, we have all of these new candidates either to develop or to test.
Let’s say you imagine what certainly seems possible, which is that in the next three to five years, A.I. systems begin generating molecules worthy of investigation at a much faster pace — either new molecules or existing molecules, where the A.I. systems scour the data and realize they might have other uses.
But if you know anything about drug development, there are choke points all across that process. There’s what the F.D.A. can do. There’s getting everything from rats to monkeys to humans for trials. And a world in which we suddenly had many more good candidates would be a world where those choke points became the binding constraint.
This gets a little bit more toward the way you were thinking, I think, about the grid. Which is if we imagine A.I. will create all this pressure for investment, and it will create all this demand for something, how do you use that pressure to open up parts of the system that have been clogged, that have fallen somewhat into disrepair?
How would you make it possible for your economy to actually benefit from A.I. — which requires operating not just in the world of probabilistic predictions but actually in the world of things, of steel, of cement, of human beings who are willing to sign up for a drug trial?
Well, that’s why there’s more to my platform than just the A.I. piece. We’ve got to —
I’m giving you a good opportunity to talk about it here!
We have to cut red tape and cut regulations.
One of the ways that I have already used A.I. is that I put every statute in New York State through a large language model and asked it to identify laws that are out of date, that require paper when we could do something digitally — a bunch of ways of checking for requirements that are just getting in the way of getting things done, what Jen Pahlka might call the policy “cruft” that develops over time. I’ve now put together a 60-page bill for this session that just pulls out a bunch of these old requirements that are getting in the way of doing things.
We can do a similar thing with regulations, not just statutes: Where have we developed practices that are now in the way of moving forward in drug discovery?
We need to change policies that stop government from getting things done. Sometimes that means using technology to do the thing more efficiently. Sometimes it’s not about the technology at all — it’s about finding ways to identify choke points and alleviate them.
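To make the mechanics of that statute-scanning work concrete, here is a minimal sketch of the kind of pipeline Bores describes, assuming the statute sections have already been extracted into text. The `call_model` hook and the flag criteria are placeholders invented for illustration, not his office’s actual tooling:

```python
import json
from typing import Callable

# Criteria here are illustrative, drawn loosely from the kinds of "cruft"
# described above: paper-only requirements, obsolete technology, dead rules.
FLAG_PROMPT = (
    "You are reviewing one section of New York State law. Flag it if it "
    "(1) requires paper where digital filing would work, (2) references "
    "obsolete technology, or (3) imposes a requirement with no apparent "
    'current purpose. Reply as JSON: {"flagged": bool, "reason": str}'
)

def screen_statutes(sections: list[dict],
                    call_model: Callable[[str, str], str]) -> list[dict]:
    """Run each statute section through a model and collect the flags.

    `call_model(instructions, statute_text)` stands in for whatever
    chat-model API is actually used; it returns the model's reply text.
    """
    flagged = []
    for section in sections:  # e.g., {"citation": "...", "text": "..."}
        reply = call_model(FLAG_PROMPT, section["text"])
        try:
            verdict = json.loads(reply)
        except json.JSONDecodeError:
            continue  # real tooling would retry or log unparseable replies
        if verdict.get("flagged"):
            flagged.append({"citation": section["citation"],
                            "reason": verdict.get("reason", "")})
    return flagged  # a human review list, not an automatic repeal
```

On this sketch’s assumptions, the model only nominates candidates; the 60-page bill would then be the product of human review of that list.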
As we speak, it’s tax week. A lot of us who waited until the end paid our taxes this week. And it was already possible for the I.R.S. to prefill a tax form for most Americans who have pretty straightforward taxes. Lobbying has made that very hard, and the Trump administration has made that harder.
But as a technical matter, it would be fundamentally trivial for there to be, through the I.R.S., a tax preparation A.I. system that every American had access to, where they uploaded their forms, the data was cross-checked with I.R.S. records, and it did their taxes for them in seconds, saving people a lot of time and energy.
The capacity exists to actually give every American an A.I. accountant under the auspices of the I.R.S. If we don’t do it, it’s not because we can’t.
There’s a real question of whether or not the lobbyists would allow people to do that. But the relationship between people and the state could really be transformed if government chose to transform it.
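As a rough illustration of the cross-checking step in that hypothetical I.R.S. system: the core logic is just reconciling figures parsed from a filer’s uploaded forms against what the agency already has on file. Everything here, the field names included, is invented for illustration:

```python
# A toy version of the reconciliation at the heart of the system described
# above. Field names and the flat data model are invented for illustration.

def cross_check(uploaded: dict[str, float],
                irs_on_file: dict[str, float],
                tolerance: float = 0.01) -> list[str]:
    """Return fields where the filer's upload disagrees with I.R.S. records."""
    mismatches = []
    for field, irs_value in irs_on_file.items():
        if abs(uploaded.get(field, 0.0) - irs_value) > tolerance:
            mismatches.append(field)
    return mismatches

# Example: the filer reports wages correctly but omits interest income
# the I.R.S. already knows about from a 1099 form.
print(cross_check({"w2_wages": 85_000.0},
                  {"w2_wages": 85_000.0, "1099_int": 240.0}))
# -> ["1099_int"]
```

The design point, as the next exchange suggests, is that the hard part is not this arithmetic but the accuracy of the records on the government side.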
One hundred percent. And I think we need to make that a priority.
So I have a bill I’ve been pushing for a few years to make it easier for different agencies within New York City to share data that you give to them for the purpose of signing you up for benefits. So that if they sign you up for one benefit, you can automatically be signed up for another one.
That, right now, is restricted, and we should change that. Obviously, New York City invested about $100 million into building a portal, but actually what we need are changes on the back end to laws that make it easier to share that data.
I’ll go a step further. I was speaking with the tax department in New York State and advocating, saying: OK, Free File makes it easy for you; you don’t need another piece of software. But why can’t we just do it for New Yorkers? We have a lot of the same information already.
The answer I got back is that so much of the information we have is actually wrong. They had this need to just improve the data internally first.
And I said: OK, why don’t you just find the records that are wrong, or build systems to fix them? And they were like: We’re working on that, but give us five years. That’s where we want to get so that we can automate it.
So maybe it does come back around to data integration and just having the data correct. It might not be the technical aspects of how to do your taxes that are the limitation anymore — but whether the underlying data that we’re feeding it is accurate enough.
I guess the principle I’m trying to get at here is this: To the extent one believes we’re not going to pause — that we are going to move forward at some pace here, which seems likely — I think actually benefiting from A.I. as a public is a harder challenge than people have given it credit for.
I don’t think just because the systems get better, there is necessarily a public benefit. There could be individual benefits, individual harms.
But if we want drug discovery to accelerate, we need to open up the systems that would allow drug discovery to move faster. If we want the relationship between people and the state to get cleaner, we need to actually create the conditions for it and overhaul very, very difficult, archaic, multilayered and error-filled government databases.
And it’s interesting, because I do think right now, throughout the private sector, you see companies with greater and lesser degrees of success trying to figure out: What does it mean to rebuild ourselves to use A.I.? Everything from how teams are structured to how our data works.
The government, because it doesn’t get competed out of business by new governments, is working on much older systems, and it’s very, very hard to build them. But I think for A.I. to be worth it, you’re going to need a lot more of this kind of investment at a much higher level of ambition.
Right now, we don’t even seem to be able to legislate on the harms very effectively, so I’m not confused as to why we are focusing there. But I do worry a bit about it, because there’s a world where we’ve done some reasonable harm reduction legislation and done very little to benefit from it.
That’s a world where we’ve kind of pushed A.I. toward being a worker replacement machine — as opposed to having a public vision for what we want from it.
I agree 100 percent.
This is the hard work of governing. I don’t think these are the easy places where we can build the legislative muscle. I would hope we can — I think the easy place is probably around kids — but these are the places where we have to work together to change that. And part of it will be on A.I. and setting up incentives, and part of it will be building the infrastructure that allows that to happen.
We’re talking a lot about pretty high concepts here. One of my first bills in the state legislature was to help the state get on cloud computing, because it mostly uses mainframes. And the speaker of the assembly ——
Mostly used mainframes — in 2023. [Chuckles.]
Yes. The speaker of the assembly codes in Fortran, and I always joke that his retirement plan is going to be fixing all the state systems because they still run on Fortran.
There’s just work that needs to be done on modernizing to allow us to take advantage of the benefits. And that will require both direct investments and a lot of legislating to encourage that direction.
One of the reasons I wanted to have this conversation with you is that you’ve ended up, whether you wanted to or not, a bit of a test case for how all this is going to work.
So you’re running for Congress, and there is, as I’ve mentioned before, this super PAC that’s funded by co-founders of Palantir, OpenAI and Andreessen Horowitz. They’ve spent $1 million opposing your campaign so far.
Two-and-a-half million dollars, so far.
Oh, $2.5 million. And suggested they might spend up to $10 million.
At the same time, I’ve looked at some of their statements. Greg Brockman, who’s one of the OpenAI founders and is a major donor of this PAC, has said:
Being pro-A.I. does not mean being anti-regulation. It means being thoughtful — crafting policies that secure A.I.’s transformative benefits while mitigating risks and preserving flexibility as the technology continues to evolve rapidly.
So what’s their problem with you?
If they really truly believed in having one national framework that regulates A.I. and balances the benefits and risks, they’d be supporting me. I think there’s a difference between what they say for marketing purposes and what they actually believe. And their actions betray that.
Last week, OpenAI released a policy document that mirrors a lot of my policies. The emphases are different ——
I wouldn’t say that. I felt ——
Parts of it. Parts of it.
Yes. They’re like: We believe in a 32-hour workweek.
Yes. They did say they wanted third-party audits, but only sometime in the future. I think we’re already there. And there was much more of an emphasis on society dealing with the problems after the fact, as opposed to restrictions on the developers. I’m not saying it’s a match, but they put forward some policies there.
They also put out, later in the week, policies specifically around kids that included safe harbor provisions and testing, and encouraged red-teaming of models. When you red-team a model or red-team any software, you get people to try to intentionally break it, to do something it’s not supposed to do. And you might want to red-team it around producing child sexual abuse material to make sure that it can’t do that out in the world.
And right now, in every state in the country, red-teaming it and producing that material would be illegal. We have a no-tolerance policy on the production of the material. Now, obviously, no D.A. is going to go after you for that, but one of the things they talk about there is they want to extend safe harbor provisions so that you can actually encourage red-teaming.
This is my concern, and I’ve heard this from people on the Hill — people in the Senate. Elissa Slotkin said a version of this to me on the record: At the exact moment that A.I. is becoming so powerful that it would be irresponsible for Congress not to start constructing regulations, legislative structures, transparency, the A.I. industry now has so much money that, much as crypto did before, it’s able to create a kind of super PAC that has a Death Star–like capability.
Now, it’s weird, because Anthropic is one of the funders of another PAC that is sort of more pro-regulation and is supporting you, so you have players on both sides. But a world where A.I. will have this much money, and the political system is this permeable to money, is a world where, in order to regulate A.I., you’re going to have to sign up your own A.I. patron to support you.
And so I feel like there is some bigger question of political economy and power here that has ended up being a bit of a test case in this race, which is, I think, quite worrisome. I just think we could very, very quickly end up in a scenario where politicians are terrified of the issue.
And that’s the goal of Leading the Future. The goal, as they’ve stated, is to inflict so much pain in this race and to beat me up so badly that when the idea of A.I. regulation is proposed in the future, politicians run in the other direction. They have said publicly that they want to make an example out of me.
Think about what that means. Not: Oh, we have a different view — but: We want to make an example out of Alex Bores.
And they want to do that, not because I have ideas that are outside the mainstream. When I proposed my framework, I got praise from those on the left. Also the chief futurist of OpenAI retweeted it.
They’re coming after me because I successfully passed the bill. Frameworks — there are lots of frameworks. Those are cheap.
Who’s going to put political capital forward and get something actually done? And they tried to prevent any states from moving forward by putting this pre-emption language in legislation that failed.
So they instead got this executive order from Donald Trump to target states that want to regulate A.I. and try to exact punishment, where they would cut off funding and sue the states. And it targeted the RAISE Act along with a few other bills throughout the country.
So why are they coming after me? Because I might actually get a bill passed.
What in the RAISE Act are they actually fighting? Because as somebody who cares about A.I. regulation, and who thinks the act is a good start, I’d say what actually got enacted there is a pretty soft bill.
It is the strongest A.I. safety bill in the country, and I’m embarrassed by that fact. It should be much stronger.
When they come after it, when they’re trying to get it changed, what are they so upset about?
It’s that there’s any regulation whatsoever. That really is the challenge. That there is any regulation, that they have to play by any rules, is anathema to them.
And they don’t have to win forever. They only have to push this off for an election cycle or two. Given the speed with which A.I. is developing, the amount of political power, let alone capital, that they will be able to deploy in the future will be unbounded.
We already have elected officials who are terrified to take up this cause, despite how popular it is, because they see all the money on the other side, and they’re risk averse.
I’m running for Congress. I talk to every member of Congress I can. And I hear from them in quiet conversations: Yeah, we’re watching this race. We want to see if this is an issue that you can win on standing with people, or if the money just swamps everything.
And the lesson that will be learned by members of Congress if the super PAC wins is: Run the other way. Don’t actually touch this. Maybe you can give a speech on it, maybe you can go on a podcast about it. But don’t try to pass the bill, because they will end your career.
I think that’s a place to end. Always the final question: What are three books you’d recommend to the audience?
The first is my favorite book of all time — and I know you have thoughts on this book. It’s “A Theory of Justice” by John Rawls. I think it does the best job of setting up a broad framework of the rights of humans while also understanding when inequalities could be justified. And I think it’s the best place to start for political philosophy.
I know you’ve tried it a few times. I will point out that in the intro, he says: This is the third of the book that you have to read to get the basics of it, and here’s the half of the book you have to read to really deeply understand it, and the rest is for the academics. So I’d encourage you to give it another try.
The second one is “World Eaters” by Catherine Bracy, which is marketed as this deeply anti-venture-capitalist book but is actually written by a tech insider and takes a much more nuanced approach to the incentives that venture capital sets up — which are always growth, growth, growth, and don’t think about the social consequences.
I’ll add that V.C.s are always pushing for a company that will scale no matter what. I saw this happen to my wife, who’s a Y Combinator founder. She built a business that probably could have been fine on its own, but it had the venture investment, and it was scale or die. Given the negative externalities that have come from that, I think it’s a really timely look as we are building out A.I.
The last one is a little more whimsical, but it goes back to our conversation about the skill of writing. It’s “Bird by Bird” by Anne Lamott, which is just a delightful read and a good reminder for any procrastinators to break down your work and do it bird by bird; that’s where the title comes from. It is so well written that it teaches the art of writing by example as well as by instruction. And I’d encourage people, especially when our writing skills are being degraded, to be intentional in that practice and to read that book.
Alex Bores, thank you very much.
Thanks for having me.
You can listen to this conversation by following “The Ezra Klein Show” on the NYTimes app, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts. View a list of book recommendations from our guests here.
This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Lori Segal. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Isaac Jones. Our recording engineer is Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Cascione, Michelle Harris, Rollin Hu, Kristin Lin, Emma Kehlbeck, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Shannon Busta and Lauren Reddy. The director of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Brianna Johnson. Transcript editing by Sarah Murphy, Andrea Gutierrez and Marlaine Glicksman.