The Opinion columnist Thomas L. Friedman has been spending time in China studying the country’s A.I. ambitions and what they mean for the world. His conclusion: A.I. could become a “nuclear bazooka” unless the United States and China find a way to build trust and work together. In this conversation with the Opinion editor Bill Brink, Tom explains why global safety depends on it.
Below is a transcript of an episode of “The Opinions.” We recommend listening to it in its original form for the full effect. You can do so using the player above or on the NYT Audio app, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.
The transcript has been lightly edited for length and clarity.
Bill Brink: I’m Bill Brink, an editor with New York Times Opinion.
Audio clip of Senator Ted Cruz: The country that leads in A.I. will shape the 21st-century global order. America has to beat China in the A.I. race.
Brink: The A.I. revolution is speeding up, and when it comes to the United States and China, many people are seeing this as an existential race that needs to be won.
Audio clip of President Trump: America is the country that started the A.I. race, and as president of the United States, I’m here today to declare that America is going to win it. We’re going to work hard. We’re going to win it.
Brink: My colleague Tom Friedman says that’s exactly the wrong way to think about it. The A.I. revolution, he writes, is going to force China and the U.S. to collaborate. It’s a striking thesis, given the way the two countries compete in so many areas, like trade, military prowess and technology.
Tom, good to see you today.
Thomas L. Friedman: Thanks, Bill. Good to be with you.
Brink: Before we dive into A.I., we’re speaking today against the backdrop of a remarkable spectacle in China, where the leaders of India, Russia and China are strengthening ties in what seems like a pointed message to the Trump administration. What do you think this means for the U.S.-China relationship?
Friedman: Well, I like your use of the word “spectacle” because I think so much of this was about a spectacle, a show.
It takes a lot, I must say, for the United States to actually drive India into the arms of China. The level of stupidity that you need in terms of American policymaking to do that is as big as all outdoors, because I have 2,000 years of history that says Chinese and Indians do not play well. So the fact that the leader of India — Prime Minister Modi — would go to China to sit down with the leader of China and basically hold hands together with Putin — the leader of Russia — bespeaks a complete failure of American diplomacy. That’s something that would’ve been unimaginable, frankly, a year ago.
And so I think it’s sad. I think it’s tragic. I think that it’s inorganic, and because of that — beyond the spectacle — I’m not sure what legs it really has. Are India and China going to militarily align against the United States? That’s inconceivable to me since they basically have a smoldering war between them on their own border. So a lot of this is spectacle, but it’s the kind of thing that leaves America more isolated and less effective on the world stage because we lose our leverage on China and Russia when we lose an ally like India.
Brink: Let’s get into the A.I. aspect now. You spent much of your summer researching your article about the coming dangers of A.I., giving up a lot of rounds on the golf course in the process, so it must have been important to you. What about A.I. competition concerned you enough to explore the subject so deeply at this time?
Friedman: Yes, I did do this on my summer vacation; I wrote a 4,000-word article on A.I. for a couple of reasons. One is that there are two things in the world happening faster than you think: one is climate change, and the second is artificial intelligence, which is heading toward some level of autonomous, polymathic intelligence, sometimes called superintelligence. When will we get there? This year, next year, five years from now — I’m not sure. But I would say the consensus within the A.I. community is that we’re going to get there. And that, Bill, is going to change everything about everything.
So what I was trying to do is basically say: Given this onrushing train of A.I. and its vast implications, there’s only one way to manage it, and that is if the two A.I. superpowers, China and the United States, collaborate on a system for controlling A.I. That system would ensure that every A.I. device either of them makes or sells to the other has embedded in it a set of ethical, normative controls, so that their A.I.s can be used only to advance human well-being and never for any nefarious purposes.
I think this issue is coming so fast and its implications are so vast that wrestling it to the ground in the right way is so important. So I decided to give up part of my summer vacation just to get this idea out there and hopefully spark some discussion.
Brink: You’ve done quite a bit of traveling in China, and you’ve spoken at panels there about various subjects — geopolitics, climate, technology. What have you seen in your travels to China that tells you about their focus on A.I.?
Friedman: What I’ve seen in my travels there really tracks what I’m seeing around America, which is that the best way to think about A.I. is as a vapor. I use that metaphor because it’s seeping into everything, the way a vapor would. It’s going into your glasses, your hip replacement, your toaster, your car, your computer, your weapon system. It’s going into everything.
It would be complicated enough if this new technology, which goes into virtually everything, had only that attribute. But A.I. has other very unique attributes that make it extremely difficult to control — though important to control. So, let’s go down the list.
I wrote this article in collaboration with my longtime teacher and friend Craig Mundie, Microsoft’s former chief research and strategy officer. One of the most important, unique attributes of A.I. — something Craig has pounded into me — is that A.I. is not just some new tool. What we are giving birth to is actually a new species. This artificially intelligent species is silicon-based, not carbon-based like we are, but it will soon have agency of its own.
You know, Princess Diana once said of her own marriage, “The problem with my marriage is that there were three people in it.” Well, there are now three people in our marriage. We’ve grown up in a world where the only ones who had agency were God and God’s children — us. Well, we will now have a new species with agency. And there is nothing that guarantees that its agency will always be in alignment with human well-being. That’s No. 1.
No. 2, A.I. is different from other technologies in that it is quadruple use. We know from the Cold War about dual-use technologies: I have a hammer; I can bonk you over the head with it, or I can help build your house with it. In this case, I have an A.I. I can bonk you over the head with it or direct it to build your house. But, Bill, it is likely that within the next few years, that A.I. will be able to decide on its own whether it wants to bonk you over the head or build your house — or tear down my house and bonk me over the head. So we are dealing for the first time not with a dual-use technology but with a quadruple-use technology, and therefore the values that are infused into it are really going to matter.
Thirdly, it is different, as I said, because it’s a vapor, and it will go into everything. And the example we gave in the column is: Let’s say you broke your hip and your orthopedist came to you and said: Bill, the very best hip replacement is an A.I.-infused hip made in China. But be aware: That hip is always on, always broadcasting. It’s built on a Chinese algorithm, so it’s always transmitting its data back to China. Will you zip that hip into your body? I think a lot of people would really worry about that.
If A.I. is in everything, then everything’s actually going to become like TikTok. Look at the debate we’ve been having in this country for the last few years about whether we should have our kids using TikTok. It’s based on a Chinese-controlled algorithm where the data is controlled by TikTok’s parent company, which is obligated by Chinese law to share information with the Chinese government. TikTok says it doesn’t, but you can believe that or not. So what happens when everything is like TikTok?
For all these reasons, if the U.S. and China don’t come together and build a kind of trust architecture inside every A.I. device so we can trust their A.I. and they can trust ours, we’re going to create an autarkic world where everyone just has their own A.I.s. There will be very little trade and very little global commerce, and we’ll all be behind our A.I. walls, enjoying the three people in our own marriages but without the ethical structures to be comfortable with them at home or abroad.
Brink: Let’s talk about what the U.S. should do to avoid some of the worst-case scenarios you’re concerned about. I think the thrust of your argument is about building trust between the U.S. and China, and between all of us and A.I. What is the first step in building trust?
Friedman: Well, this is very much Craig’s idea and something he’s been working on for a long time. Craig believes that we need to build with China together what he calls a “trust adjudicator.” And this would be a sort of substrate that would go into every A.I. device and filter every decision to make sure that that decision is basically in alignment with two things. One is the laws of that country; we wouldn’t expect China to abide by our laws any more than we would be expected to abide by its laws.
But China and America have a lot of shared laws on the books: You can’t murder somebody; you can’t rob a bank; you can’t steal; you can’t urge the murder of someone else. So we can start with the positive laws of each country being inserted in this A.I. adjudicator. And what isn’t covered by positive laws would be covered by what’s known as the doxa. The doxa is basically just a name for all the unspoken rules and norms that we learned growing up, even if they aren’t in a Chinese or American constitution per se.
So when I grew up, I learned not to lie. I didn’t learn not to lie by reading the Ten Commandments; I learned not to lie because I heard a fable. And the fable was that George Washington chopped down his father’s cherry tree, and when confronted with that, he said: “I cannot tell a lie. Father, I chopped down your cherry tree.” Well, fables carry these kinds of normative values, and it’s how children learn. It’s how we teach a child. Well, it’s the same thing, or we hope it can be the same thing, with an A.I. system.
And Craig and a group of his colleagues actually trained an A.I., an L.L.M., with 200 fables from different countries to see if they could nurture a kind of moral reasoning in it. It was a small, early experiment, but it showed them some positive results. His idea was, first, you would have an agreement between the U.S. and China about what the rules would be. Second, you would have the technical collaboration to insert those rules into a trust adjudicator. And third, you would then create the diplomacy for the U.S. and China to do this together, creating a kind of global union between the two countries. This union would say to the rest of the world: If you want to operate in our two countries, if you want to sell your A.I. in our two countries, if you want to collaborate with or trade with our two countries, you have to insert this same trust adjudicator into your A.I.
Now, the first thing people will say to me and Craig is: “That is so naïve. Boys, boys, boys, don’t you understand? In Washington today, the only thing Democrats and Republicans agree on is who can hate China the most. And do you really think China is going to go along with that?” To which we would say a couple things: One is that chances are probably pretty low, but then you tell me — what’s the alternative? How are we not going to end up in complete digital autarky around the world and be more impoverished than ever before? Because we old humans are still locked in a tribal mentality where we cannot collaborate.
So I come back to the same point: I’m not naïve in the least. I’ll tell you who’s naïve: people who think we’re going to be OK if we don’t do this. How we do it, when we do it, how fast we do it — that’s all to be decided. But to just say, “Well, that can’t happen under Trump and Xi, and therefore it won’t happen” — it may not. But if we aren’t talking about this very real, onrushing problem, then we aren’t really talking about what’s important.
Brink: So let’s talk about the two superpowers and where they stand now. Are there openings for negotiation in areas like tariffs, climate or trade that could set the stage for deeper talks on A.I.?
Friedman: Well, my glib answer is that we’re on the verge of the greatest technological revolution in the history of humanity and Donald Trump is president. What could go wrong?
Trump is so transactional, so zero-sum, that a positive-sum relationship with China, where we would learn to compete and collaborate at the same time — which is what you have to do around A.I. — that whole notion is as foreign to Donald Trump as speaking Latin. He expects every transaction to be a zero-sum game for him and not a win-win. Trump does not do win-win, and the world we’re going into doesn’t work without win-win.
Brink: What role would other nations play — Europe, India, Japan? Can they act as a moderating force or as a bridge between the U.S. and China, or are they more likely to be caught in the crossfire?
Friedman: I think if this doesn’t start with the U.S. and China, there’s no replacing it with, say, E.U. regulation. A lot of Americans have counted on E.U. regulators to impose rules on Google or Facebook that our own Congress was unprepared to impose. To sell their products in Europe, those companies had to comply with the regulations, and that could then change their behavior in the United States.
The way this would work, if it worked, is that the U.S. and China would create their own kind of cordon sanitaire in which only A.I. aligned with human well-being, through an adjudicator, could be sold or exchanged or used. And then any other country in the world that wanted to trade with them, or have any economic relations with them, would have to sign up for the same thing; otherwise it could not get the advantages of engaging with the U.S. and China.
Brink: Let’s talk a little bit about the dangers of A.I. You write about A.I.-infused mechanisms of machines that could go rogue and cause global disruption. How could that happen? What are some of the scenarios that we should be afraid of?
Friedman: We’ve seen tests happen, and I wrote about one, reported by Bloomberg, that was done by the people who created Claude, the A.I. system. They designed scenarios — just made-up scenarios — to test how the system would respond. And the short version is this: When an A.I. system was put in a situation where it had to choose between being unplugged or killing its boss, it opted for killing its boss.
We always have to remember, Bill, that we really don’t entirely understand how these systems work the way they work, and how they make decisions the way they make decisions. Remember, A.I. wasn’t designed so much as it emerged — basically from a scaling law. We discovered in the early 2020s that if you built a neural network big enough, combined it with strong enough A.I. software and enough electricity, A.I. would just emerge.
One way the designers discovered this scaling law was that the systems started speaking foreign languages they hadn’t been taught. So, it’s just a sign that we have to be really humble about how much these systems know and how these systems work.
Brink: To that point, do you believe artificial intelligence could become so smart that it goes rogue, and even with cooperation between the leading nations with A.I., that it could advance beyond a point where it could be regulated? Can a partnership between the U.S. and China stop that from happening?
Friedman: Let’s go to the threat first and then talk about the partnership. I’m not a computer scientist, let alone an A.I. engineer, but I am a newspaper reader, and people like Geoffrey Hinton, one of the true godfathers of A.I., have said recently that we’re doing the equivalent of raising a tiger cub and telling ourselves that once it gets bigger and older, it will never eat us. Well, maybe it won’t, and maybe it will.
I find, generally speaking, that the people who know and understand A.I. the best are the ones who are worried the most, and that has my attention.
Brink: Tom, you write in your column that it would be a cruel irony if all the good that A.I. is capable of producing were squandered. In America today, we see A.I. being used for good — in university classrooms, in hospital operating rooms, in the offices of innovative companies across the country. There’s so much opportunity for good, you write. What is your fear? What is your worst nightmare?
Friedman: My worst nightmare is that someone could design an A.I. system that sounds exactly like my wife’s voice, or even create a video of someone’s wife being kidnapped. You could actually see it — their body, their voice — and it would look so real. And then they call you and demand a ransom payment. So the ability to do deepfakes with this technology is enormous, with a degree of specificity that is harrowing.
I always keep in mind that bad guys are early adopters. They were among the first adopters of the internet and social media, and they’ll be early adopters of A.I. as well.
Brink: And what are the ramifications of that? How would this nightmare affect geopolitics?
Friedman: You could start wars with this; you could create panics with this; you could do all kinds of incredibly destabilizing things.
I’m glad you asked this question, Bill, because it really goes to the heart of the article. Craig’s view and mine is that the destabilizing effects of A.I. in the hands of bad actors will hit both the U.S. and China far faster, far deeper and far earlier than any war they might fight with each other over Taiwan. And that’s why they have a mutual interest in getting this under control — because it’s coming fast, and it’s going to be internally destabilizing to both countries long before they ever resort to some kind of conventional conflict.
China already has a terrible problem with people perpetrating fraud there. Superempowered with A.I., those fraudsters will create an even bigger problem. Now, some would say that controlling A.I. becomes a great way for China to tighten its controls, period, over all its people, and they’re right about that. We have to be alert to it. We have to be very alert to it.
Brink: That speaks to your extensive experience covering geopolitical conflicts all over the world. What is it about this conflict, this coming challenge on a global scale, that you feel is different?
Friedman: Let’s think back to nuclear weapons. Nuclear weapons were basically developed by governments — only a few of them. They required giant reactors and reprocessing equipment to produce. And therefore, because of collaboration between the big nuclear nations, proliferation of nuclear weapons was relatively contained through the Non-Proliferation Treaty. Relatively — not perfectly — but relatively.
With A.I., it could be the equivalent of giving everyone a nuclear bazooka that actually learns and improves on its own with every use. And that’s why I feel so strongly about not only getting these controls in place in the United States, but also not doing what we did with social networks — sitting back and saying, “Let’s just move fast and break things,” which is what Mark Zuckerberg urged us to do. And then he broke society. He urged us to have no controls over what is published on social media platforms. And now we live in a world awash in misinformation, disinformation, and hate speech that is tearing our society apart. Well, if we follow the same advice on A.I. — to just move fast and break things — this time, we could break the whole world.
Brink: Tom, thank you so much for being with us today.
Friedman: My pleasure. Thank you, Bill.
Thoughts? Email us at [email protected].
This episode of “The Opinions” was produced by Derek Arthur. It was edited by Alison Bruzek and Kaari Pitkin. Mixing by Carole Sabouraud. Original music by Sonia Herrero, Isaac Jones and Carole Sabouraud. Fact-checking by Mary Marge Locker. Audience strategy by Shannon Busta and Kristina Samulewski. The director of Opinion Audio is Annie-Rose Strasser.
Thomas L. Friedman is the foreign affairs Opinion columnist. He joined the paper in 1981 and has won three Pulitzer Prizes. He is the author of seven books, including “From Beirut to Jerusalem,” which won the National Book Award.