With all the hype and hysteria around AI, it’s important to remember that AI is still just a tool. As powerful as it is, it is not a promise of dystopia or utopia.
Host Garry Kasparov is joined by cognitive scientist Gary Marcus. They agree that on its own, AI is no more good or evil than any other piece of technology and that humans, not machines, hold the monopoly on evil. They discuss what we all need to do to make sure that these powerful new tools don’t further harm our precarious democratic systems.
The following is a transcript of the episode:
Garry Kasparov: In 1985, at the tender age of 22, I played against 32 chess computers at the same time in Hamburg, West Germany. Believe it or not, I beat all 32 of them. Those were the golden days for me. Computers were weak, and my hair was strong.
Just 12 years later, in 1997, I was in New York City fighting for my chess life against just one machine: a $10 million IBM supercomputer nicknamed Deep Blue. It was actually a rematch. I like to remind people that I beat the machine the year before in Philadelphia.
And this battle became the most famous human-machine competition in history.
Newsweek’s cover called it “The Brain’s Last Stand.” No pressure. It was my own John Henry moment, but I lived to tell the tale. A flurry of books compared the computer’s victory to the Wright brothers’ first flight and the moon landing. Hyperbole, of course, but not out of place at all in the history of our love-hate relationship with so-called intelligent machines.
So are we repeating that cycle of hype and hysteria? Of course, today's artificial intelligence is far more capable than any chess machine. Large language models like ChatGPT can perform complex tasks in areas as diverse as law, art, and, of course, helping our kids cheat on their homework. But are these machines intelligent? Are they approaching so-called AGI—or artificial general intelligence—that matches or surpasses humans? And what will happen when they do, if they do?
The most important thing is to remember that AI is still just a tool. As powerful and fascinating as it is, it is not a promise of dystopia or utopia. It is not good or evil, no more than any tech. It is how we use it for good or bad.
From The Atlantic, this is Autocracy in America. I’m Garry Kasparov.
[Music]
My guest is Gary Marcus. He’s a cognitive scientist whose work in artificial intelligence goes back many decades. He’s not a cheerleader for AI. Anything but. In fact, his most recent book is called Taming Silicon Valley: How We Can Ensure That AI Works for Us. He and I agree that humans, not machines, hold the monopoly on evil, and we talk about what humans must do to make sure that the power of artificial intelligence doesn’t do harm to our already fragile democratic systems.
[Music]
Kasparov: Gary Marcus, welcome to our show.
Gary Marcus: This is the Gar(r)y Show!
Kasparov: You are an expert on artificial intelligence, and you have worked on it for many decades, starting at a very young age. So before we talk about AI, I have to ask you, back then in 1997, who you were rooting for.
Marcus: Who was I rooting for?
Kasparov: Me or Deep Blue? But be honest, please. No bad blood.
Marcus: You know, in 1997 I had become disenchanted with AI. And I don’t think I had really cared that much. I knew that eventually a chess machine was going to win. I had actually played Deep Blue’s predecessor, Deep Thought, and it had kicked my ass—even, I think, with its opening book turned off or some humiliating thing like that. Not that I’m a great chess player, but you know, I saw the writing on the wall. I wasn’t really rooting; I was just watching as a scientist to see, like, Okay, when do we sort this out? And at the same time I was like, Yeah, but that’s chess, and you can brute-force it. And that’s not really what human intelligence is about. So I honestly didn’t care that much.
Kasparov: You said "brute force." With all the progress being made, would you say that machines are still relying almost exclusively on brute force, or do we see some, you know, transformation of simple quantity into quality?
Marcus: I mean, I hate to say it’s a complicated answer, but it’s a complicated answer.
Kasparov: It is a complicated answer. It’s, you know—I wouldn’t ask a simple question.
Marcus: I figured not. In some ways we’ve made real progress since then, and in some, not. The kind of brute force that Deep Blue used is different from the kind of brute force that we’re using now. You know, the brute force that beat you was able to look at an insane number of positions, essentially simultaneously, and go several moves deep and so forth.
And large language models don't actually look ahead at all. Large language models can't play chess at all. They make illegal moves. They're not very good. But what they do do is: They have a vast amount of data. If you have more data, you have a more representative something-or-other. So, like, if you take a poll of voters, the more voters you have, the more accurate the poll is. So they have a very large sample of human writing—in fact, the entire internet. And they have a whole bunch of data that they've transcribed from video and so forth. So they have more than all of the written text on the internet. That's an insane amount of data. And what they're doing every time they answer a question is they're trying to approximate what was said in this context before. They don't have a deep understanding of the context. They just have the words. They don't really understand what's going on, but that deep pile of data allows them to present an illusion of intelligence. I wouldn't actually call it intelligence. It does depend on what your definition of the term is, but what I would say is it's still brute force.
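To make that concrete, here is a minimal sketch of the kind of pattern matching Marcus is describing: a toy next-word predictor in Python, built from a made-up three-sentence corpus. Real large language models use neural networks trained on vast datasets rather than a lookup table, but the underlying idea is the same, predicting what usually came next in similar contexts, with no model of the subject matter.

```python
import random
from collections import Counter, defaultdict

# A made-up toy corpus standing in for "the entire internet."
corpus = (
    "the queen cannot jump over a knight . "
    "the queen moves any number of squares . "
    "the knight can jump over other pieces ."
).split()

# Count which word follows each word: pure pattern statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample a likely next word. No understanding of chess, only word counts."""
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts)[0]

# Generate text by repeatedly predicting the next word.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # fluent-looking fragments, with no comprehension behind them
```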
So let me come back to chess for a second. If you ask a large language model, even a recent one, to play chess, it will often make illegal moves. That’s something that a six-year-old child won’t do. And I don’t know when you learned chess, I can’t remember, but you were probably quite young, so I’m guessing you were four or something like that.
Kasparov: Five. Five and a half.
Marcus: Five. So when you were five and a half, you know, pretty much immediately, you understood the rules. So basically, you probably never made illegal moves in your chess career starting when you were a little child. And [OpenAI’s model] o3 was making them this weekend. I asked a friend to go try it out.
And when you were five and a half, you'd only seen, whatever, one game, two games, maybe 10. There are millions of games—maybe tens of millions or hundreds of millions—that are available in the training data. And the Lord knows they use any training data they can get. So there's a massive amount of data. The rules are there. Wikipedia has a rules-of-chess entry. That's in there. All of that stuff's in there. And yet still, it will make illegal moves—like have a queen jump over a knight to take the other queen.
Kasparov: Making mistakes; no, not mistakes—actually violating the rules. So again, just tell us: How come? Why? The rules are written down, and technically they can just extract all the information that is available. And they're still making illegal moves?
Marcus: Yeah. And, in fact, if you ask them verbally, they will report the rules. They will repeat the rules, because in the way that they create text based on other texts, they'll be there. So I actually tried this. I asked it: Can a queen jump over a knight? And it says: No; in chess, a queen cannot jump over any piece, including a knight. So it can verbalize that. But when it actually comes to playing the game, it doesn't have an internal model of what's going on. So even though it has enough training data that it can actually repeat what the rules are, it can't use those rules in the service of the game—because it doesn't have the right abstract representation of what happens dynamically over time in the game.
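The standard engineering response to this gap is to bolt a rule checker onto the outside of the model. Here is a minimal sketch using the open-source python-chess library; the ask_llm_for_move function is a hypothetical stand-in for whatever chatbot you query, not a real API.

```python
import chess  # pip install python-chess

def ask_llm_for_move(board: chess.Board) -> str:
    """Hypothetical stand-in for querying a chatbot for a move in UCI notation."""
    return "d1d8"  # e.g., the queen "jumping" straight up the board over its own pawn

board = chess.Board()
suggestion = ask_llm_for_move(board)
move = chess.Move.from_uci(suggestion)

# The model can recite the rules; only this external check actually enforces them.
if move in board.legal_moves:
    board.push(move)
else:
    print(f"Illegal move rejected: {suggestion}")
```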
Kasparov: Yes. It's very interesting, because it seems to me that, you know, what you are telling us is that machines know the rules because the rules are written, but they still don't know what can be done or cannot be done unless it's explicitly written. Correct?
Marcus: Well, I mean, it's worse than that. I mean, the rules are explicitly written, but there's another sense of knowing the rules—which is that we actually understand what a queen is, what a knight is, what a rook is. What a piece is. And it never understands anything. It's one of the most profound illusions of our time that most people witness these things and attribute an understanding to them that they don't really have.
Kasparov: Okay. So now, I think our audience understands why you’re often called an AI skeptic. But I believe “AI realist” is better, because I share your overall view of the future of AI and human-machine collaboration.
Marcus: Let me just drop in. I love that you called me an AI realist rather than a skeptic.
Kasparov: I share that, and I always say AI is not a magic wand, but it is not a Terminator, either. It's not a harbinger of utopia or dystopia. It's a technology. It doesn't buy you a ticket to heaven, but it doesn't open the gates of hell. So let's be realistic.
Marcus: Yeah. So let me talk about the realism first, and then the gates of hell. So on the realism side, I think you and I have a lot in common. We are both realists, both politically and scientifically. We both just wanna understand what the truth is and, you know, how that’s gonna affect society and so forth.
I mean, the fact is, I would like AI to work really well. I actually love AI. People call me an AI hater. I don’t hate AI. But at the same time, to make something good, you have to look at the limitations realistically. So that’s the first part.
Is it gonna open the gates to heaven or hell? That’s actually an open question, right? AI is a dual-use technology, like nuclear weapons, right? Can be used for good, can be used for evil. And when you have a dual-use technology on the table, you have to do your best to try to channel it to good.
Kasparov: But look, I also keep repeating that humans still have a monopoly on evil. I think, you know, we can disregard the fact that every technology can be used for good or bad, depending on who is going to use it. And I think that the greatest threat coming from the world of AI is potentially this technology being controlled and used by those who want to do us harm.
Marcus: I mostly agree with you there. First of all, neither of us is that worried about the machines becoming deliberately malicious. I don't think the chance of that is zero, but I don't think it's very high. I agree we should be worrying about malicious humans and what they might do with AI—which I think is a huge, huge concern. We also have to worry, because of the kind of AI that we have now, that it will just do really bad things by accident. Because it's so poorly connected to the world, it doesn't understand what truth is. It can't follow the rules of chess, etcetera. It can just accidentally do really bad things. And so we have to worry about, I think, the accidents and the misuse. Maybe less about the malice.
Kasparov: Now let me ask a very primitive—just a question that has no scientific background. So while analyzing our chess decisions, we always say, Okay, this part is being made through calculation; this one through recognition of patterns. Now, in your view, what percentage of these decisions or suggestions made by AI are based on calculations, and what percentage is attained through understanding? I mean, I don’t want to use the word intuition, but recognition of patterns. So let’s say: strategy versus simple tactical calculation?
Marcus: First thing is, I should clarify something, which is: There are different kinds of AI out in the world. So, for example, a GPS navigation system is all what I would call calculation and no intuition. It simply has a vast table of different locations, the routes you can take between those places, the typical times for different segments, and so forth.
All calculation. Nothing I would describe as pattern recognition. I would still call it AI. It’s not a sexy piece of AI, and it’s not what most people talk about when they talk about AI right now. Most people are talking about chatbots like ChatGPT. When Deep Blue beat you, that was all calculation. Maybe you could argue there’s a tiny bit of pattern recognition. Stockfish is now kind of a merger of the two. It’s kind of a hybrid system, which I think is the right way to go. The things that are popular mostly aren’t hybrids, although they’re increasingly kind of sneaking some hybrid stuff in the back door.
I would say they’re not doing any calculation at all. I would say that they’re all pattern recognition. A pure large language model is all pattern recognition, with no deep conceptual understanding and no deep representations at all. There’s no deep understanding even of what it means to “jump a piece” or “illegal move.” None of that is really there, so everything it does is really pattern recognition. When it does play chess, it’s recognizing other games. There’s an asterisk around this, which is they can do a little bit of analogy in certain contexts. So it’s not pure memorization; it’s not pure regurgitation. But it comes close to that, and it’s never kind of deep and conceptual.
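To make the two poles of that distinction concrete, here is the GPS-style "calculation" Marcus describes: exhaustive shortest-route search over a stored table of segment times. This is a minimal sketch in Python with made-up travel times; real navigation systems are vastly more elaborate, but the character of the computation is the same, with no pattern recognition anywhere.

```python
import heapq

# Made-up travel times (minutes) between intersections A through D.
segments = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def fastest_route(start: str, goal: str):
    """Dijkstra's algorithm: brute calculation over the table, nothing learned."""
    queue = [(0, start, [start])]
    done = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in done:
            continue
        done.add(node)
        for nxt, cost in segments[node].items():
            heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None

print(fastest_route("A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```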
Kasparov: So, before we move into politics, I will just, you know, give you some statements, and you tell me if I'm right or whether they have to be corrected.

So: This infrastructure and this whole industry have not solved the alignment problem.
Marcus: Not even close. The "alignment problem" means making machines do what you want them to do, or the things that are compatible with human values. And already we saw a great example, which is chess. You know, you tell it "I want to play chess; here are the rules of chess"—and it can't even stick to that. Now you get to something harder, like "Don't cause harm to humans," which is much more complicated—to even, you know, define what harm means, and so forth. They can't do that at all. There is no real progress, I would say, on the alignment problem. Adding more data doesn't help that much with the alignment problem. There's another thing called reinforcement learning. It helps a little, but we have nothing like a real solution to alignment.
Kasparov: Okay, so, the bottom line is that simply adding information—or just, you know, cleaning this human data and just building, you know, the skyscrapers of this data—doesn’t help very much.
So we've reached a plateau. The idea that if we simply keep, you know, piling up more and more data, we will transform this quantity into a new quality and move to the next level—it doesn't work. Because, again, there's no evidence this kind of superintelligence is going to happen tomorrow, or in the foreseeable future.
Marcus: It’s not gonna work. We will get to superintelligence eventually, but not by just feeding the beast with more data. You know, I thought what you were gonna ask me was, is this field intellectually honest? And my answer is: not anymore. AI used to be an intellectually honest field, at least for the most part.
And now we just have people hyping stuff, praying. There's actually a great phrase I heard: "Pray and prompt." Like, you pray and prompt and hope you'll get the right answer. And it's just—it's not reasonable to suppose that these things are actually gonna get us to AGI [artificial general intelligence]. But the whole field is built on that premise these days.
[Music]
Kasparov: We’ll be right back.
[Break]
Kasparov: Okay. So, to support your reputation as an AI realist and not be just on the negative side—because you already said enough, and again, I couldn't agree more with everything you just said—we have to support the reputation of those of us who believe that AI still brings something good into this world. So how do we benefit from AI's infusion into virtually every aspect of our lives?
Marcus: So I think it's a multipart answer, because AI affects so many parts of our life. Right now, the best AI for helping people, in my opinion, is not the chatbot. The best piece of AI right now, I think, is AlphaFold, which is a very specialized system that does one thing and only one thing, which is to take the [amino acids] in a protein and figure out what their 3D structure is likely to be.
It may help with drug discovery. Lots of people are trying that out. That seems like a genuinely useful piece of AI and should be a model. But I would say of the big AI companies, DeepMind is the only one seriously pursuing AI for science at scale. Most people are just like, Well, can I throw a chatbot at something?
And mostly that's not gonna lead to that much advance, as opposed to creating special-purpose solutions. I think we have to be intellectually honest about the limitations of this generation of AI and build better versions of AI and introduce new ideas and foster them. And right now we're in this place where the oxygen is being sucked outta the room, as Emily Bender once said.
And nobody can really pursue anything else. Like, all the venture funding is to do large language models and so forth. So there's the research side of it. There's a "finding the right tools for the job" side of it. There's also a legal side of it—which is, if we want AI to be a net benefit to society, we have to figure out how to use it safely and how to use it fairly and justly. If we don't—which is what's happening right now in the United States, where we're doing nothing—then of course there are gonna be lots of negative consequences.
Kasparov: Negative consequences. I think the one place where we all feel these negative consequences is politics, or things related to politics: like propaganda and simply, you know, just sharing information.
That's where AI plays a massive role, because, again, we saw various forms of AI being used to influence elections. And it seems unstoppable now. So just briefly: What do you think? Can anything be done, or have we entered the era of information wars that will be run by these chatbots? And could the sheer power behind them at some point decide the results of any election?
Marcus: This is a place where I may be an AI optimist, although not short term. So I genuinely believe that, in principle, we can build AI that could do fact-checking automatically faster than people, and I think we need that. Right now it’s sort of politically hot, so nobody even wants to touch it. But I think in the long run, that’s what we need to do.
Think about the 1890s, with all the yellow journalism of people like [William Randolph] Hearst and so forth. All bullshit. Some people think it led to war based on false facts, and that led to fact-checking being a thing. And we may return to that, because I think people are going to get disgusted by how much bullshit they are immersed in.
And I think in principle—not current AI, but future AI could actually do that at scale faster than people. And I think that could be part of the solution eventually. Part of it is political will, and right now we lack it. The far right has so politicized the notion of truth that it is hard to get people to even talk about it.
But I think that pendulum will swing back someday. Whether that happens in the United States is a very complicated question right now, but I think the world at large is not gonna be satisfied with a state of affairs where you can't trust anything. Dictators love it. It's great for them. That's why it became the Russian-propaganda model. [Vladimir] Putin loves the idea that nobody knows what to believe, and so you just kind of have to go along with what he makes you do.
Kasparov: But it seems to me that the political moment, definitely in this country and also in Europe now, is not very friendly to this notion. So fact-checking—
Marcus: Very unfriendly.
Kasparov: People believe what they want to believe. And unfortunately, fake news has this element of sensationalism that always attracts attention. And I think lies have become weapons on both sides. There are some blatant lies; there are some more covert lies.
But at the end of the day, I think no meaningful political force in this country now is interested in defending the truth, defending the pure, correct, fact-checked data—because it may, and most likely will, interfere with their political agenda. And the facts always lose the battle of public opinion these days against fake news.
Marcus: I mean, my one moment of optimism is: We saw this before in the 1890s, and eventually people got fed up. It’s not gonna happen soon, though. Right now, people are complacent and apathetic, and they have given up on truth. I could also be wrong in my rare moment of optimism. I think that things are going to get so bad that the people will resist. But I mean, that’s an open question. At least once in history, people did get fed up with that state of affairs. It is also true what you’re saying. Lies do tend to travel faster than truth. And that’s part of what happened in the social-media era; that whole thing got accelerated, right? The social-media companies don’t care about truth, and they realized they would make more money by trafficking in fake narratives. And that’s part of why we are where we are now.
Kasparov: Yeah, you've mentioned the 1890s and the early 20th century a couple of times as moments of transition. So what about, let's say, the mid-20th century, with the booming sci-fi-book industry that had many, many stories about the future influence of technology: technology dominating our society; technology interfering with democracy. The great writers, you know, predicted that at one point we would have to deal with this direct challenge of technology in the hands of the few to influence the opinion of the many. Are we now at this point?
Marcus: I keep thinking about writing a word-for-word remake of 1984, which I think was written in the late '40s. Um, you know, we are exactly in the place [George] Orwell warned us about, but with technology that makes it worse. Large language models can be… I don't know. We call them super-persuaders.
They can persuade people of stuff without people even realizing they’re being influenced. And the more data you collect on someone, the easier that job becomes. And so we are exactly living in the world that Orwell warned us about.
Kasparov: Okay. So let's talk about tech bros. They believed that all-powerful technology could actually help improve society, because society has too many problems that cannot be resolved any other way. But to lead the public, to educate the public, to control the public mind in order to cure these problems—is this threat real? And is it doable? Because some people even say that it may lead us to something called techno-fascism, where, while preserving all the elements of representative democracy, we will end up in some kind of dystopian society in which the few in charge of massive data will make election results predictable and bend them in their favor.
Marcus: I mean, that's exactly what's happening in the United States right now—techno-fascism. You know, the intent appears to be to replace most people—most federal workers—with AI, which is gonna have all the problems that we talked about. The intent is to surveil people; to get, you know, massive amounts of data and put it all together in one place, accessible to a small oligarchy. I mean, that's just what they're doing. This is not science fiction that could happen in 10 years. This is the thing that is actively happening right now, that has, you know, been happening for the last few months.
Kasparov: Question. So is it inevitable? How does society at large resist this pressure from this new tech oligarchy that has all the money, that has control of the technology? And again, let's be honest: Most of the public cares more about convenience than about, you know, the security of their devices. I mean, it's well known that people want these devices, this new technology, to bring some short-term benefits.
Marcus: iPhones are the opiates of the people.
Kasparov: Exactly. Because of our reliance on these new devices. Because we are willing to use the simplest passwords—creating a complicated one is too, you know, time consuming. So again, we ignore even the threats to our personal data. Can we rally enough people to meet this threat?
Marcus: I think the default path is what you described. I would add privacy to it. So people have given up on privacy. They won’t do the basic things on security, and they have given up an enormous amount of power. And the power hasn’t even just gone to the government. Power has really gone to the tech companies, who have enormous influence over the government.
And unless people get out of their apathy, that’s, you know, certainly where the United States is likely to stay. It’s only if there is mass action, and if people realize what has happened to them. There were huge protests specifically directed toward Elon Musk, and he was kind of, as far as I can tell, pushed aside.
Those protests were somewhat effective in mitigating some of the more egregious things that he tried to do. And so he’s at least kind of not at center stage anymore. But short of that, I think the default is the sort of dark world that we’re talking about. That, you know, reminds me a lot of contemporary Russia, where a few people have most of the power. Most people have essentially no power. And, to a surprisingly large degree, people just consent to that: giving up their freedom, giving up their privacy, maybe giving up their independence of thought as these systems start to shape their thoughts. And, to me, that’s extremely dark. Not everybody seems to understand what’s going on, and unless more people understand what’s going on, this is, you know, where we’re gonna stay.
Kasparov: Yes. So to wrap it up, can you give us just, you know, some glimpse of hope? Any idea how we can fight back by using the enormous power that AI and all these devices give to us? Because we are many; we are millions. And they are few—though a very powerful few. So what's the best bet for us to take our future back into our own hands? And also to make sure that the political institutions of the United States, this great republic, will survive its 250th anniversary, which will be celebrated next year?
Marcus: I think our powers are the same as they always were. But we're not using them. So we have powers like striking. We could have a general strike. Strikes, boycotts. You know, we could all say, Look, we are not going to use generative AI unless you solve some of these problems. Right now, the people making generative AI are sticking the public with all of the costs: the costs to the information ecosphere, the enormous climate costs of these systems.
Like, they're just sticking everything to the public. And we could say: That's not cool. You know, we would love to have AI, but make it better. Make it so it's reliable, so it's not massacring the environment, you know—and then we'll use your stuff. Right now, we could boycott it. We could say, Hey, we're not gonna do this anymore. You know, we'll come back to your tools later. They're nice, but I think we could live without them. You know, they save some time, and that's cool. But—
Kasparov: Are you sure, Gary? I mean, let's be realistic. I hate pouring cold water on your concept of our hot resistance. But do you really think that people today—I mean, it starts with students—will stop using ChatGPT?
Marcus: I think it's very unlikely. But the reality is that the students, by adding to the revenue streams and user numbers—massively; students are a huge part of it—are adding to the valuations of the companies.
They're giving companies power—and what the companies are trying to do is to keep those students from ever getting jobs. And the companies probably are gonna succeed in that, right? The people who are losing their jobs first are students. The students graduating are entering a world where junior workers aren't getting hired as much, probably in part because of AI. In some ways, they're the most screwed by all of this.
And they have given birth to this monster, because they drive the subscriptions up. So, you know, OpenAI can raise all of this money because a lot of people are using it. A large fraction of those people—I don't know the exact numbers—are students using it to write their term papers. If students just stopped doing that, it would actually undermine OpenAI. It might lead to the whole thing collapsing, and that would actually change what their employment prospects are like.
Kasparov: Yeah. I’m very skeptical about them, just—
Marcus: I’m skeptical about it too.
Kasparov: So is it fair to say that regarding AI, short term, you are pessimistic? You have very uneasy feelings. Midterm, you are optimistic, and long-term you’re bullish.
Marcus: No; it's more agnostic. It's like, I think this could work out—but we have to get off of our asses if we want it to work out. We may reach some point where people in the U.S. do fight back. We have more of an expectation, historically, of having certain kinds of freedoms than I think the Russian people do. And so it could turn around—and to that extent, yes, it makes me an optimist to think it could turn around.
[Music]
Marcus: But generally, I like the metaphor that we’re kind of on a knife’s edge and we have choice. It’s important to realize that we still have choice. It’s not all over yet. We still have some power to get ourselves on a positive AI track, but it is not the default. It is not where we’re likely to go unless we really do stand up for our rights.
Kasparov: So it’s not the most optimistic forecast, but at least it’s a call for action.
Marcus: But we could. We could take action.
Kasparov: Exactly.
Marcus: We are America, and we still could, and we should. Our fate rests 100 percent on political will.
Kasparov: Gary Marcus, thank you very much for this most enlightening conversation.
Marcus: Thank you so much for the conversation.
Kasparov: This episode of Autocracy in America was produced by Arlene Arevalo. Our editor is Dave Shaw. Original music and mix by Rob Smierciak. Fact-checking by Ena Alvarado. Special thanks to Polina Kasparova and Mig Greengard. Claudine Ebeid is the executive producer of Atlantic audio. Andrea Valdez is our managing editor.
Kasparov: I’m Garry Kasparov. See you back here next week.