In the future, if there is a future, the current moment in artificial intelligence will be remembered as a thick penumbra, a pea-soup fog of confusion and mystery and fraud and silliness.
It’s been nearly three years since the arrival of ChatGPT, the fastest-growing consumer application in history; everybody who has access to a computer has almost certainly tried it. Yet public familiarity has not lessened the grandeur and emptiness of the claims about A.I. The feared and promised advent of superintelligent robot overlords who will destroy humanity remains just around the corner — as it was last year, and the year before that.
It turns out that A.I. may not be the end of the world; it may only be the end of interns at law firms. Whatever the case, in the public consciousness artificial intelligence can never be just another tool. It is either the gold rush or the apocalypse.
That extreme polarization makes A.I. the perfect subject for a book, in a way: You can project whatever anxieties or hopes you possess onto it, and no one can definitively state that you’re wrong. The new raft of A.I. books out this season amply demonstrates the condition of intellectual maximalism and minimal clarity that pervades the discourse around the stuff.
If Anyone Builds It, Everyone Dies
by Eliezer Yudkowsky and Nate Soares
Yudkowsky is a prominent A.I. researcher who co-founded the Machine Intelligence Research Institute, a nonprofit aimed at mitigating the technology’s risks, where his co-author, Soares, serves as president. In IF ANYONE BUILDS IT, EVERYONE DIES: Why Superhuman AI Would Kill Us All (Little, Brown, 272 pp., $30), the pair attempt to give the “robot overlord” hypothesis its fullest expression.
Their book’s claim is simple enough: “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die.” The authors cannot be faulted for indirectness.
Critics of A.I. doomerism maintain that the mind-set suffers from several interlocking conceptual flaws, including that it fails to define the terms of its discussion — words like “intelligence” or “superintelligence” or “will” — and that it becomes vacuous and unspecific at key moments and thus belongs more properly to the realm of science fiction than to serious debates over technology and its impacts. Unfortunately, “If Anyone Builds It, Everyone Dies” contains all these flaws and more. The book reads like a Scientology manual, the text interspersed with weird, unhelpful parables and extra notes available via QR codes.
Chapter 3, “Learning to Want,” is typical. It begins with a conversation between a professor and a student about whether a chess computer “wants” to win at chess. The authors then continue with an inexcusably lazy explanation of why the computer chess program does what it does: “We’ll describe it as ‘wanting’ to win. In saying this, we’re not commenting one way or the other on whether a machine has feelings. Rather, we need some word to describe the outward winning behavior and ‘want’ seems closest.”
From that vague foundation they proceed to extrapolate not only fantasies of the end of the world, but frantic, pointless questions like “But wouldn’t the A.I. keep us around as pets?”
“How many families would still own an original biological dog,” the authors wonder, “if with biotechnology you could make a synthetic sort of dog that was just as bouncy and cuddly and cheerful, and never threw up on your couch or got sick and tragically died?” Following their unspooling tangents evokes the feeling of being locked in a room with the most annoying students you met in college while they try mushrooms for the first time.
The AI Con
by Emily M. Bender and Alex Hanna
In THE AI CON: How to Fight Big Tech’s Hype and Create the Future We Want (Harper, 274 pp., $32), Bender, a computational linguist, and Hanna, a tech sociologist, look at artificial intelligence and see not a force for destruction or creation, but a colossal scam. They have found a rich lode to mine; the hype machine around artificial intelligence has entered its rococo period.
The authors carefully peek behind the claims of self-driving cars to see just how much human supervision they require. (A lot.) They delve into the upsetting and macabre history of using A.I. in social services. They gleefully dissect, in detail, the numerous debacles of early attempts to automate processes in law and politics that cannot be automated. They point out what everybody knows by now: that the use of A.I. in education doesn’t lead to better or more efficient instruction; it leads to the performance of educational tasks without the fulfillment of their original purpose: learning.
Bender and Hanna conclude that “there is ample evidence that the ability of language model-based systems to score well on benchmarks that ostensibly test for language understanding is a kind of Clever Hans effect” — Clever Hans being the early-20th-century horse trained to do arithmetic whose “calculations” were eventually exposed as responses to subtle signals from his master. The authors are excellent at tearing down Silicon Valley overstatement, and their skepticism is a welcome corrective.
There’s just one problem. You can use ChatGPT right now, and it is astonishing. “To put it bluntly,” Bender and Hanna write at one point, “‘A.I.’ is a marketing term.” Is it? Silicon Valley has a history of vastly overrating itself and ignoring its own dangers. But that doesn’t mean its products aren’t occasionally miracles.
The authors mention one of A.I.’s first casualties, a man known as Pierre who died by suicide in 2023 after consulting a chatbot for therapy. Bender and Hanna cite his death as a typical example of hype, revealing the chatbot’s uselessness as a therapeutic tool. Yet no merely mechanical object has ever talked somebody into suicide before. That is evidence of an extraordinary new power, no?
How to Think About AI
by Richard Susskind
The shelf of general guides to artificial intelligence is crowded by now, but HOW TO THINK ABOUT AI: A Guide for the Perplexed (Oxford, 202 pp., $13.99) is one of the best, filled with real insight and common sense, refusing either fear-mongering or the casual dismissal of more opinionated takes. Susskind, a prolific British writer on A.I., has been studying the subject since the 1980s, and both fear and loathing diminish with perspective.
The problem with having a balanced and intelligent perspective on our automated future is that you are mostly left with paragraphs like this one: “Thinking systematically about the risks of A.I. requires two lines of thinking. First of all, we have to establish what it is we want and don’t want. Secondly, we then have to find the most effective way of imposing our preferences.”
This is true for A.I., as far as it goes, but it’s also true for my plans to make dinner for my family this evening. (Then again, in the world of A.I. players, even such ostensibly basic assumptions as whether “we want” humanity to persist in a future of superintelligent machines sometimes seem in doubt. When the question was posed recently to the billionaire tech investor Peter Thiel, he appeared unsure.)
Susskind is honest and clear, but at this juncture in the history of artificial intelligence, honesty and clarity are unfortunately deeply unsatisfying. “We do not have the vocabulary and concepts to capture and discuss the way that our increasingly capable systems work,” Susskind writes. “Instead, we root our debate in language that relates to humans.” He is absolutely correct. I have been waiting for somebody to say this in a book for years. The truth is that when you turn a decent and informed mind like Susskind’s on to the state of artificial intelligence, the most perceptive thing he has to say is that we don’t know much.
But at least his book is not one-sided or catastrophizing. It is frank about the confusion and mystery that any candid approach to artificial intelligence entails, and that is as good a record as exists right now.
The post A.I. Bots or Us: Who Will End Humanity First? appeared first on New York Times.