Will artificial intelligence bring about an intellectual revolution as profound as the Enlightenment? Enthusiasts like to make this claim, and even use the term “second Enlightenment” to describe what A.I. may ultimately generate. As a historian of the original Enlightenment, I’m intrigued by the idea — and I see some similarities. But what a close comparison in fact illuminates is not just how pernicious A.I. can be to intellectual life, but how it can undermine the very principles of the Enlightenment itself.
The parallels are, admittedly, striking. The Enlightenment — the name historians give to the intellectual upheaval that swept across the European world in the 18th century — not only filled supporters with the hope that an Age of Reason was dawning but also led them to see their historical moment as part of an evolution toward a dramatically different, and better, future.
Like the proponents of A.I., with their ceaseless talk of an imminent revolution (“the greatest reshuffling of power in history,” according to the Microsoft A.I. chief executive Mustafa Suleyman), the 18th-century French mathematician and philosopher Jean le Rond d’Alembert spoke of “a watershed in the history of the human mind” being underway and a “revolution” in ideas and history. Denis Diderot, his co-editor on the Enlightenment’s greatest single publishing project, the “Encyclopédie,” boasted of “changing the common way of thinking.”
Enlightenment authors also hoped to advance their goals by providing new ways of organizing human knowledge and conveying it to the public. The “Encyclopédie,” the most creative and original reference work in history, exemplified this ambition. Not only did its 28 volumes, published between 1751 and 1772, contain some 74,000 articles on everything from “Aabam” (an alchemical term for lead) to “Zzuéné” (a city in southern Egypt), it also mapped out a system of human knowledge in a way that deeply challenged earlier, religion-based efforts to do so. Among other things, it relegated knowledge of God to a small branch of its tree and placed theology in the same category as divination and black magic.
The knowledge it mapped out was not just abstract. Today, A.I. enthusiasts burble that with the right prompts, ChatGPT can teach you anything, from a foreign language to auto mechanics to accounting. The 11 volumes of plates in the “Encyclopédie” offered detailed instructions on virtually everything, including how to construct a mirror, furnish a bakery or cast an equestrian statue.
Enlightenment authors, like A.I. commentators, even speculated about blurring the boundary between humans and machines. Some Enlightenment inventors designed fantastically complex automatons — machines that could write, draw and play music. It is no coincidence that Mary Shelley’s 1818 novel “Frankenstein” is often read as a critique of the Enlightenment’s hubris and its supposed attempts to mimic divine creation.
A.I. is already transforming how we learn, as universities rush to incorporate the tool into their teaching. Enthusiasts like my Princeton colleague Graham Burnett integrate it into their courses and speak of how the technology will “reinvent” humanistic education. Enlightenment authors had similarly ambitious goals, sharply criticizing the traditional educational institutions of the day, which were mostly run by churches, and proposing new schemes for instruction at every level, from the crib to the doctorate.
Indeed, one of the most exciting aspects of A.I. is its ability to deliver personalized, interactive instruction on virtually any topic. As Mr. Burnett writes: “I can construct the ‘book’ I want in real time — responsive to my questions, customized to my focus, tuned to the spirit of my inquiry.” Enlightenment authors imagined books in an oddly similar manner: not as didactic tracts, but as sites in which to engage readers in virtual dialogue.
The great political philosopher Baron de Montesquieu wrote: “One should never so exhaust a subject that nothing is left for readers to do. The point is not to make them read, but to make them think.” As for Voltaire, the most famous of the French “philosophes,” he claimed, “The most useful books are those that the readers write half of themselves.” The idea of trying to engage readers actively in the reading process of course dates back to long before the modern age. But it was in the Enlightenment West that this project took on a characteristically modern form: playful, engaging, readable, succinct.
It is here, with this question of engagement, that the comparison between the Enlightenment and A.I.’s supposed “second Enlightenment” breaks down and reveals something important about the latter’s limits and dangers. When readers interact imaginatively with a book, they are still following the book’s lead, attempting to answer the book’s questions, responding to the book’s challenges and therefore putting their own convictions at risk.
When we interact with A.I., on the other hand, it is we who are driving the conversation. We formulate the questions, we drive the inquiry according to our own interests and we search, all too often, for answers that simply reinforce what we already think we know. In my own interactions with ChatGPT, it has often responded, with patently insincere flattery: “That’s a great question.” It has never responded: “That’s the wrong question.” It has never challenged my moral convictions or asked me to justify myself.
And why should it? It is, after all, a commercial internet product. And such products generate profit by giving users more of what they have already shown an appetite for, whether it is funny cat videos, instructions on how to fix small appliances or lectures on Enlightenment philosophy. If I wanted ChatGPT to challenge my convictions, I could of course ask it to do so — but I would have to ask. It follows my lead, not the reverse.
By its nature, A.I. responds to almost any query in a manner that is spookily lucid and easy to follow — one might say almost intellectually predigested. For most ordinary uses, this clarity is entirely welcome. But Enlightenment authors understood the importance of having readers grapple with a text. Many of their greatest works came in the form of enigmatic novels, dialogues presenting opposing points of view or philosophical parables abounding in puzzles and paradoxes. Unlike the velvety smooth syntheses provided by A.I., these works forced readers to develop their judgment and come to their own conclusions.
In short, A.I. can bring us useful information, instruction, assistance, entertainment and even comfort. What it cannot bring us is Enlightenment. In fact, it may help drive us further away from Enlightenment than ever.
David A. Bell is a professor of history at Princeton. He is writing a history of the Enlightenment.