I’m generally optimistic about all the ways artificial intelligence is going to make life better — scientific research, medical diagnoses, tutoring and my favorite current use, vacation planning. But it also offers a malevolent seduction: excellence without effort. It gives people the illusion that they can be good at thinking without hard work, and I’m sorry, that’s not possible.
There’s a recent study that exposes this seduction. It has a really small sample size, and it hasn’t even been peer reviewed yet — so put in all your caveats — but it suggests something that seems intuitively true.
A group of researchers led by M.I.T.’s Nataliya Kosmyna recruited 54 participants to write essays. Some of them used A.I. to write the essays, some wrote with the assistance of search engines (people without a lot of domain knowledge are not good at using search engines to identify the most important information), and some wrote the old-fashioned way, using their brains. The essays people used A.I. to write contained a lot more references to specific names, places, years and definitions. The people who relied solely on their brains had 60 percent fewer references to these things. So far so good.
But the essays written with A.I. were more homogeneous, while those written by people relying on their brains contained a wider variety of arguments and points. Later the researchers asked the participants to quote from their own papers. Roughly 83 percent of the large language model, or L.L.M., users had difficulty quoting from their own paper. They hadn’t really internalized their own “writing,” and little of it sank in. People who used search engines were better at quoting their own points, and people who used just their brains were a lot better.
Almost all the people who wrote their own papers felt they owned their work, whereas fewer of the A.I. users claimed full ownership of their work. Here’s how the authors summarize this part of their research:
The brain-only group, though under greater cognitive load, demonstrated deeper learning outcomes and stronger identity with their output. The search engine group displayed moderate internalization, likely balancing effort with outcome. The L.L.M. group, while benefiting from tool efficiency, showed weaker memory traces, reduced self-monitoring and fragmented authorship.
In other words, more effort, more reward. More efficiency, less thinking.
But here’s where things get scary. The researchers used an EEG headset to look at the inner workings of their subjects’ brains. The subjects who relied only on their own brains showed higher connectivity across a bunch of brain regions. Search engine users experienced less brain connectivity and A.I. users least of all.
Researchers have a method called dynamic directed transfer function, or D.D.T.F., which measures the coherence and directionality of neural networks and can be interpreted in the context of executive function, attention regulation and other related cognitive processes. The brain-only writers had the highest D.D.T.F. connectivity. The search engine group demonstrated between 34 and 48 percent lower total connectivity, and the A.I. group demonstrated up to 55 percent lower D.D.T.F. connectivity.
The researchers conclude, “Collectively, these findings support the view that external support tools restructure not only task performance but also the underlying cognitive architecture.”
In their public comments over the past few weeks, the authors of the study have been careful not to overhype their results. But the neuroscience cliché is that neurons that fire together wire together. That’s the key implication here. Thinking hard strengthens your mental capacity. Using a bot to think for you, or even just massaging what the bot gives you, is empty calories for the mind. You’re robbing yourself of an education and diminishing your intellectual potential.
It’s not clear how many students use A.I. to write their papers. OpenAI says one in three students uses its products. I think that’s a vast underestimate. About a year ago I asked a roomful of college students how many of them used A.I., and almost every hand went up. There’s a seductiveness to the process. You start by using A.I. as a research tool, but then you’re harried and time-pressured, and before long, A.I. is doing most of the work. I was at a conference of academics last month in Utah, and one of the professors said something that haunted me: “We’re all focused on the threat posed by Trump, but it’s A.I. that’s going to kill us.”
Hua Hsu recently published a piece in The New Yorker titled “What Happens After A.I. Destroys College Writing?,” which captures the dynamic. Hsu interviewed a student named Alex who initially insisted that he used A.I. only to organize his notes. When they met in person, he admitted that wasn’t remotely true. “Any type of writing in life, I use A.I.,” Alex said. Then he joked, “I need A.I. to text girls.”
In 1960 college students were assigned about 25 hours a week of homework, and by 2015 that number was closer to 15. But most students I encounter are frantically busy, much busier than I remember my friends and me being, often with many student activities overshadowing academic work. So of course they are going to use a timesaving technology to take care of what they consider to be that trivial stuff that gets assigned in the classroom.
A.I. isn’t going anywhere, so the crucial question is one of motivation. What do students, and all of us, really care about — clearing the schedule or becoming educated? If you want to be strong, you have to go to the gym. If you want to possess good judgment, you have to read and write on your own. Some people use A.I. to think more — to learn new things, to explore new realms, to cogitate on new subjects. It would be nice if there were more stigma and more shame attached to the many ways it’s possible to use A.I. to think less.
David Brooks is an Opinion columnist for The Times, writing about political, social and cultural trends. @nytdavidbrooks