The People Outsourcing Their Thinking to AI

December 1, 2025

This story is part of a series marking ChatGPT’s third anniversary. Read Charlie Warzel on the precarity that ChatGPT introduced to the world, Ian Bogost on how ChatGPT broke reality, or browse more AI coverage from The Atlantic.


Tim Metz is worried about the “Google Maps–ification” of his mind. Just as many people have come to rely on GPS apps to get around, the 44-year-old content marketer fears that he is becoming dependent on AI. He told me that he uses AI for up to eight hours each day, and he’s become particularly fond of Anthropic’s Claude. Sometimes, he has as many as six sessions running simultaneously. He consults AI for marriage and parenting advice, and when he goes grocery shopping, he takes photos of the fruits to ask if they are ripe. Recently, he was worried that a large tree near his house might come down, so he uploaded photographs of it and asked the bot for advice. Claude suggested that Metz sleep elsewhere in case the tree fell, so he and his family spent that night at a friend’s. Without Claude’s input, he said, “I would have never left the house.” (The tree never came down, though some branches did.)

I witnessed Metz’s compulsive AI use firsthand: Before I interviewed him for this article, he instructed Claude to reverse engineer the questions I might ask by using web-search tools and, if it wanted, a team of AI agents. Claude spent a few minutes searching for information on me before compiling its research into a one-pager. A section offered a mini biography on me; another detailed potential responses to questions I was likely to ask. “It did a pretty good job,” Metz told me halfway through our interview. Indeed, Claude had successfully predicted three of my interview questions.

Many people are becoming reliant on AI to navigate some of the most basic aspects of daily life. A colleague suggested that we might even call the most extreme users “LLeMmings”—yes, because they are always LLM-ing, but also because their near-constant AI use conjures images of cybernetic lemmings unable to act without guidance. For this set of compulsive users, AI has become a primary interface through which they interact with the world. The emails they write, the life decisions they make, and the questions that consume their minds all filter through AI first. “It’s like a real addiction,” Metz told me.

Three years into the AI boom, an early picture of how heavy AI use might affect the human mind is developing. For some, chatbots offer emotional companionship; others have found that bots reinforce delusional thinking (a condition that some have deemed “AI psychosis”). The LLeMmings, meanwhile, are beginning to feel the effects of repeatedly outsourcing their thinking to a computer.

[Read: AI is a mass-delusion event]

James Bedford, an educator at the University of New South Wales who is focused on developing AI strategies for the classroom, started using LLMs almost daily after ChatGPT’s release. Over time, he found that his brain was defaulting to AI for thinking, he told me. One evening, he was trying to help a woman retrieve her AirPod, which had fallen between the seats on the train. He noticed that his first instinct was to ask ChatGPT for a solution. “It was the first time I’d experienced my brain wanting to ask ChatGPT to do cognition that I could just do myself,” he said. That’s when he realized “I’m definitely becoming reliant on this.” After the AirPod incident, he decided to take a month-long break from AI to reset his brain. “It was like thinking for myself for the first time in a long time,” he told me. “As much as I enjoyed that clarity, I still went straight back to AI afterwards.”

New technologies expand human capabilities, but they tend to do so at a cost. Writing diminished the importance of memory, and calculators devalued basic arithmetic skills, as the philosopher Kwame Anthony Appiah recently wrote in this magazine. The internet, too, has rewired our brains in countless ways, overwhelming us with information while pillaging our attention spans. That AI is going to change how we think isn’t a controversial idea, nor is it necessarily a bad thing. But people should be asking, “What new capabilities and habits of thought will it bring out and elicit? And which ones will it suppress?” Tim Requarth, a neuroscientist who directs a graduate science-writing program at NYU’s school of medicine, told me.

Ines Lee, an economist based in London, told me that at times she has slipped into the habit of “not being able to start meaningful work without first consulting AI.” On her Substack, Lee has written that ChatGPT and Claude are now more seductive distractions than social-media apps such as YouTube and Instagram: She frequently turns to them to get her work done, even while feeling her critical-thinking skills may be atrophying in the process. Mike Kentz, an educator and AI-literacy consultant, told me that he, similarly, has found himself depending on chatbots for help writing emails. “Areas where I used to feel confident in my own skills and abilities—like writing concise, thorough, balanced emails—have now become areas where I consistently reach out to AI for feedback,” he wrote in a recent blog post. “The 2015 version of me would be quite disturbed.”

The trouble with AI tools is that they seem to “exploit cracks in the architecture of human cognition,” as Requarth has written. The human brain likes to conserve energy and will take available shortcuts to do so. “It takes a lot of energy to do certain kinds of thought processes,” Requarth told me; meanwhile, “a bot is sitting there offering to take over cognitive work for you.” In other words, using AI to write your emails isn’t laziness so much as it is a naturally adaptive behavior.

Chatbots are engineered to take advantage of this human tendency by offering compelling answers to any query, even if many of those answers are false or misleading. Say someone asks the chatbot an anxious question about their love life. Even if the chatbot’s responses are completely off-base or unhelpful, they give people something to do other than sit with their discomfort, Carl Erik Fisher, an addiction psychiatrist at Columbia University, told me.

Indeed, one tech worker in her 20s, who asked to remain anonymous out of embarrassment, told me that she sometimes finds herself asking Claude questions that she knows the bot can’t answer. On a recent occasion, when her friends were out late at a club and she hadn’t heard from them, she asked Claude, “What’s the probability that they’re okay?” Another time, after losing her phone, she started asking the chatbot about the chances her identity might get stolen. “Obviously, it’s not gonna know,” she told me. “I just wanted, I guess, reassurance.” On still another occasion, she asked Claude whether she should call 911 when her fire alarm kept going off. It told her not to and walked her through the steps of disabling the device.

Anthropic has raised concerns over students off-loading cognitive work to AI systems, and OpenAI has acknowledged that dependence on AI tools more generally is a problem. “People rely on ChatGPT too much. There’s young people who just say, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on,’” the company’s CEO, Sam Altman, said at a conference this summer. “That feels really bad to me.” When I reached out to ask what OpenAI is doing about compulsive use, Taya Christianson, a spokesperson for the company, told me that the start-up is actively designing features that discourage the use of ChatGPT to outsource thinking. As evidence, she pointed to OpenAI’s recent release of “study mode,” a tool that offers learners step-by-step guidance to understanding new concepts, rather than automatic answers.

But there’s a tension here. For OpenAI and other chatbot makers, dependence is the business model. The more people rely on AI for their personal and professional lives, the more these businesses stand to gain. (The Atlantic entered into a corporate partnership with OpenAI in 2024; neither the magazine’s business team nor OpenAI has any oversight over editorial work.) Many of the power users I spoke with shell out hundreds of dollars each month for premium AI subscriptions. Meanwhile, these companies are facing some financial stress: In October, Nick Turley, the head of ChatGPT, wrote to employees that OpenAI was experiencing “the greatest competitive pressure we’ve ever seen,” and the company reportedly hopes to persuade roughly 200 million users to pay for premium subscriptions over the next few years.

[Read: The Gen Z lifestyle subsidy]

Perhaps one way for AI companies to curb unhealthy dependence would be to program their chatbots to tell users to take a break, Fisher, the addiction psychiatrist, suggested. The bot could say, “‘I think you’re overthinking this. Why don’t you go for a walk?’” he said. Over the summer, OpenAI introduced a reminder that encourages users to take breaks during periods of extended use. Anthropic has also been experimenting with interventions during long conversations. Kentz told me that Claude recently interrupted a heated interaction he’d had with the chatbot while on a flight to Seattle. He had asked the bot to role-play as the audience for an upcoming presentation he was preparing for. Some of Claude’s feedback was helpful, but Kentz felt himself getting too caught up in it, arguing with the bot and even growing defensive. Eventually, Claude said, “You’re spiraling and you need to chill out,” Kentz told me.  

He found Claude’s intervention useful, but sometimes chatbots have difficulty determining what counts as unhealthy behavior. A friend of mine was recently using Claude to edit an essay when the chatbot started refusing to help. “You need to stop,” Claude wrote. “This isn’t productive editing anymore.” At one point, it demanded, “Submit your application,” adding, “I will not respond to further requests for micro-edits.” My friend was alarmed; he had simply been asking for help with grammar and word choice. Others have reported similar experiences, where basic requests for assistance have been met with unwarranted accusations of self-destructive perfectionism. (When asked about these examples, a spokesperson for Anthropic told me that the company is working to train Claude to push back when needed without being overly harsh or judgmental.)

For now, some AI power users are taking their own steps to break their dependence: Starting today, Bedford is commencing another month-long break from AI, which he has launched formally as a challenge called #NoAIDecember. The movement’s website encourages people to prioritize using their RI (as in “real intelligence”) in place of AI. The challenge is open to anyone, and a few thousand people have already signed up. Kentz is one of them, though he’s disappointed that the break from AI coincides with the holidays: He has developed a habit of using ChatGPT to help with his Christmas shopping.

