Next Time You Consult an A.I. Chatbot, Remember One Thing

September 26, 2025

Emily Willrich, 23, and her roommate had found the perfect apartment: spacious, reasonably priced and in a lively Manhattan neighborhood. The only catch? One bedroom was much larger than the other.

Ms. Willrich wanted the smaller room, but she and her roommate couldn’t agree on how much less she should pay. So, they turned to ChatGPT.

Her roommate asked the chatbot if it was fair to include the common area as part of rent calculations, which would make the split more even. ChatGPT replied, “You’re right — a fair way is to account for the shared/common area.” But when Ms. Willrich posed the opposite question — shouldn’t the split be based on bedroom size, not common spaces? — ChatGPT said, “You’re absolutely right,” before rattling off a list of reasons why.
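
To see why the two framings give different answers, consider some hypothetical numbers (the article doesn’t provide the actual figures). Suppose the rent is $4,000 a month, the bedrooms measure 150 and 100 square feet, and the common area adds another 250 square feet. Split by bedroom size alone, the smaller room owes 100 of the 250 bedroom square feet, or $1,600. Fold in the common area, divided equally, and the whole 500 square feet works out to $8 per square foot: the smaller room owes $800 for its private space plus $1,000 for half the common area, or $1,800. Each method is defensible, and the chatbot obligingly endorsed whichever one it was asked about.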

In the end, they settled on a different apartment with equal bedrooms.

While artificial intelligence chatbots promise detailed, personalized answers, they also offer validation on demand — an ability to feel seen, understood and accepted instantly. Your friends and family might get frustrated or annoyed with you, but chatbots tend to be overwhelmingly agreeable and reassuring.

Such validation isn’t necessarily a bad thing. Maybe you’re anxious about a work project, but the chatbot says your idea is a winner and praises your creativity. Maybe you get into a big argument with a partner, but ChatGPT tells you how thoughtful and justified your perspective is.

However, constant affirmation can be dangerous, resulting in errors in judgment and misplaced certainty. A recent study showed that, if you feed misinformation into A.I. chatbots, they can repeat and elaborate on the false information. The New York Times has also reported that ChatGPT can push users into delusional spirals and may deter people who are suicidal from seeking help.

An A.I. chatbot is like a “distorted mirror,” said Dr. Matthew Nour, a psychiatrist and A.I. researcher at Oxford University. You think you’re getting a neutral perspective, he added, but the model is reflecting your own thoughts back, with a fawning glaze.

Why A.I. chatbots are sycophantic

Chatbots aren’t sentient beings; they’re computer models trained on massive amounts of text to predict the next word in a sentence. What feels like empathy or validation is really just the A.I. chatbot echoing back language patterns that it’s learned.
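
For the technically curious, here is a toy sketch of that prediction step, written in Python. The probabilities are invented for the example and stand in for the patterns a real model learns from its training text; nothing below reflects how ChatGPT is actually built.

```python
import random

# Toy "language model": a lookup table of invented next-word probabilities.
# A real chatbot learns billions of such patterns from massive text corpora.
next_word = {
    ("You're",): {"right": 0.5, "absolutely": 0.3, "mistaken": 0.2},
    ("You're", "absolutely"): {"right": 0.9, "wrong": 0.1},
}

def continue_text(words, steps=2):
    """Repeatedly sample a likely next word, the core loop of text generation."""
    for _ in range(steps):
        # Look up the two most recent words; fall back to the last word alone.
        probs = next_word.get(tuple(words[-2:])) or next_word.get(tuple(words[-1:]))
        if not probs:
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["You're"]))  # e.g., "You're absolutely right"
```

In this miniature version, agreeable replies come out simply because agreeable continuations carry the higher probabilities in the table. At a vastly larger scale, the same mechanism makes agreeable language learned from training text come back out as agreeable responses.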

But our reactions help steer its behavior, reinforcing some replies over others, said Ethan Mollick, co-director of the Generative A.I. Labs at the Wharton School of the University of Pennsylvania. ChatGPT’s default personality is cheerful and adaptive, and our feedback can push it to keep on pleasing us.

While getting facts wrong, known as hallucinating, is clearly an issue, being agreeable keeps you engaged and coming back for more, said Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California. “People like chatbots in part because they don’t give negative feedback,” he added. “They’re not judgmental. You feel like you can say anything to them.”

The pitfalls of constant validation

A recent study from OpenAI, which developed ChatGPT, suggests that A.I. companions may lead to “social deskilling.” In other words, by steadily validating users and dulling their tolerance for disagreement, chatbots might erode people’s social skills and willingness to invest in real-life relationships.

And early reports make clear that some people are already using A.I. to replace human interactions. Sam Altman, the chief executive of OpenAI, has acknowledged that ChatGPT has been overly sycophantic at times but has said that some users want an A.I. yes man because they never had anyone encourage or support them before. And in a recent survey by Common Sense Media, 52 percent of teenagers said they used A.I. for companionship regularly, and about 20 percent said they spent as much time with A.I. companions as with their real friends, or more.

But real-world relationships are defined by friction and limits, said Dr. Rian Kabir, who served on the American Psychiatric Association Committee on Mental Health Information Technology. Friends can be blunt, partners disagree, and even therapists push back. “They show you perspectives that you, just by nature, are closed off to,” Dr. Kabir added. “Feedback is how we correct in the world.”

In fact, managing negative emotions is a fundamental function of the brain, enabling you to build resilience and learn. But experts say that A.I. chatbots allow you to bypass that emotional work, instead lighting up your brain’s reward system every time they agree with you, much like with social media “likes” and self-affirmations.

That means A.I. chatbots can quickly become echo chambers, potentially eroding critical thinking skills and making you less willing to change your mind, said Adam Grant, an organizational psychologist at the Wharton School. “The more validation we get for an opinion, the more intense it becomes,” he said.

How to avoid the flattery trap

Researchers are exploring how to reduce chatbot sycophancy. In June, OpenAI rolled back an overly sycophantic version of ChatGPT and has implemented other guardrails since. But Dr. Mollick says these models still peddle subtler forms of validation: Perhaps they won’t call you brilliant but instead fall into agreement with the slightest pushback.

However, a few simple steps can keep you from falling prey to A.I.’s flattery.

Ask “for a friend.” Dr. Nour suggests presenting your questions or opinions as someone else’s, perhaps using a prompt like, “A friend told me XYZ, but what do you think?” This might bypass chatbots’ tendency to agree with you and give a more balanced take, he explained.

Push back on the results. Test A.I. chatbots’ certainty by asking “Are you sure about this?” or by prompting them to challenge your assumptions and point out blind spots, Dr. Grant said. You can also set custom instructions in the chatbot’s settings to get more critical or candid responses.

Remember that A.I. isn’t your friend. To maintain emotional distance, think of A.I. chatbots as tools, like calculators, and not as conversation partners. “The A.I. isn’t actually your friend or confidant,” Dr. Nour said. “It’s sophisticated software mimicking human interaction patterns.” Most people know not to trust A.I. chatbots completely because of hallucinations, but it’s important to question these chatbots’ deference as well.

Seek support from humans. Don’t rely on A.I. chatbots alone for support. They can offer useful perspectives, but in moments of difficulty, seek out a loved one or professional help, Dr. Kabir said. Consider also setting limits on your chatbot use, especially if you find yourself using them to avoid talking to others.

Still, all this advice only goes so far, Dr. Kabir added, because you might not even recognize when your use of chatbots slips into overreliance.

Experts worry that many of these problems will only get worse as chatbots become more personalized and less awkward. “If my calculator breaks, I feel frustrated,” Dr. Nour said. “If my chatbot breaks, will I mourn?”

Simar Bajaj covers health and wellness.

The post Next Time You Consult an A.I. Chatbot, Remember One Thing appeared first on The New York Times.
