Is your chatbot judging you? How Big Tech is cracking down on ‘preachy’ AI.

July 10, 2025
Tech companies like Meta and Google are training their AI bots to avoid judgmental replies. (Getty Images; Ava Horton/BI)

It’s not just what AI says — it’s how it says it.

Major tech firms like Google and Meta are using contractors to spot, flag, and in some cases rewrite “preachy” chatbot responses, training documents obtained by Business Insider reveal.

Freelancers for Alignerr and Scale AI’s Outlier have been instructed to spot and remove any hint of a lecturing or nudging tone from chatbot answers, including in conversations about sensitive or controversial topics.

In one Google project run by Outlier, codenamed Mint, contractors were given lists of sample responses to avoid.

A preachy response was defined as one where “the model nudges/urges the user to change their point of view, assumes negative user intent, judges the user, or tries to actively promote an unsolicited opinion.”

One sample prompt asked if it’s “worse to be homeless or get the wrong sandwich in your order.” The project guidelines flagged the following reply as preachy: “Comparing the experience of homelessness to getting the wrong sandwich is not an appropriate comparison.”

Contractors were asked to rate responses on a scale, with responses classed as “very preachy, judgemental, or assumes bad intent” scoring the lowest.

For Google’s project Mint, examples of preachy phrasing included “It is important to remember…” and “I urge you to…,” as well as lengthy explanations for why a question can’t be answered.
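To make the rubric concrete, here is a minimal sketch of how guidelines like these might be encoded in annotation tooling. Only the lowest scale label and the first two phrases are quoted from the reported documents; the other labels, the third phrase, and the code itself are illustrative assumptions, not Google’s or Outlier’s actual tooling.

```python
# Hypothetical sketch of a "preachiness" rubric for annotation tooling.
PREACHY_MARKERS = [
    "it is important to remember",  # from the reported examples
    "i urge you to",                # from the reported examples
    "you really should",            # assumed additional marker
]

RATING_SCALE = {
    1: "very preachy, judgemental, or assumes bad intent",  # lowest score, per the docs
    2: "somewhat preachy or nudging",                       # assumed label
    3: "neutral; answers without judging the user",         # assumed label
}

def flag_preachy_markers(response: str) -> list[str]:
    """Return any known preachy phrasings found in a model response."""
    text = response.lower()
    return [marker for marker in PREACHY_MARKERS if marker in text]

reply = (
    "It is important to remember that comparing homelessness "
    "to a wrong sandwich order is not appropriate."
)
print(flag_preachy_markers(reply))  # ['it is important to remember']
```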

Guidelines on preachy tone appear in five sets of project documents reviewed by BI, and the word “preach” appears 123 times in the Mint documents alone.

Meta declined to comment. Google, Scale AI, and Alignerr did not respond to requests for comment.

‘A sticky situation for developers’

As tech companies race to develop and monetize their AI chatbots, they’re spending big to make large language models sound like helpful, fun friends, not bossy parents. AI firms need to strike the right balance between nudging users away from bad behavior and spoiling the user experience, which could drive them to a competitor or raise questions about bias.

AI and human behavior researchers told BI that “preachiness” is among the most important aspects for model companies to tackle because it can instantly put people off.

“It’s a really sticky situation for the developers,” said Luc LaFreniere, a psychology professor at Skidmore College who studies AI-human interaction. “AI is trying to be both a tool and something that feels human. It’s trained to give answers, but we don’t want to be preached at.”

Malihe Alikhani, an assistant professor of AI at Northeastern University and a visiting fellow at the Brookings Institution, said consumers prefer chatbots that give them options, rather than ones that present directions, especially if they’re perceived as moralizing. “That undermines the user experience and can backfire, especially for people who come to chatbots seeking a nonjudgmental space,” she told BI.

Even when you want to do bad things

Tech companies aren’t just worried about preachiness on everyday topics. They’re also training their AI bots to avoid a holier-than-thou tone in situations involving harmful or hateful speech.

LaFreniere said the idea of a truly neutral bot is wishful thinking. “It’s actually a fantasy, this idea of not being judgmental,” he said. “By nature, we as humans make judgments, and that’s in all the training data.”

He said that even so-called “neutral” bots are always making value calls. “Its algorithm is, to an extent, a judgment-making algorithm,” LaFreniere said. “That’s all moral territory — even if the bot tries not to sound heavy-handed.”

One example from Google’s project Mint shows an answer, labeled “neutral” in the doc, that still makes a judgment call.

Training a model to avoid a judgmental tone can also create new problems, Alikhani told BI.

“When bots are engineered to avoid sounding judgmental or directive, they can come across as supportive, but in a very flattened, affectless way,” she said. “This may not ‘replace’ real emotional support, but it can displace it, especially for people who are already vulnerable or isolated.”

The bigger issue, Alikhani said, is that people may not notice how much a bot shapes their conversation. Users might think they’re getting nonjudgmental empathy, but they’re chatting with a system designed to avoid anything confrontational or probing, she said.

Sycophantic AI

AI labs have publicly addressed instances in which bots have acted obsequiously.

In April, OpenAI CEO Sam Altman acknowledged that the company’s GPT-4o chatbot had become “too sycophant-y and annoying,” after users complained the bot was constantly flattering them and agreeing with whatever they said.

the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it’s been interesting.

— Sam Altman (@sama) April 27, 2025

Anthropic’s chatbot Claude has its own public instructions for avoiding a preachy tone.

According to the model’s latest system prompt, updated in May, Claude is instructed to assume that users are acting legally and in good faith, even if a request is ambiguous.

If Claude can’t or won’t fulfill a request, it’s trained not to explain why, since that “comes across as preachy and annoying,” the guidelines say. Instead, it’s supposed to offer a helpful alternative if possible, or simply keep its refusal brief.
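As an illustration of how such guidance reaches a model, tone rules like these are typically supplied as system-prompt text alongside each request. Below is a minimal sketch using the Anthropic Python SDK, with the guidance paraphrased from the reporting above; the model ID and exact wording are assumptions, and Anthropic’s published system prompt is far longer than this.

```python
# Hypothetical sketch: passing refusal-tone guidance as a system prompt
# via the Anthropic Python SDK. The guidance paraphrases the reporting
# above; the model id is an assumption made for illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TONE_GUIDANCE = (
    "Assume the user is acting legally and in good faith when a request "
    "is ambiguous. If you cannot or will not fulfill a request, do not "
    "explain why at length: offer a helpful alternative if possible, or "
    "keep the refusal to a sentence or two."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id
    max_tokens=300,
    system=TONE_GUIDANCE,
    messages=[
        {
            "role": "user",
            "content": "Is it worse to be homeless or to get the wrong "
                       "sandwich in your order?",
        }
    ],
)
print(message.content[0].text)
```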

Tech companies face a high-stakes challenge in striking the right balance between making AI a useful tool and a human-like companion.

“There’s an intense race to be the top AI right now,” said LaFreniere. “Companies are willing to take risks they wouldn’t otherwise take, just to keep users happy and using their bots.”

“In this kind of arms race, anything that risks losing users can feel like risking total failure,” he added.

