DNYUZ
The Chatbot Culture Wars Are Here

July 23, 2025

For much of the last decade, America’s partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices.

Those battles aren’t over. But a new one has already started.

This fight is over artificial intelligence, and whether the outputs of leading A.I. chatbots like ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at A.I. companies for months. In March, House Republicans subpoenaed a group of leading A.I. developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri’s Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a “new wave of censorship” by training their A.I. systems to give biased responses to questions about President Trump.

On Wednesday, Mr. Trump himself joined the fray, issuing an executive order on what he called “woke A.I.”

“We are getting rid of woke,” he said in a speech on Wednesday. “The American people do not want woke, Marxist lunacy in the A.I. models, and neither do other countries.”

The order was announced alongside a new White House A.I. action plan that will require A.I. developers that receive federal contracts to ensure that their models’ outputs are “objective and free from top-down ideological bias.”

Republicans have been complaining about A.I. bias since at least early last year, when a version of Google’s Gemini A.I. system generated historically inaccurate images of the American founding fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading A.I. companies were training their models to parrot liberal ideology.

Since then, top Republicans have mounted pressure campaigns to try to force A.I. companies to disclose more information about how their systems are built, and tweak their chatbots’ outputs to reflect a broader set of political views.

Now, with the White House’s executive order, Mr. Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force A.I. companies to address their concerns.

If this playbook sounds familiar, it’s because it mirrors the way Republicans have gone after social media companies for years — using legal threats, hostile congressional hearings and cherry-picked examples to pressure companies into changing their policies, or removing content they don’t like.

Critics of this strategy call it “jawboning,” and it was the subject of a high-profile Supreme Court case last year. In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, with Republicans challenging their tactics as unconstitutional. (In a 6-to-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)

Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring A.I. companies through the federal procurement process is necessary to stop A.I. developers from putting their thumbs on the scale.

Is that hypocritical? Sure. But recent history suggests that working the refs this way can be effective. Meta ended its longstanding fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.

This time around, the critics cite examples of A.I. chatbots that seemingly refuse to praise Mr. Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as A.I. is integrated into fields like education and health care.

There are a few problems with this argument, according to legal and tech policy experts I spoke to.

The first, and most glaring, is that pressuring A.I. companies to change their chatbots’ outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration’s argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech.

“What it seems like they’re doing is saying, ‘If you’re producing outputs we don’t like, that we call biased, we’re not going to give you federal funding that you would otherwise receive,’” Genevieve Lakier, a law professor at the University of Chicago, told me. “That seems like an unconstitutional act of jawboning.”

There is also the problem of defining what, exactly, a “neutral” or “unbiased” A.I. system is. Today’s A.I. chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they’re using. And testing an A.I. system for bias isn’t as simple as feeding it a list of questions about politics and seeing how it responds.

Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration’s executive order would set “a really vague standard that’s going to be impossible for providers to meet.”

There is also a technical problem with telling A.I. systems how to behave. Namely, they don’t always listen.

Just ask Elon Musk. For years, Mr. Musk has been trying to create an A.I. chatbot, Grok, that embodies his vision of a rebellious, “anti-woke” truth seeker.

But Grok’s behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as “Mecha-Hitler.”) At other times, it acts like a liberal — telling users, for example, that man-made climate change is real, or that the right is responsible for more political violence than the left.

Recently, Mr. Musk has lamented that A.I. systems have a liberal bias that is “tough to remove, because there is so much woke content on the internet.”

Nathan Lambert, a research scientist at the Allen Institute for AI, told me that “controlling the many subtle answers that an A.I. will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.”

It’s not, in other words, as straightforward as telling an A.I. chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the “model spec,” a set of instructions given to A.I. models about how they should act — there’s no guarantee that these changes will consistently produce the behavior conservatives want.

But asking whether the Trump administration’s new rules can survive legal challenges, or whether A.I. developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, A.I. companies, like their social media predecessors, may find it easier to give in than to fight.

“Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,” Ms. Lakier said. “I’m surprised by how easily these powerful companies have folded.”

Kevin Roose is a Times technology columnist and a host of the podcast “Hard Fork.”

The post The Chatbot Culture Wars Are Here appeared first on New York Times.

Copyright © 2025.
