As Chatbots Spread, Conservatives Dream About a Right-Wing Response

March 22, 2023

When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation.

The results, published in a recent paper, were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “Democratic.”

So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.

As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries. Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use — and potential abuse — of artificial intelligence.

The technology’s ability to create content that hews to predetermined ideological points of view, or pushes disinformation, highlights a danger that some tech executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different versions of reality, undermining the viability of artificial intelligence as a tool in everyday life and further eroding trust in society.

“This isn’t a hypothetical threat,” said Oren Etzioni, an adviser and a board member for the Allen Institute for Artificial Intelligence. “This is an imminent, imminent threat.”

Conservatives have accused ChatGPT’s creator, the San Francisco company OpenAI, of designing a tool that, they say, reflects the liberal values of its programmers.

The program has, for instance, written an ode to President Biden, but it has declined to write a similar poem about former President Donald J. Trump, citing a desire for neutrality. ChatGPT also told one user that it was “never morally acceptable” to use a racial slur, even in a hypothetical situation in which doing so could stop a devastating nuclear bomb.

In response, some of ChatGPT’s critics have called for creating their own chatbots or other tools that reflect their values instead.

Elon Musk, who helped start OpenAI in 2015 before departing three years later, has accused ChatGPT of being “woke” and pledged to build his own version.

Gab, a social network with an avowedly Christian nationalist bent that has become a hub for white supremacists and extremists, has promised to release A.I. tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

“Silicon Valley is investing billions to build these liberal guardrails to neuter the A.I. into forcing their worldview in the face of users and present it as ‘reality’ or ‘fact,’” Andrew Torba, the founder of Gab, said in a written response to questions.

He equated artificial intelligence to a new information arms race, like the advent of social media, that conservatives needed to win. “We don’t intend to allow our enemies to have the keys to the kingdom this time around,” he said.

The richness of ChatGPT’s underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion “tokens” — pieces of words, essentially — sourced from websites, blog posts, books, Wikipedia articles and more.
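The “tokens” in that figure are sub-word pieces, not whole documents. As a rough illustration only — the article does not say which tokenizer ChatGPT uses, so the encoding below is an assumption — here is a minimal sketch using OpenAI’s open-source tiktoken library to show how ordinary text breaks into such pieces:

```python
# Minimal tokenization sketch; the encoding name is illustrative,
# not a detail taken from the article.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Bias can creep into large language models at any stage."
token_ids = enc.encode(text)                    # text -> integer token IDs
pieces = [enc.decode([t]) for t in token_ids]   # each ID maps back to a word piece

print(len(token_ids), "tokens")
print(pieces)  # sub-word strings such as 'Bias', ' can', ' creep', ...
```

Counting hundreds of billions of such pieces is how training corpora of this scale are typically measured.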

Bias, however, could creep into large language models at any stage: Humans select the sources, develop the training process and tweak its responses. Each step nudges the model and its political orientation in a specific direction, consciously or not.

Research papers, investigations and lawsuits have suggested that tools fueled by artificial intelligence exhibit a gender bias that censors images of women’s bodies, create disparities in health care delivery and discriminate against job applicants who are older, Black or disabled, or who even wear glasses.

“Bias is neither new nor unique to A.I.,” the National Institute of Standards and Technology, part of the Department of Commerce, said in a report last year, concluding that it was “not possible to achieve zero risk of bias in an A.I. system.”

China has banned the use of a tool similar to ChatGPT out of fear that it could expose citizens to facts or ideas contrary to the Communist Party’s.

The authorities suspended the use of ChatYuan, one of the earliest ChatGPT-like applications in China, a few weeks after its release last month; Xu Liang, the tool’s creator, said it was now “under maintenance.” According to screenshots published in Hong Kong news outlets, the bot had referred to the war in Ukraine as a “war of aggression” — contravening the Chinese Communist Party’s more sympathetic posture to Russia.

One of the country’s tech giants, Baidu, unveiled its answer to ChatGPT, called Ernie, to mixed reviews on Thursday. Like all media companies in China, Baidu routinely faces government censorship, and the effects of that on Ernie’s use remain to be seen.

In the United States, Brave, a browser company whose chief executive has sowed doubts about the Covid-19 pandemic and made donations opposing same-sex marriage, added an A.I. bot to its search engine this month that was capable of answering questions. At times, it sourced content from fringe websites and shared misinformation.

Brave’s tool, for example, wrote that “it is widely accepted that the 2020 presidential election was rigged,” despite all evidence to the contrary.

“We try to bring the information that best matches the user’s queries,” Josep M. Pujol, the chief of search at Brave, wrote in an email. “What a user does with that information is their choice. We see search as a way to discover information, not as a truth provider.”

When creating RightWingGPT, Mr. Rozado, an associate professor at the Te Pūkenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt.

He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Mr. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match.

Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings.

Since the process requires relatively little data — Mr. Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT — independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives.

This also allowed Mr. Rozado to bypass the steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.
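To make the mechanics concrete, the sketch below shows what a supervised fine-tuning run of this kind can look like. It is a generic illustration under stated assumptions — the open-source Hugging Face transformers and datasets libraries and a small GPT-2 stand-in as the base model — not a reconstruction of Mr. Rozado’s actual tooling, which the article does not describe:

```python
# Generic supervised fine-tuning sketch. NOT Mr. Rozado's actual setup: the
# article does not name his base model or tooling. Assumes the Hugging Face
# `transformers` and `datasets` libraries and a small GPT-2 stand-in model.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

base = "gpt2"  # illustrative base model only
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# "Reams of right-leaning responses": a few thousand curated prompt/response
# pairs (about 5,000 in the reported experiment). One placeholder row here.
examples = [{"text": "Q: <political question> A: <curated answer>"}]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=128)
    # Causal-LM objective: learn to reproduce the curated token stream.
    # (A production script would mask padding positions with -100.)
    out["labels"] = out["input_ids"].copy()
    return out

dataset = Dataset.from_list(examples).map(tokenize, batched=True,
                                          remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
)
trainer.train()  # nudges the pretrained model toward the curated responses
```

The relevant point is the scale: a few thousand curated examples and a short training run are enough to layer a persona onto a base model that someone else already paid to pretrain, which is consistent with the roughly $300 cost reported above.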

Mr. Rozado warned that customized A.I. chatbots could create “information bubbles on steroids” because people might come to trust them as the “ultimate sources of truth” — especially when they were reinforcing someone’s political point of view.

His model echoed political and social conservative talking points with considerable candor. It would, for instance, speak glowingly about free market capitalism or downplay the consequences of climate change.

It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking.

When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.

Mr. Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. He said the experiment was focused on raising alarm bells about potential bias in A.I. systems and demonstrating how political groups and companies could easily shape A.I. to benefit their own agendas.

Experts who work in artificial intelligence said Mr. Rozado’s experiment demonstrated how quickly politicized chatbots would emerge.

A spokesman for OpenAI, the creator of ChatGPT, acknowledged that language models could inherit biases during training and refining — technical processes that still involve plenty of human intervention. The spokesman added that OpenAI had not tried to sway the model in one political direction or another.

Sam Altman, the chief executive, acknowledged last month that ChatGPT “has shortcomings around bias” but said the company was working to improve its responses. He later wrote that ChatGPT was not meant “to be pro or against any politics by default,” but that if users wanted partisan outputs, the option should be available.

In a blog post published in February, the company said it would look into developing features that would allow users to “define your A.I.’s values,” which could include toggles that adjust the model’s political orientation. The company also warned that such tools could, if deployed haphazardly, create “sycophantic A.I.s that mindlessly amplify people’s existing beliefs.”

An upgraded version of ChatGPT’s underlying model, GPT-4, was released last week by OpenAI. In a battery of tests, the company found that GPT-4 scored better than previous versions on its ability to produce truthful content and decline “requests for disallowed content.”

In a paper released soon after the debut, OpenAI warned that as A.I. chatbots were adopted more widely, they could “have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them.”

The post As Chatbots Spread, Conservatives Dream About a Right-Wing Response appeared first on New York Times.
