The hidden danger of ChatGPT and generative AI | The AI Beat

December 5, 2022

Since OpenAI launched its early demo of ChatGPT last Wednesday, the tool already has over a million users, according to CEO Sam Altman — a milestone, he points out, that took GPT-3 nearly 24 months to get to and DALL-E over 2 months. 

The “interactive, conversational model,” based on the company’s GPT-3.5 text-generator, certainly has the tech world in full swoon mode. Aaron Levie, CEO of Box, tweeted that “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.” Y Combinator cofounder Paul Graham tweeted that “clearly something big is happening.” Alberto Romero, author of The Algorithmic Bridge, calls it “by far, the best chatbot in the world.” And even Elon Musk weighed in, tweeting that ChatGPT is “scary good. We are not far from dangerously strong AI.” 

But there is a hidden problem lurking within ChatGPT: it quickly spits out eloquent, confident responses that often sound plausible and true even when they are not. 

ChatGPT can sound plausible even if its output is false

Like other generative large language models, ChatGPT makes up facts. Some call it “hallucination” or “stochastic parroting,” but these models are trained to predict the next word for a given input, not to check whether a fact is correct. 
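
To make that objective concrete, below is a minimal sketch of next-word prediction using the openly available GPT-2 model via Hugging Face's transformers library (ChatGPT itself is not publicly downloadable, so GPT-2 stands in here; the prompt and the rest of the snippet are illustrative assumptions). Given a prompt, the model ranks every token in its vocabulary by probability, and nothing in that calculation checks whether the continuation is factually true.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustration of the next-token training objective with an open model (GPT-2),
# not ChatGPT's actual stack: the model scores every possible next token by
# probability, and nothing in that score consults a source of truth.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>15}  p={prob.item():.3f}")

Whichever continuation the training data makes most probable wins, true or not; that is exactly the failure mode described above.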

Some have noted that what sets ChatGPT apart is that it is so darn good at making its hallucinations sound reasonable. 

Technology analyst Benedict Evans, for example, asked ChatGPT to “write a bio for Benedict Evans.” The result, he tweeted, was “plausible, almost entirely untrue.” 

More troubling is the fact that for an untold number of queries, the user can only tell that an answer is untrue if they already know the answer to the question they posed. 

That’s what Arvind Narayanan, a computer science professor at Princeton, pointed out in a tweet: “People are excited about using ChatGPT for learning. It’s often very good. But the danger is that you can’t tell when it’s wrong unless you already know the answer. I tried some basic information security questions. In most cases the answers sounded plausible but were in fact BS.” 

Fact-checking generative AI

Back in the waning days of print magazines in the 2000s, I spent several years as a fact-checker for publications including GQ and Rolling Stone. Each fact had to include authoritative primary or secondary sources — and Wikipedia was frowned upon. 

Few publications have staff fact-checkers anymore, which puts the onus on reporters and editors to make sure they get their facts straight — especially at a time when misinformation already moves like lightning across social media, while search engines are constantly under pressure to surface verifiable information and not BS. 

That’s certainly why Stack Overflow, the Q&A site for coders and programmers, has temporarily banned users from sharing ChatGPT responses. 

And if Stack Overflow can’t keep up with AI-generated misinformation, it’s hard to imagine others being able to manage a tsunami of potential AI-driven BS. As Gary Marcus tweeted, “If StackOverflow can’t keep up with plausible but incorrect information, what about social media and search engines?”

And while many are salivating at the idea that LLMs like ChatGPT could someday replace traditional search engines, others are strongly pushing back. 

Emily Bender, professor of linguistics at the University of Washington, has long pushed back on this notion. 

She recently emphasized again that LLMs are “not fit” for search — “both because they are designed to just make sh** up and because they don’t support information literacy.” She pointed to a paper on the topic that she co-authored, published in March. 

Is it better for ChatGPT to look right? Or be right? 

BS is obviously something that humans have perfected over the centuries. And ChatGPT and other large language models have no idea what it means, really, to “BS.” But OpenAI made this weakness very clear in its blog post announcing the demo and explained that fixing it is “challenging,” saying: 

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.” 
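
The first of those points is the crux: in reinforcement learning from human feedback, the reward signal comes from human raters comparing candidate answers, not from a ground-truth oracle. The toy sketch below (the answers and scores are entirely hypothetical, invented only to show the shape of the problem) illustrates how a fluent but wrong answer can out-score a cautious, correct one when the signal measures preference rather than accuracy.

# Toy illustration of a preference-based reward signal; all data here is hypothetical.
# A reward model fit to human comparisons scores answers by how much raters liked them;
# nothing in that score encodes whether an answer is factually correct.

candidate_answers = {
    "confident_but_wrong": "The Great Wall of China is clearly visible from the Moon.",
    "cautious_and_correct": "I'm not certain, but astronauts report it is not visible from the Moon.",
}

# Hypothetical preference scores: raters often reward fluency and confidence.
reward_model_score = {
    "confident_but_wrong": 0.83,
    "cautious_and_correct": 0.61,
}

# The policy is nudged toward whichever answer the reward model rates highest.
preferred = max(reward_model_score, key=reward_model_score.get)
print(f"Reinforced answer: {candidate_answers[preferred]}")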

So it’s clear that OpenAI knows perfectly well that ChatGPT is filled with BS under the surface. They never meant the technology to offer up a source of truth. 

But the question is: Are human users okay with that? 

Unfortunately, they might be. If it sounds good, many humans may think that’s good enough. And, perhaps, that’s where the real danger lies beneath the surface of ChatGPT. The question is, how will enterprise users respond?

The post The hidden danger of ChatGPT and generative AI | The AI Beat appeared first on VentureBeat.
