A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn

May 30, 2023

A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement expected to be released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.

The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)

The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Eventually, some believe, A.I. could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to talk about A.I. regulation. In Senate testimony after the meeting, Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention, and he called for regulation to address the technology’s potential harms.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has shown signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far off.

In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to prevent the spread of nuclear weapons.

Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.

In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause on the development of the largest A.I. models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”

That letter, which was organized by another A.I.-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading A.I. labs.

The brevity of the new statement from the Center for AI Safety — just 22 words in all — was meant to unite A.I. experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring, but who shared general concerns about powerful A.I. systems, Mr. Hendrycks said.

“We didn’t want to push for a very large menu of 30 potential interventions,” Mr. Hendrycks said. “When that happens, it dilutes the message.”

The statement was initially shared with a few high-profile A.I. experts, including Mr. Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of artificial intelligence. From there, it made its way to several of the major A.I. labs, where some employees then signed on.

The urgency of A.I. leaders’ warnings has increased as millions of people have turned to A.I. chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a rapid clip.

“I think if this technology goes wrong, it can go quite wrong,” Mr. Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”

The post A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn appeared first on New York Times.
