A.I. Is Being Built by People Who Think It Might Destroy Us

March 27, 2023

All of a sudden, it’s as if we’ve come face to face with a new species on our smartphone screens.

Over the past few months, A.I. chatbots have swarmed into the country’s social media feeds and, to some extent, its nightmares, with transcript after transcript triggering a collective, millenarian form of what the 19th-century critic John Ruskin memorably called the pathetic fallacy — the very human tendency to project onto nonhuman beings those features we see as the quintessential attributes of humanity. With ChatGPT and Bing Chat, we aren’t just projecting depth or pathos or inner life as we implore them to “narrate the invasion of Iraq in lyrics suitable for a Disney princess” or “explain to a 4-year-old why King Tut’s death mask was made for a woman”; we’re reading our own existential panic into their responses, seeing them less as robot pets than as so many Frankenstein’s monsters, even when they are simply following our commands.

How menacing are the chatbots? They are still routinely making mistakes so basic that it seems pointlessly mystical to refer to them as “hallucinations,” as machine learning engineers and A.I. theorists alike tend to. (Was the A.I. microdosing or just wrong when it suggested that LeBron James had a pretty good chance of winning the N.B.A. M.V.P. this year?) They are prone to misinformation, and biased in quite conventional if disturbing ways. Some are trained on databases that do not extend to the present day, so that any questions about recent events (such as the run on Silicon Valley Bank) are likely to generate useless or counterproductive answers — making today’s leading “use case” for these tools, as a form of online search, a bit hard to understand.

But A.I. is also exhibiting some plainly disorienting progress, not just on concrete tasks but on unnerving ones: a chatbot hiring a human TaskRabbit to solve a captcha, another writing its own Python code to enable its “escape.” These are not examples of robot autonomy so much as performances of ready-made anxieties — in each case, they were prompted by human observers to test guardrails — and yet they still disquiet, signs that something strange and disruptive is absolutely afoot.

The tech is moving so quickly that it may seem presumptuous to believe that we already know what to make of it all. But many of those who have spent the last decade neck-deep in machine learning believe they do, in fact, know, and that we need to be thinking in quite dire terms. It’s common to hear invocations of the A.I. revolution as an event as significant as the arrival of the internet — but it’s one thing to prepare for a cultural earthquake like the internet and another to be preparing for the equivalent of nuclear war. And it is especially remarkable, given the pervasive utopianism of the internet’s original architects, just how dystopian those ushering in its next phase seem to be about the very new world they believe they are spawning.

“Last time we had rivals in terms of intelligence they were cousins to our species, like Homo neanderthalensis, Homo erectus, Homo floresiensis, Homo denisova and more,” the neuroscientist Erik Hoel wrote in one much-passed-around meditation on the current state of play, with the subtitle “Microsoft’s new A.I. really does herald a global threat.” Hoel went on: “Let’s be real: After a bit of inbreeding we likely murdered the lot.”

More outspoken cries of worry have been echoing across the internet now for months, including from Eliezer Yudkowsky, the godfather of A.I. existentialism, who lately has been taking whatever you’d call the opposite of a victory lap to despair over the progress already made by A.I. and the failure to erect real barriers to its takeoff. We may be on the cusp of significant breakthroughs in A.I. superintelligence, Yudkowsky told one pair of interviewers, but the chances we will get to observe those breakthroughs playing out are slim, “because we’ll all be dead.” His advice, given how implausible he believes a good outcome with A.I. appears to be, is to “go down fighting with dignity.”

Even Sam Altman — the mild-mannered, somewhat normie chief executive of OpenAI, the company behind the most impressive new chatbots — has publicly promised “to operate as though these risks are existential,” and suggested that Yudkowsky might well deserve the Nobel Peace Prize for raising the alarm about the risks. He also recently wrote that “A.I. is going to be the greatest force for economic empowerment and a lot of people getting rich we have ever seen,” and joked in 2015 that “A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” A year later, in a New Yorker profile, Altman was less ironic about the bleakness of his worldview. “I prep for survival,” he acknowledged — meaning eventualities like a laboratory-designed superbug, nuclear war and an A.I. that attacks us. “My problem is that when my friends get drunk they talk about the ways the world will end,” he said. “I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

This may not be a universal view among those working on artificial intelligence, but it also is not an uncommon one. In one much-cited 2022 survey, A.I. experts were asked: “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median estimate was 10 percent — a one in 10 chance. Half the responses rated the chances higher. In another poll, nearly one-third of those actively working on machine learning said they believed that artificial intelligence would make the world worse. My colleague Ezra Klein recently described these results as mystifying: Why, then, would you choose to work on it?

There are many possible answers to this question, including that ignoring growing risks in any field is a pretty good way to make them worse. Another is that the respondents don’t entirely believe their answer, and are instead articulating how significant they believe A.I. to be by resorting to theological and mythological reference points. But another partial explanation could be that, to some, at least, the apocalyptic possibilities look less like downsides than like a kind of enticement — that those answering survey questions in self-aggrandizing ways may be feeling, beyond the tug of the pathetic fallacy, some mix of existential vanity and an almost wishful form of end-of-days prophecy.

In recent years, this critique of catastrophist thinking has been regularly and conspicuously leveled by complacent centrists and patronizing graybeards against the alarmist fringe of the climate movement — yes, warming was happening, they acknowledged, and yes, it represented a challenge to the world’s collective status quo, but still, all of this hyperbolic talk was, let’s be honest, a bit much. More recently, commentators fretting over the mental health crisis in U.S. teenagers have linked the spikes in despair to catastrophist thinking on the progressive left more broadly.

But look elsewhere on the political spectrum and you can find a similar fatalism, often incubated online, that allows catastrophists to extrapolate and even braid their various fears — about imminent ecosystem collapse and mass extinction in this corner of the internet, or low birthrates and “surplus men” in that one; about worldwide bank runs and the crisis of fiat currency here, and hyperinflation and a global debt crisis there; about mass permanent disability from Covid infection over here and permanent lockdowns and rampant cardiac disaster from vaccination over there.

Some of these fears are better grounded than others — your mileage may vary, as they used to say in the internet’s more sociable age. But it’s clear that catastrophic thinking isn’t some isolated or idiosyncratic phenomenon you can cordon off or excise from the culture. Soft millenarianism has become so much a universal grammar that even the high priests of technological progress, the self-appointed architects of our brave new world, can’t manage to escape it.

The post A.I. Is Being Built by People Who Think It Might Destroy Us appeared first on New York Times.
