In the past decade, the AI revolution has kicked into high gear.
Artificial intelligence is playing strategy games, writing news articles, folding proteins, and teaching grandmasters new moves in Go. AI systems determine what you’ll see in a Google search or in your Facebook News Feed. They are being developed to improve drone targeting and detect missiles.
But there’s another way the field of artificial intelligence has been transformed in the past 10 years: concerns about its societal effects are now being taken much more seriously.
There are many possible reasons for that, of course, but one driving factor is the pace of progress in AI over the past decade. Ten years ago, many people felt confident in asserting that truly advanced AI, the kind that surpasses human capabilities across many domains, was centuries away. Now, that’s not so clear, and AI systems powerful enough to raise serious ethical questions are already among us.
For a better understanding of why AI poses an increasingly significant — and potentially existential — threat to humanity, check out Future Perfect’s coverage below.
- AI companies are trying to build god. Shouldn’t they get our permission first?
- California’s governor has vetoed a historic AI safety bill
- OpenAI as we knew it is dead
- The new follow-up to ChatGPT is scarily good at deception
- People are falling in love with — and getting addicted to — AI voices
- It’s practically impossible to run a big AI company ethically
- Traveling this summer? Maybe don’t let the airport scan your face.
- OpenAI insiders are demanding a “right to warn” the public
- The double sexism of ChatGPT’s flirty “Her” voice
- “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded
- Some say AI will make war more humane. Israel’s war in Gaza shows the opposite.
- Elon Musk wants to merge humans with AI. How many brains will be damaged along the way?
- How copyright lawsuits could kill OpenAI
- There are too many chatbots
- You thought 2023 was a big year for AI? Buckle up.
- OpenAI’s board may have been right to fire Sam Altman — and to rehire him, too
- AI that’s smarter than humans? Americans say a firm “no thank you.”
- Google’s free AI isn’t just for search anymore
- What normal Americans — not AI companies — want for AI
- Biden sure seems serious about not letting AI get out of control
- AI is a “tragedy of the commons.” We’ve got solutions for that.
- No, AI can’t tell the future
- Four different ways of understanding AI — and its risks
- AI automated discrimination. Here’s how to spot it.
- What will stop AI from flooding the internet with fake images?
- Can you safely build something that may kill you?
- Please don’t turn to ChatGPT for moral advice. Yet.
- The promise and peril of AI, according to 5 experts
- Mind-reading technology has arrived
- Finally, a realistic roadmap for getting AI companies in check
- AI is flooding the workplace, and workers love it
- What happens when ChatGPT starts to feed on its own writing?
- AI leaders (and Elon Musk) urge all labs to press pause on powerful AI
- Elon Musk thinks we’re close to solving AI. That doesn’t make it true.
- There’s something missing from the White House’s AI ethics blueprint
- Why it’s so damn hard to make AI fair and unbiased
- A new AI draws delightful and not-so-delightful images
- AI’s Islamophobia problem
- Artificial intelligence can now design new antibiotics in a matter of days
- AI has cracked a problem that stumped biologists for 50 years. It’s a huge deal.
- The case for taking AI seriously as a threat to humanity
- Kids’ brains may hold the secret to building better AI
- How researchers are using Reddit and Twitter data to forecast suicide rates
- How a little electrical tape can trick a Tesla into speeding
- How a basic iPhone feature scared a senator into proposing a facial recognition moratorium
- Why algorithms can be racist and sexist
- Is your college using facial recognition on you? Check this scorecard.
- Robot priests can bless you, advise you, and even perform your funeral
- AI can now outperform doctors at detecting breast cancer. Here’s why it won’t replace them.
- Don’t want to read privacy policies? This AI tool will do it for you.
- Facial recognition tech is a problem. Here’s how the Democratic candidates plan to tackle it.
- You know the “enhance” function TV cops use on pictures? It’s real now.
- San Francisco banned facial recognition tech. Here’s why other cities should too.
- A poetry-writing AI has just been unveiled. It’s … pretty good.
- Some AI just shouldn’t exist
- Why the world’s leading AI charity decided to take billions from investors
- AI triumphs against the world’s top pro team in strategy game Dota 2
- Exclusive: Google cancels AI ethics board in response to outcry
- The AI breakthrough that won the “Nobel Prize of computing”
- A quarter of Europeans want AI to replace politicians. That’s a terrible idea.
- Bill Gates: AI is like “nuclear weapons and nuclear energy” in danger and promise
- How will AI change our lives? Experts can’t agree — and that could be a problem.
- An AI helped us write this article
- The case that AI threatens humanity, explained in 500 words
- StarCraft is a deep, complicated war strategy game. Google’s AlphaStar AI crushed it.
- The American public is already worried about AI catastrophe