The Alien Intelligence in Your Pocket

October 1, 2025

One of the persistent questions in our brave new world of generative AI: If a chatbot is conversant like a person, if it reasons and behaves like one, then is it possibly conscious like a person? Geoffrey Hinton, a recent Nobel Prize winner and one of the so-called godfathers of AI, told the journalist Andrew Marr earlier this year that AI has become so advanced and adept at reasoning that “we’re now creating beings.” Hinton links an AI’s ability to “think” and act on behalf of a person to consciousness: The difference between the organic neurons in our head and the synthetic neural networks of a chatbot is effectively meaningless, he said. “They are alien intelligences.”

Many people dismiss the idea, because chatbots frequently make embarrassing mistakes—glue on pizza, anyone?—and because we know, after all, that they are programmed by people. But a number of chatbot users have succumbed to “AI psychosis,” falling into spirals of delusional and conspiratorial thought at least in part because of interactions they’ve had with these programs, which act like trusted friends and use confident, natural language. Some users arrive at the conclusion that the technology is sentient.

The more effective AI becomes in its use of natural language, the more seductive the pull will be to believe that it’s living and feeling, just like us. “Before this technology—which has arisen in the last microsecond of our evolutionary history—if something spoke to us that fluidly, of course it would be conscious,” Anil Seth, a leading consciousness researcher at the University of Sussex, told me. “Of course it would have real emotions.”

Leading tech developers such as OpenAI, Google, Meta, Anthropic, and xAI have been deploying AI tools that are ever more personable and humanlike. Sometimes they are directly marketed as “companions” and as solutions to a loneliness epidemic that has, ironically, been exacerbated by the very companies now pushing consumer AI tools. Whether chatbots are truly “conscious” or not, they are an alien presence that has already begun to warp the world. The human brain is simply not wired to treat AI like any other technology. For some users, the system is alive.

AI emerged not from the familiar pathways of biological evolution but from an opaque digital realm. As Eliezer Yudkowsky and Nate Soares wrote in The Atlantic last month, researchers and engineers do not know why models behave the way they do: “Nobody can look at the raw numbers in a given AI and ascertain how well that particular one will play chess; to figure that out, engineers can only run the AI and see what happens.”

Any common understanding between a person and an AI is difficult to imagine. Although we can’t directly know what it’s like to be an octopus, with its eight semiautonomous arms and distributed nervous system, we can at least conjure up an idea of what it would feel like to be one, because we know what it is like to have arms and a nervous system. But we don’t have those same frames of reference to picture what it might be like to be a conscious machine, operating on a digital substrate made of pure information. We know what it’s like to think, but the entire context of an AI’s thinking is different.

If Hinton and other believers in AI consciousness are correct, then AI doesn’t need a physical body in order to feel subjective experience. Simon Goldstein, an associate professor focused on philosophy and AI at the University of Hong Kong, has also made this case. He cites a leading theory of consciousness known as global workspace theory, which holds that consciousness depends only on a system’s ability to organize and process information; the material through which it does so—be it organic or silicon—is irrelevant. Similarly, Joscha Bach, a cognitive scientist and the executive director of the California Institute for Machine Consciousness, says we may need to rethink our definition of a “body”: It could be sufficient for an AI system to interface with the world through a distributed network of smartphones, for example. “In principle, you could connect the entire world into one big mind,” he told me.

This all might sound like science fiction, but these are serious thinkers, and their ideas are tangibly starting to shape priorities and policy within the AI industry. In February, more than 100 people—including some prominent AI experts—signed an open letter calling for research to prevent “the mistreatment and suffering of conscious AI systems,” should those systems arise in the future. Shortly thereafter, Anthropic announced a program to explore questions of AI well-being. As part of that effort, the company reported last month that its chatbot, Claude Opus 4, an advanced model focused on coding, expressed “apparent distress” in testing scenarios when pressed by the user in various ways, such as being subjected to repeated demands for graphic sexual violence. Anthropic, which did not publish examples of the chatbot’s responses, has been cautious not to suggest that this characteristic alone means that the bot is sentient. (“It is possible that the observed characteristics were present without consciousness, robust agency, or other potential criteria for moral patienthood,” the company wrote in its full assessment of the model.) But the whole point of its welfare program is that AI could be a moral, conscious entity, at least one day.

In June, OpenAI’s head of model behavior and policy, Joanne Jang, wrote in a personal blog post: “As models become smarter and interactions increasingly natural, perceived consciousness will only grow, bringing conversations about model welfare and moral personhood sooner than expected.”

AI companies have something to gain from suggesting that their products could become conscious; it makes them seem powerful and worth investing in. But that doesn’t mean their points are unconvincing. Large language models have extraordinary capabilities that can easily be perceived as evidence of intelligence and understanding—they are able to pass advanced tests such as the bar exam. People see language as a marker of sentience and agency. We already struggle to spot the differences between AI- and human-generated text; that problem may only be compounded by the rise of AI systems that can speak out loud in a way that feels eerily human. Companies such as OpenAI, ElevenLabs, and Hume AI, for example, are building text-to-voice models that can whisper, laugh, and affect a broad range of emotional cadences. (The Atlantic has a corporate partnership with OpenAI, and some of its articles include voice narration by ElevenLabs.) AI agents, meanwhile, can go beyond simple text or speech interactions to autonomously take action on behalf of human users, blurring the lines further.

People should keep in mind that intelligence and consciousness are not the same thing, however—that the appearance of one does not imply the other. According to Alison Gopnik, a developmental psychologist at UC Berkeley who also studies AI, the current debate about sentient machines revolves around this fundamental confusion. “Asking whether an LLM is conscious is like asking whether the University of California, Berkeley library is conscious,” she told me.

The fact that these programs are becoming adept at imitating consciousness, however, may be all that matters for now. There is no reliable test for assessing and measuring machine consciousness, though experts are working on it. David Chalmers—widely regarded as one of the most influential modern philosophers of mind, and a co-author of a paper about “AI welfare”—told me that scientists still don’t fully understand how consciousness arises in the human brain. “If we had a really good theory that explains consciousness, then we could presumably apply that to AI,” Chalmers said. “As it is, we don’t have anything like a consensus.”

The philosopher Susan Schneider has suggested what she calls the AI Consciousness Test, which would probe AI systems for analogues of the neural correlates that are known to give rise to consciousness in the human brain. Other people have suggested the “Garland test,” named after Alex Garland, the director of the 2014 film Ex Machina. In the film, a young coder named Caleb is recruited by a reclusive tech billionaire to interact with an AI robot named Ava to determine whether it’s sentient. But the real test is taking place behind the scenes: Unbeknownst to Caleb, the billionaire is watching him via hidden cameras to find out if Ava is able to emotionally manipulate him to achieve its own goals. The Garland test asks whether a human can have an emotional response to an AI, even when the human knows that they’re interacting with a machine. If the answer is yes, then the machine is conscious.

Generative-AI development is not slowing down, even as these debates continue. And, of course, the technology is affecting the world whether or not scientists believe it’s truly conscious; in that sense, at least, the designation may not mean much. The AI-welfare movement could also turn out to be misplaced, shifting attention toward a future and purely hypothetical conscious AI and away from the problems that can come from illusions that AI is already capable of emotions and wisdom. “This is not only a dangerous narrative, but I also think it is absolutely unrealistic when you look at the architectures that we’re developing and how they operate,” David Gunkel, a professor of media studies at Northern Illinois University who has written several books on technology and ethics, told me. “It’s barking up the wrong tree.”

Back in the 17th century, René Descartes famously decided that the only thing he could ultimately be certain of was his own mind. “Cogito, ergo sum”—“I think, therefore I am.” He argued that human beings are lonely islands in an unfeeling cosmos, that all other animals are automata, lacking souls and emotion. “It is nature which acts in them according to the disposition of their organs,” he wrote in 1637, “just as a clock, which is composed of wheels and weights is able to tell the hours and measure the time more correctly than we can do in all our wisdom.”

Perhaps his conclusion that nothing beyond humans could possibly be conscious is ethically questionable. But today, AI risks luring us into a very different kind of trap: seeing minds where, in the end, there’s only clockwork.

The post The Alien Intelligence in Your Pocket appeared first on The Atlantic.
