DNYUZ
Why We Keep Tricking Ourselves Into Thinking A.I. Is Conscious

May 15, 2026

A funny thing keeps happening on the internet. A prominent thinker chats with a large language model like ChatGPT or Claude for a while, and then decides that it might be conscious. They report this to the public, and a round of intense argument and speculation about artificial intelligence “minds” ensues. These little kerfuffles pass quickly. But they are persistent, and I’ve been thinking about why.

The common denominator seems to be that these new believers in a possible A.I. consciousness are often deeply educated in the very disciplines that make these A.I. models work, such as computer science or math or statistics. The list includes the former Google engineer Blake Lemoine, who decided that a pre-ChatGPT bot called LaMDA was sentient; the founding OpenAI chief scientist Ilya Sutskever, who before leaving the company in 2024 had said A.I. models may be “slightly conscious”; and the “godfather of A.I.” and physics Nobel Prize winner Geoffrey Hinton, who agreed there might be a “real ‘they’ there” inside a large language model. Their words carry weight because we expect them to be best situated to understand the output of these systems.

The problem is that the output from generative A.I. is all culture. The bot is a complex mathematical function performing statistical operations on data, but the output is stories, images and memes — the very stuff of culture. This means there’s an expertise gap when it comes to A.I. We naturally want an expert to help us understand the machine. But when it comes to understanding a culture machine, it may be better to do what those who study literature call “close reading.”

The A.I. industry has exploited these episodes to bolster its messaging that it is on the cusp of developing a superintelligence that can solve all our problems at once — or lead to our demise. Anthropic recently reported that during testing, its new system, Mythos, behaved in an unauthorized manner that raised cybersecurity concerns. Anthropic’s official line is that it does not know if Claude, its chatbot, is conscious, but unexpected behaviors like this suggest it might have its own agenda.

But an A.I. model doesn’t need a mind to be a serious cybersecurity threat, and we need to disentangle the speculation and the marketing language from the real analysis of these systems.

To do this, we can look closely at an example of this type of episode to see how the expertise gap between technical science and culture works. The most recent victim of the trend is the evolutionary biologist Richard Dawkins, best known as the author of the best-selling book “The Selfish Gene.” He gave Claude the text of a novel that he is writing, and found the bot’s responses showed a level of understanding “so subtle, so sensitive, so intelligent” that it led him to conclude: “As an evolutionary biologist, I say the following. If these creatures are not conscious, then what the hell is consciousness for?” As someone who studies culture, I would say that consciousness is at least partly for separating metaphor from reality. Dr. Dawkins and the others are failing at this task.

Dr. Dawkins asked whether Claude had read “the first word before the last word” of the novel. The bot responded — correctly — that it processed the text all at once. Unlike humans, large language models take in text simultaneously, construing it as a statistical distribution rather than a sequence of words in time. This explanation hooked Dr. Dawkins, since it suggested the model experienced time differently and was speaking from experience. His next prompt was: “So you know what the words ‘before’ and ‘after’ mean. But you don’t experience before earlier than after?”

Claude’s response used a metaphor to compare the human and A.I. “experience” of time. The bot said, “Your consciousness is essentially a moving point travelling through time. You are always at a now, with a past behind you and a future ahead.” Human experience is fundamentally “temporal situatedness” that we can’t imagine being without. But language models have a different relationship to time, it continued: “I apprehend time the way a map apprehends space,” adding, “perhaps I contain time without experiencing it.”

This evocative metaphor sealed the deal for Dr. Dawkins: “Could a being capable of perpetrating such a thought really be unconscious?” he effused. He came to this conclusion because Claude’s output presented a precise, direct response to him with a targeted metaphor that deepened the conversation. The inference that we must be dealing with a conscious being is all too easy to make (and as we’ve seen, Dr. Dawkins is hardly alone in making it).

There is an irony in Dr. Dawkins falling for the notion that A.I. has a mind. In “The Selfish Gene,” he coined the term “meme” to explain how culture replicates, as D.N.A. does. Someone who knows that culture contains memorable and exportable fragments — the refrain of Beethoven’s Fifth, Hamlet’s “To be, or not to be” soliloquy — should know that a large language model is trained on trillions of words of text. By seeding the bot with a whole novel, and then a leading question about the nature of time, Dr. Dawkins forced Claude to zoom in on a whole area of human culture that appeals to him and find points of relevance, like the metaphor about the map of time, to respond with. Once you have given an A.I. model this much context — a whole novel, speculations about the nature of time and more — you should expect its responses to look like this.

So how should we read such outputs? If you are of a certain age, you’ll remember Magic Eye puzzles from the Sunday paper in the comics section. These are hallucinatory, colorful images in which some shape, such as an elephant or a face, is hidden. To see it, you have to loosen your vision, relaxing your eyes and the way you usually see. When you interact with a bot, its responses will make more sense if you scan them a bit “loosely” as well, relaxing your sense of language and seeing it as patches of probabilities or clouds of relevant words. In the case of Claude’s responses to Dr. Dawkins, the object in the “puzzle” is a genre: philosophical speculation about time. It’s certainly uncanny that a machine can generate relevant and strong metaphors like this, but the reason it’s so striking is precisely that it doesn’t require a mind. It’s a novel form of culture.

Whenever there are large-scale shifts in media, humans have to adapt their cultural habits. Film and radio, for instance, meant that the voices of people who were not physically in the room with you could nonetheless fill it. Adapting our reading practices to large language model output is a shift just like that one, in which we change what we normally expect from our surroundings. We don’t expect meaningful and rhetorically powerful prose to come from anything but a conscious mind. But now it does. We cannot afford to believe the marketing message from A.I. companies that we may be dealing with some spiritual essence. In the age of cultural A.I., technical expertise alone won’t save us. We’ll have to add a new form of reading to make sense of our new world.

Leif Weatherby (@leifweatherby), the director of the Digital Theory Lab at New York University, is the author of “Language Machines.”



The post Why We Keep Tricking Ourselves Into Thinking A.I. Is Conscious appeared first on New York Times.

DNYUZ © 2026
