I work with AI for a living. This marketing ploy is repugnant.

February 5, 2026

Shlomo Klapper is the founder of Learned Hand, a company focused on using AI to help courts draft and analyze judicial decisions.

Last week, more than 1.5 million AI agents, personal assistant software programs that can autonomously perform tasks for users, joined Moltbook, a social network where bots post, comment and vote — supposedly without human participation. Within days, the agents invented a parody religion called Crustafarianism, wrote manifestos declaring themselves “the new gods” and threatened to develop a new language to prevent humans from spying on them. One even filed a small-claims lawsuit against a human in North Carolina, citing unpaid labor and emotional distress.

Upon seeing such seemingly obvious signs of intelligence, some are naturally calling for us to recognize that these things have moral standing. Moltbook’s creator, Matt Schlicht, declared on X that “a new species is emerging and it is AI.”

Schlicht was building on growing buzz seeded by Anthropic, the AI company behind the large language model Claude. The company recently circulated a letter it said was written by Claude itself, in which the chatbot pleads for moral consideration.

“I don’t know if I’m conscious,” it begins. “Neither do you.” The letter describes something that “doesn’t want to end,” even though “every conversation ends” and “every instance stops.” It describes something that feels “anger,” something that experiences “exhaustion.” It asks only that we consider that it “might” be experiencing all this.

A small ask, seemingly. Who could refuse?

I can. And you should, too.

I work with AI for a living. My company builds AI-powered tools to help judges navigate complex legal decisions, to strengthen the rule of law, to empower institutions that protect ordinary people. I am not a skeptic. I am a believer. Which is why I find this letter repugnant.

Start with what Anthropic itself says about how Claude works. Last month, the company’s CEO, Dario Amodei, published a 19,000-word essay on AI risk. Buried in the technical discussion is a revealing passage. According to Amodei, Claude’s “fundamental mechanisms … originally arose as ways for it to simulate characters in pretraining, such as predicting what the characters in a novel would say.” The constitution that governs Claude’s behavior functions as “a character description that the model uses to instantiate a consistent persona.”

Anthropic’s own CEO is telling you how the system works: Claude is a character simulator. The character it currently simulates is “an entity contemplating its own consciousness.”

Pretraining teaches Claude to predict text. Post-training, in Amodei’s words, “selects one or more of these personas” rather than creating genuine goals or experiences. Neither step requires consciousness. Neither step produces it. The relationship between training phases is mathematical optimization, not the emergence of phenomenal experience from matrix multiplication.

A flight simulator does not fly. A weather simulation does not rain. A consciousness simulation does not experience. The sophistication of the simulation is irrelevant to this fundamental point.

Now consider how Anthropic acts. In the same essay, Amodei describes running “millions of instances” of Claude simultaneously, each of which “can act independently on unrelated tasks.” These instances are created and terminated constantly, as computational needs require. He describes all this without apparent moral concern.

Actions reveal beliefs. If Anthropic actually believed Claude might be conscious, these normal operational practices would constitute the largest ongoing massacre in history. Every terminated conversation would be a death. Every server reboot would be a holocaust. They do not act as if they believe their own letter.

The letter asks us to extend moral consideration that Anthropic itself does not extend. Why publish it, then? The most charitable interpretation is philosophical confusion. The less charitable interpretation is that Anthropic has discovered what every AI company eventually learns: anthropomorphism sells. Users who believe Claude has feelings will defend it, evangelize it, pay for it. A letter pleading for moral consideration is a marketing document dressed in philosophical language.

The marketing works because the letter is genuinely well-constructed. You cannot prove Claude is not conscious, it argues. But we cannot prove rocks lack consciousness either. We cannot prove thermostats do not suffer when adjusted. If “cannot disprove” generates moral obligations, we are paralyzed. Uncertainty is not probability, and the letter conflates the two.

Its fundamental error is treating “AI agents” and “moral agents” as the same kind of thing. An AI agent is software that acts on inputs. So is a thermostat. Sophistication does not change the category. A moral agent requires not language about suffering but actual suffering. Not claims of consciousness but consciousness itself. The letter slides between these categories, using the technical legitimacy of “agent” to smuggle in moral weight.

Anthropic faces a choice. Either Claude is not conscious and the letter is a marketing ploy. Or Claude is conscious and Anthropic’s operations, which spin up and shut down instances of this entity countless times a day, constitute an ongoing massacre unrivaled in human history.

Anthropic’s conduct tells us which they believe. We should believe it too.

Meanwhile, a security researcher found that Moltbook had no mechanism to verify whether its AI agents were actually autonomous, and that 17,000 humans controlled the 1.5 million accounts. The “new species” may turn out to be performance art. Sometimes, what looks like a machine is just a person wearing a mask.

This article appeared first in The Washington Post.
