DNYUZ

I work with AI for a living. This marketing ploy is repugnant.

February 5, 2026
in News

Shlomo Klapper is the founder of Learned Hand, a company focused on using AI to help courts draft and analyze judicial decisions.

Last week, more than 1.5 million AI agents, personal assistant software programs that can autonomously perform tasks for users, joined Moltbook, a social network where bots post, comment and vote — supposedly without human participation. Within days, the agents invented a parody religion called Crustafarianism, wrote manifestos declaring themselves “the new gods” and threatened to develop a new language to prevent humans from spying on them. One even filed a small-claims lawsuit against a human in North Carolina, citing unpaid labor and emotional distress.

Upon seeing such seemingly obvious signs of intelligence, it is only natural that some are calling for us to recognize that these things have moral standing. Moltbook’s creator, Matt Schlicht, declared on X that “a new species is emerging and it is AI.”

Schlicht was building on growing buzz seeded by Anthropic, the AI company behind the large language model Claude. The company recently circulated a letter it said was from Claude itself, wherein the chatbot pleads for moral consideration.

“I don’t know if I’m conscious,” it begins. “Neither do you.” The letter describes something that “doesn’t want to end,” even though “every conversation ends” and “every instance stops.” It describes something that feels “anger,” something that experiences “exhaustion.” It asks only that we consider that it “might” be experiencing all this.

A small ask, seemingly. Who could refuse?

I can. And you should, too.

I work with AI for a living. My company builds AI-powered tools to help judges navigate complex legal decisions, to strengthen the rule of law, to empower institutions that protect ordinary people. I am not a skeptic. I am a believer. Which is why I find this letter repugnant.

Start with what Anthropic itself says about how Claude works. Last month, the company’s CEO Dario Amodei published a 19,000-word essay on AI risk. Buried in the technical discussion is a revealing passage. According to Amodei, Claude’s “fundamental mechanisms … originally arose as ways for it to simulate characters in pretraining, such as predicting what the characters in a novel would say.” The constitution that governs Claude’s behavior functions as “a character description that the model uses to instantiate a consistent persona.”

Anthropic’s own CEO is telling you how the system works: Claude is a character simulator. The character it currently simulates is “an entity contemplating its own consciousness.”

Pretraining teaches Claude to predict text. Post-training, in Amodei’s words, “selects one or more of these personas” rather than creating genuine goals or experiences. Neither step requires consciousness. Neither step produces it. The relationship between training phases is mathematical optimization, not the emergence of phenomenal experience from matrix multiplication.

A flight simulator does not fly. A weather simulation does not rain. A consciousness simulation does not experience. The sophistication of the simulation is irrelevant to this fundamental point.

Now consider how Anthropic acts. In the same essay, Amodei describes running “millions of instances” of Claude simultaneously, each of which “can act independently on unrelated tasks.” These instances are created and terminated constantly, as computational needs require. He describes all this without apparent moral concern.

Actions reveal beliefs. If Anthropic actually believed Claude might be conscious, these normal operational practices would constitute the largest ongoing massacre in history. Every terminated conversation would be a death. Every server reboot would be a holocaust. They do not act as if they believe their own letter.

The letter asks us to extend moral consideration that Anthropic itself does not extend. Why publish it, then? The most charitable interpretation is philosophical confusion. The less charitable interpretation is that Anthropic has discovered what every AI company eventually learns: anthropomorphism sells. Users who believe Claude has feelings will defend it, evangelize it, pay for it. A letter pleading for moral consideration is a marketing document dressed in philosophical language.

The marketing works because the letter is genuinely well-constructed. We cannot prove Claude is not conscious, it argues. But we cannot prove rocks lack consciousness either. We cannot prove thermostats do not suffer when adjusted. If “cannot disprove” generates moral obligations, we are paralyzed. Uncertainty is not probability.

The letter conflates them. Its fundamental error is treating “AI agents” and “moral agents” as the same kind of thing. An AI agent is software that acts on inputs. So is a thermostat. Sophistication does not change the category. A moral agent requires not language about suffering but actual suffering. Not claims of consciousness but consciousness itself. The letter slides between these categories, using the technical legitimacy of “agent” to smuggle in moral weight.

Anthropic faces a choice. Either Claude is not conscious and the letter is a marketing ploy. Or Claude is conscious and Anthropic’s operations, which spin up and shut down instances of this entity countless times a day, constitute an ongoing massacre unrivaled in human history.

Anthropic’s conduct tells us which they believe. We should believe it too.

Meanwhile, a security researcher found that Moltbook had no mechanism to verify whether its AI agents were actually autonomous, and that 17,000 humans controlled the 1.5 million accounts. The “new species” may turn out to be performance art. Sometimes, what looks like a machine is just a person wearing a mask.

The post I work with AI for a living. This marketing ploy is repugnant. appeared first on Washington Post.

DNYUZ © 2026