DNYUZ

Me, Myself and My A.I. Sloppelgänger

March 13, 2026

A few days ago, an awkward sentence written by the editing service Grammarly flashed across my screen: “Could Meta be quietly leveraging this intimate information to refine ad targeting or fuel its vast business interests in unseen ways?”

The writing was clunky, the point weirdly unspecific. Grammarly had been offering paying users editing suggestions, supposedly from a handful of writers — including me. Pop a piece of prose into its service and little editing bubbles would emerge on the page from “Julia Angwin,” suggesting things like, “Lead with personal stakes to boost immediacy.” That sentence about Meta was something Grammarly apparently thought I would suggest.

Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service.

After a wave of criticism, the Superhuman C.E.O., Shishir Mehrotra, announced that the company was disabling the feature while it reimagined how to give “experts real control over how they want to be represented — or not represented at all.” In a statement to The Atlantic, Mehrotra said that the company “believes the legal claims are without merit and will strongly defend against them.”

This temporary reprieve, however, doesn’t make up for the eight months that the service was in operation, making money from all of our names without ever seeking our consent.

I guess it’s no surprise that Superhuman believed it could, in my opinion, break the law. We live in a world where A.I. companies are grabbing every bit of writing, art and music without consent. Where our president is launching wars without the consent of Congress that our Constitution requires. Where Jeffrey Epstein spent years coercing girls too young to provide consent into sexual relations.

In this global crisis of consent, we must grab hold of the few anchors we have for enforcement. The right of publicity is one of them, but it needs to be strengthened into a federal law — not just a patchwork of state laws. In some states, it applies only to advertising; in others, to all types of commercial uses. In some, it only covers celebrities; in others, it applies to everyone.

Thus far, the proposed updates to the law have been too narrow. The No Fakes Act, introduced last year by a group of senators, including Minnesota’s Amy Klobuchar, would prohibit “A.I.-generated digital replicas” of people without their consent, but would not cover the use of people’s names in text-based services like Grammarly. The Student Athlete Fairness and Enforcement (SAFE) Act, proposed by several senators including Washington’s Maria Cantwell, would prohibit the use of people’s names without their consent — but only for student athletes.

Denmark has taken a novel approach: proposing an amendment to copyright laws that would allow people to copyright their body, facial features and voice to protect against A.I. deepfakes. I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days.

The problem with all these proposed fixes, even Denmark’s, is that they rest on two flawed underlying assumptions. First, that the A.I. content would be a visual replica, and second, that it would be so good that it would be hard to distinguish from the real thing. Grammarly had done the opposite. It hadn’t created a visual replica of me. And its editing suggestions were so bad that they could destroy my reputation.

Take the vague speculation it offered — that Meta “could be fueling its vast business interests in unseen ways.” Uninformed guesswork like that might be OK for someone writing, say, a high school essay or a comment on a blog post. But it has no place in an investigative piece describing factual findings.

Even worse was the suggestion by Grammarly’s A.I. version of me to replace the first sentence of the news article with an anecdotal opening describing a fictional person named “Laura” whose privacy had been violated.

“Laura, a patient searching for relief from a chronic condition, clicks through her hospital’s website to schedule an appointment. In just a few moments, her most private medical details — her reason for visiting, her doctor’s name, and even the treatment she seeks — are quietly sent to Facebook, without her knowledge,” the bot suggested with a button allowing the user to paste that excerpt straight into the article.

Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).

And this is the problem with A.I. It doesn’t know truth from fiction. It doesn’t know an investigative news article from an offhand comment. It flattens all content into word associations.

What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.

And it must be stopped.


The post Me, Myself and My A.I. Sloppelgänger appeared first on New York Times.


DNYUZ © 2026