DNYUZ
How AI Is Creeping Into ‘The New York Times’

March 25, 2026

On Sunday, a writer named Becky Tuch posted an excerpt on X from a months-old New York Times “Modern Love” column that had given her pause. “I don’t want to falsely accuse writers” of using AI, she wrote. “But this reads EXACTLY like AI slop.” The excerpt—from an essay by a mother who had lost custody of her son—described the son’s feelings, at one point, toward his mother: “Not hate. Not anger. Just the flat finality of a heart too tired to keep trying.”

Among the 100-plus replies to Tuch’s post was one by an AI researcher, Tuhin Chakrabarty. He’d run the snippet from “Modern Love” through an AI-detection tool from the start-up Pangram Labs, which flagged it as likely having been AI-generated.

I learned about the incident from Chakrabarty, a computer-science professor at Stony Brook University. I’d previously written about his efforts to quantify the proliferation of AI in novels self-published on Amazon. After commenting on Tuch’s post, he plugged the whole column into the Pangram AI detector. The program estimated that more than 60 percent of it was AI-generated. I ran the column through four other AI-detection tools: Two of them flagged 30 percent of the work as likely AI-generated, one found no AI, and one suspected AI but offered no percentage.

Kate Gilgan, the author of the column, told me that she hadn’t copied and pasted language from an AI model into her work. “However, I did utilize AI as a tool,” she added, seeking “inspiration and guidance and correction.” She said she’d prompted various products (including ChatGPT, Claude, Copilot, Gemini, and Perplexity) to help her stay on topic in a paragraph, for example, or stick to a theme. “I used AI as a collaborative editor and not as a content generator,” she said. In response to questions about the column, a New York Times spokesperson noted that the paper’s contracts require freelancers to abide by its ethical-journalism handbook, which mandates that AI use “adhere to established journalistic standards and editing processes” and that “substantial use of generative A.I.” be clearly disclosed to readers. Asked for comment on whether Gilgan’s AI use rose to the level requiring disclosure, the spokesperson said in an email: “Journalism at The Times is inherently a human endeavor. That will not change. As technology evolves, we are consistently assessing best practices for our newsroom.”

Whatever the extent of Gilgan’s dependence on AI—detection tools are imperfect—her acknowledgment is the latest evidence of a phenomenon that people have been whispering about online for a long time: Artificial intelligence has already infiltrated prestigious media outlets and publishing houses. Last week, Hachette made national headlines when it decided to cancel the publication of a novel, Shy Girl, that appeared to include AI-generated text, which readers had identified ahead of its American release. (The novel had previously been published in the United Kingdom and is now being discontinued there. The author told the Times that she had not used AI to write Shy Girl, but that an acquaintance who’d edited an earlier version of the novel had done so.) Last spring, the Chicago Sun-Times and The Philadelphia Inquirer were caught publishing a syndicated summer-reading guide featuring nonexistent novels; a freelancer had made it using ChatGPT. Besides those high-profile incidents, people have been posting for months about suspicions of AI turning up, undisclosed, in major news publications—far beyond personal essays or puffy summer features.

[Read: At least two newspapers syndicated AI garbage]

A note of caution: One challenge with AI detection is that the tools involved, much like the models they analyze, are still evolving. Sometimes they flag false positives or fail to catch AI-generated material. Pangram’s CEO, Max Spero, acknowledged that both happen. He also warned that the percentage of AI material in a text is difficult to estimate with certainty; an article riddled with AI tells could be flagged as fully AI-generated even if it also includes some human-written text. Different detection tools give varying results.

Jenna Russell, a doctoral candidate in computer science at the University of Maryland, has been following various social-media firestorms. Often, someone will paste a screenshot from a work that they suspect contains AI material, a commenter will run it through an AI detector and post the results, others will pile on to express outrage, and then everyone will just move on. Wondering how common AI use really was, Russell and six other researchers set Pangram on thousands of articles, and found that it flagged likely AI use across the U.S. press—including in the opinion sections of The New York Times, The Wall Street Journal, and The Washington Post—suggesting that writers are turning to AI more than their readers might believe. (Although the researchers focused on opinion articles in the big publications, they also studied a small number of their news stories; among those, far fewer were flagged for AI-like language.) In October, Russell and her colleagues published a preprint of their research, which is not yet peer-reviewed; several Pangram researchers, including Spero, are co-authors.

All three of those national newspapers have posted information about their AI policies, noting that they permit some use but prioritize being transparent about it. A spokesperson for the Journal’s parent company, Dow Jones, declined to comment for this article. (I’m a former Journal reporter and have also written and edited for the Times on a freelance basis.) In response to questions about its stories, a spokesperson for the Post said, “Our editing process includes working to establish the authenticity of everything we publish.” (The Post also creates AI-generated podcasts, so it isn’t entirely clear what its definition of authenticity is.)

The Post tested three articles I asked about and told me that it had found lower AI likelihood through Pangram than the researchers did; one ranked as “fully human written.” Other detection tools suspected even less AI use in most cases. Spero told me that the current iteration of Pangram, which the Post used, was designed to be more conservative than the previous version (used in the researchers’ study) in flagging material as AI-generated, partly for fear of spreading false accusations. But he also said that when he and Russell reran their data set of opinion articles through the current version, the underlying assessments were similar to those in the earlier iteration, including with regard to the Post. (Chakrabarty checked the “Modern Love” column with the current version of Pangram.)

Regardless of the exact numbers, the fact remains: Some of the most trusted publications in the United States have been publishing opinions—under real people’s names—that appear to include text generated with AI models. As AI slop has become a fixture of all kinds of online spaces—our internet searches, our social-media feeds, our online bookstores—major newspapers have been seen by many as a protected space, in which AI-generated content would rarely (or never) appear undisclosed. The newspapers that have survived the onslaught of the internet have benefited from the shared assumption that they can be trusted. The stakes of a broken social contract could not be higher, and they go far beyond the risk of a smooth-brained writing style.

[Read: How to guess if your job will exist in five years]

When opinion articles or personal essays are published in major papers—sometimes with big names attached to them—they can influence societal beliefs and, in turn, the policies of governments or corporations. It has seemed fair to assume, historically, that those opinions reflect the voices and beliefs of the individuals whose names are attached to them. But AI language is something else entirely. Research has found that AI output is much more homogeneous than human language. Major AI companies have also acknowledged that their models can be skewed—for example, toward certain cultural and political beliefs. Analyses of the Grok chatbot have found that its language often mimics that of the man behind its development, Elon Musk.

Multiple studies, including those from AI companies themselves, also demonstrate that AI output is unusually persuasive, to the point of getting people to change their minds about political issues or candidates. A world where some self-published romance novels include synthetic turns of phrase and plot points is upsetting. One where AI models’ language and perspectives creep, undisclosed, into the pages of major newspapers—and therefore into public life—is terrifying.

The good news is that we can do something about this. Publications can design clear policies about AI use and disclosure and require that staffers and freelancers abide by them, including by explicitly listing the requirements in contracts. This isn’t a stretch: Many contracts require, for example, that contributors promise not to plagiarize. (The Atlantic requires contributors to attest to being “the sole author” of their article, and forbids AI-generated writing or imagery without approval and disclosure.)

In addition, editors could receive training in identifying AI tells by sight; they could also use detection products. Then they could follow up with writers whose work raises questions (while avoiding jumping to conclusions based only on an editor’s suspicions or a software scan). Those who violate a publication’s policies could face legal or other penalties; as with plagiarizing, using AI without disclosing it would incur significant social and professional costs. Governments, too, could enact policies to rein in failures of disclosure: Legislators could legally require it in certain contexts, for example, though enforcement would surely raise free-speech challenges.

Another remedy could be for major AI companies to take some responsibility for the problem by “watermarking” their products’ output, making it easier to spot. The Journal reported in 2024 that OpenAI had built a tool that could detect AI text with up to 99.9 percent certainty, but hadn’t released it; one apparent factor, according to the Journal, was a survey in which some users “said they would use ChatGPT less if it deployed watermarks and a rival didn’t.” Asked for comment, an OpenAI spokesperson shared a blog post pointing out other obstacles; “bad actors” could circumvent it, for example. When I asked Chakrabarty about watermarking, he noted the technical difficulties but also raised a more existential question: “Why would Anthropic or OpenAI do it, when the whole business model is based on convincing people AI language is humanlike?”
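To give a sense of what text watermarking involves: published academic proposals typically work by having the model subtly favor a pseudo-random “green” subset of words, then checking whether a suspect text uses green words more often than chance. The sketch below is a toy illustration of that statistical idea only; it is not OpenAI’s unreleased tool, whose design has not been disclosed, and every function name here is invented for the example.

```python
import hashlib
import math

# Toy "green list" watermark detector, in the spirit of published
# statistical watermarking proposals. Illustration only.

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign each (context, token) pair to the
    'green' half of the vocabulary, seeded by a hash of the context."""
    h = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return h[0] % 2 == 0  # roughly half of tokens are green for any context

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that land on the green list given their
    predecessor. Ordinary human text hovers near 0.5; a watermarking
    model that nudges generation toward green tokens pushes this
    fraction measurably higher."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

def z_score(frac: float, n: int, p: float = 0.5) -> float:
    """Standard deviations by which the observed green fraction exceeds
    chance; a large z-score over a long text suggests watermarked output."""
    return (frac - p) * math.sqrt(n) / math.sqrt(p * (1 - p))
```

The statistical nature of the check is also why short excerpts are hard to judge and why paraphrasing by “bad actors” can wash the signal out: the evidence accumulates only over many tokens.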

The post How AI Is Creeping Into ‘The New York Times’ appeared first on The Atlantic.


DNYUZ © 2026