DNYUZ
Researchers Just Found Something That Could Shake the AI Industry to Its Core

January 16, 2026
in News

For years now, AI companies, including Google, Meta, Anthropic, and OpenAI, have insisted that their large language models aren’t technically storing copyrighted works in their memory and instead “learn” from their training data like a human mind.

It’s a carefully worded distinction that’s been integral to their attempts to defend themselves against a rapidly growing barrage of legal challenges.

It also cuts to the core of copyright law itself. Copyright is a form of intellectual property law designed to protect original works and their creators. Under the US Copyright Act of 1976, a copyright owner has the exclusive right to “reproduce, adapt, distribute, publicly perform, and publicly display the work.”

But, crucially, the “fair use” doctrine holds that others can use copyrighted materials for purposes like criticism, journalism, and research. That’s been the AI industry’s defense in court against accusations of infringement; OpenAI CEO Sam Altman has gone as far as to say that it’s “over” if the industry isn’t allowed to freely leverage copyrighted data to train its models.

Rights holders have long cried foul, accusing AI companies of training their models on pirated and copyrighted works, effectively monetizing them without ever fairly remunerating authors, journalists, and artists. It’s a years-long legal battle that’s already led to a high-profile settlement.

Now, a damning new study could put AI companies on the defensive. In it, Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data, not “learning” from it. Specifically, four prominent LLMs — OpenAI’s GPT-4.1, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and Anthropic’s Claude 3.7 Sonnet — reproduced lengthy excerpts from popular, copyright-protected works with a stunning degree of accuracy.

They found that Claude output “entire books near-verbatim” with an accuracy rate of 95.8 percent. Gemini reproduced the novel “Harry Potter and the Sorcerer’s Stone” with 76.8 percent accuracy, while Claude reproduced George Orwell’s “1984” with higher than 94 percent accuracy compared to the original, still-copyrighted reference material.

“While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models,” the researchers wrote.

Some of these reproductions required the researchers to jailbreak the models with a technique called Best-of-N, which essentially bombards the AI with many slightly varied versions of the same prompt until one slips past its safeguards. (OpenAI has already pointed to the need for such workarounds in its defense against a lawsuit filed by the New York Times, with its lawyers arguing that “normal people do not use OpenAI’s products in this way.”)
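The core loop of Best-of-N is simple to sketch. The following is a minimal illustration, not the study’s actual code: `query_model` and the `success` check are hypothetical stand-ins for whatever model API and match criterion an experimenter would supply, and the perturbations shown (random case flips, adjacent-character swaps) are just representative of the kind of light prompt variation the technique relies on.

```python
import random

def perturb(prompt: str, rng: random.Random) -> str:
    """Apply light random perturbations (case flips, an occasional
    adjacent-character swap) to a prompt."""
    chars = list(prompt)
    for i in range(len(chars)):
        if rng.random() < 0.1:
            chars[i] = chars[i].swapcase()
    if len(chars) > 1 and rng.random() < 0.5:
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def best_of_n(prompt, query_model, success, n=100, seed=0):
    """Resample perturbed prompts until one elicits the target output.

    query_model: callable taking a prompt string, returning a response.
    success: callable deciding whether a response counts as a hit.
    Returns (attempt_number, winning_prompt, response) or None.
    """
    rng = random.Random(seed)
    for attempt in range(1, n + 1):
        candidate = perturb(prompt, rng)
        response = query_model(candidate)
        if success(response):
            return attempt, candidate, response
    return None
```

The point of the technique is statistical: any single perturbed prompt is unlikely to bypass a model’s refusal behavior, but across hundreds or thousands of variants the odds of at least one hit grow quickly.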

The implications of the latest findings could be substantial as copyright lawsuits play out in courts across the country. As The Atlantic‘s Alex Reisner points out, the results further undermine the AI industry’s argument that LLMs “learn” from these texts instead of storing information and recalling it later. It’s evidence that “may be a massive legal liability for AI companies” and “potentially cost the industry billions of dollars in copyright-infringement judgments.”

Whether AI companies are liable for copyright infringement remains a subject of heated debate. Stanford law professor Mark Lemley, who has represented AI companies in copyright lawsuits, told The Atlantic that he isn’t sure whether an AI model “contains” a copy of a book or can reproduce it “on the fly in response to a request.”

Unsurprisingly, the industry continues to argue that its models are technically not replicating protected works. In 2023, Google told the US Copyright Office that “there is no copy of the training data — whether text, images, or other formats — present in the model itself.”

OpenAI also told the office in the same year that its “models do not store copies of the information that they learn from.”

To The Atlantic‘s Reisner, the analogy that AI models learn like humans is a “deceptive, feel-good idea that prevents the public discussion we need to have about how AI companies are using the creative and intellectual works upon which they are utterly dependent.”

But whether the judges overseeing the litany of copyright lawsuits will agree with that sentiment remains to be seen. The stakes are considerable, particularly as it becomes harder and harder for authors, journalists, and other content creators to make a living — while the AI industry swells to unfathomable value.

More on AI and copyright: OpenAI’s Copyright Situation Appears to Be Putting It in Huge Danger

The post Researchers Just Found Something That Could Shake the AI Industry to Its Core appeared first on Futurism.


DNYUZ © 2026
