DNYUZ
Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era

April 22, 2026

Sigrid Jin was waiting to board a plane when he saw the stunning news that the artificial intelligence start-up Anthropic had accidentally leaked the source code for Claude Code, its popular A.I. agent. Mr. Jin, 25, an undergraduate student, scrambled to post a copy online. His worried girlfriend quickly texted him: Was he violating copyright law?

Mr. Jin turned to a team of A.I. assistants for a solution. He directed them to rewrite the leaked code in another programming language, then shared that version online. Within hours, more than 100,000 people had liked or linked to it.

Anthropic, one of the leading A.I. companies alongside OpenAI, has said the leak was caused by human error and, citing copyright violations, demanded that GitHub, an online library of computer code, remove posts sharing the code. Thousands of posts were taken down. But Mr. Jin’s version remains online. He said Anthropic had not asked him to take it down.

It is unclear whether Anthropic, which did not respond to questions from The New York Times, is drawing a distinction with the rewritten code. Mr. Jin said he believed rewriting the code transformed it into a new work, one that Anthropic could not claim ownership over.

He said he was driven less by money or fame than by a desire to make a broader philosophical point. What is the value of copyrighted intellectual property in an era when A.I. can easily replicate not just computer code but art, music and literature in minutes?

“I just wanted to raise some ethical questions in the A.I. agent era,” he said. “Any creative work can be reproduced in a second.”

Computer code has long been treated as a protected creative work, akin to music or art. But enforcing copyright has been difficult, because a program’s underlying instructions can be copied or tweaked in ways that are hard to trace. Even what counts as protected has been up for debate. Google and Oracle waged a legal battle for years, arguing over where to draw the line between creative expression and the basic functions of software.

Now, a new technology is making that even more complicated.

When the Anthropic leak surfaced online, Mr. Jin and his friends were already treating A.I. assistant tools like Claude Code and OpenClaw as employees that handle daily tasks. These agents don’t just answer questions; they carry out tasks on their own once prompted with a goal, such as “organize my receipts” or “make a new social media post.”

The agents also make copying and imitation easier than ever and on a far greater scale.

For many software companies, as well as authors, artists and musicians, the risk is not just direct copying. It’s that the market for their work could be flooded with A.I.-generated substitutes that cost almost nothing to produce, diminishing the value of the original work.

“What happened with the Claude Code leak is essentially a preview of what’s coming for every creative industry,” said Russ Pearlman, a lawyer specializing in A.I. and technology and chief information officer of Dallas College. Existing copyright rules, he said, were built on the assumption that copying takes time and that there’s a meaningful window to take action to protect a work.

“When an A.I. agent can rewrite 512,000 lines of code into a different language before most people have finished their morning coffee, that assumption collapses,” he said.

In 2022, the United States Copyright Office said works created entirely by A.I. without human creative input are not eligible for copyright protection. A follow-up review reaffirmed that decision, finding that a simple human prompt was not enough. But courts have yet to decide how much human involvement is required.

“Artists and musicians are extremely concerned about this,” said Yelena Ambartsumian, the founder of Ambart Law, a firm in New York that counsels start-ups about A.I., intellectual property and other matters. “All of the resources you put into being able to protect your copyrightable human expression, does it really matter if in a second or two hours that expression can be copied and then changed?”

Many popular A.I. models were trained to produce humanlike prose by ingesting vast swaths of material posted online. Artists, authors and media companies have said A.I. firms have infringed their copyrights by using their work to train their systems.

Last year, Anthropic agreed to pay $1.5 billion to a group of authors and publishers in the largest settlement in the history of U.S. copyright cases, after a judge ruled it had illegally downloaded and stored millions of copyrighted books. Anthropic has argued that, rather than replicating a creator’s exact work, its systems analyze the underlying patterns in that work to build something new.

(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

“The library of everything that has been written has already been fed into A.I.,” said Kandis Koustenis, a lawyer who specializes in intellectual property at Bean, Kinney & Korman, a firm in Virginia. “From the author’s point of view, the genie is out of the bottle a little bit.”

Since the advent of the personal computer, tech companies have devised ways to recreate software that is similar to rivals’ without violating copyright, including techniques that insulate programmers from directly copying the original code.

Mr. Jin argued that he had used a comparable approach to rewrite the Anthropic code, using A.I. agents rather than human programmers.

That distinction has not been tested in court.

While some A.I. companies, including Anthropic, closely guard the inner workings of their systems, others have embraced open source, based on the idea that transparency makes A.I. safer and accelerates innovation.

As agents make it easy to replicate such work with minimal human input, creativity is becoming more valuable, Mr. Jin said. His goal was not to create something new, but to highlight how few truly novel ideas remain.

“We are now relying on models that are relying on ideas that come out of other people’s heads,” Mr. Jin said. “It is becoming difficult to have novelty.”

Meaghan Tobin covers business and tech stories in Asia with a focus on China and is based in Taipei.

The post Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era appeared first on New York Times.

DNYUZ © 2026
