DNYUZ
Anthropic says DeepSeek and other Chinese AI companies fraudulently used Claude

February 23, 2026

Claude logo (Joel Saget / AFP via Getty Images)
  • Anthropic said three of the biggest Chinese AI labs have “illicitly” used Claude to train their models.
  • Anthropic said DeepSeek, MiniMax, and Moonshot AI orchestrated their own “industrial-scale campaigns.”
  • The actions, Anthropic said, show why chip controls are needed.

Anthropic says its Chinese competitors are stealing from the AI startup to gain an edge in the global AI race.

On Monday, Anthropic said three of China’s biggest AI labs — no, rather: Anthropic said three of China’s biggest AI labs, DeepSeek, MiniMax, and Moonshot AI, were “illicitly” using Claude “to improve their own models” through a process known as distillation.

“These campaigns are growing in intensity and sophistication,” Anthropic said as part of its lengthy statement on Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

Anthropic said the distillation efforts were “industrial-scale campaigns” that included roughly 24,000 fraudulent Claude accounts that generated over 16 million exchanges “in violation of our terms of service and regional access restrictions.”

Distillation is the process of training a less powerful model on the outputs of a more powerful one. Many US companies legitimately use the practice to train their own models for public release. Increasingly, though, major US labs say their Chinese competitors are misusing it to copy their work.
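The mechanics can be shown in a toy numeric sketch: a weaker "student" model is trained to match the output distributions of a stronger "teacher." This is illustrative only — real LLM distillation trains on sampled text at vast scale, not on toy logits like these.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed, more capable model (here just a frozen weight matrix).
teacher_W = rng.normal(size=(8, 4))

# "Student": a weaker model we train purely on the teacher's outputs.
student_W = np.zeros((8, 4))

X = rng.normal(size=(64, 8))            # queries sent to the teacher
teacher_probs = softmax(X @ teacher_W)  # teacher's answer distributions

def distill_loss(W):
    # cross-entropy of student predictions against the teacher's soft labels
    p = softmax(X @ W)
    return -np.mean(np.sum(teacher_probs * np.log(p + 1e-12), axis=1))

lr = 0.5
losses = []
for _ in range(200):
    p = softmax(X @ student_W)
    grad = X.T @ (p - teacher_probs) / len(X)  # gradient of the CE loss
    student_W -= lr * grad
    losses.append(distill_loss(student_W))
```

The student never sees ground-truth labels — only the teacher's responses — which is why the technique transfers capability "in a fraction of the time, and at a fraction of the cost" of independent development.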

In January 2025, OpenAI said DeepSeek may have “inappropriately” used OpenAI’s outputs to train their models. Earlier this month, Google disclosed it had “identified an increase in model extraction attempts or ‘distillation attacks.’”

“Competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently,” Anthropic said on Monday.

Anthropic disclosed remarkable detail about the extent to which DeepSeek, MiniMax, and Moonshot AI “illicitly” used its systems. Claude is not available for commercial access in China, though Anthropic said the rival labs found workarounds.

Among the notable findings, Anthropic said DeepSeek sought to create “censorship-safe alternatives to policy-sensitive queries.” The company also said it detected MiniMax’s campaign “while it was still active,” giving it an in-depth look at what its competitor was doing.

“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” Anthropic said.

Representatives for DeepSeek, MiniMax, and Moonshot AI did not immediately respond to Business Insider’s request for comment.

Beyond giving rivals an unearned edge in the AI race, Anthropic said, improper distillation poses security risks because distilled models may lack proper safeguards, such as those meant to prevent the development of bioweapons.

In response to such distillation campaigns, Anthropic said it has built “behavioral fingerprinting systems,” shares data with other AI companies on what to look out for, and continues to develop additional countermeasures.

Anthropic CEO Dario Amodei recently wrote that leading models are approaching the point where, without proper safeguards, they could help direct someone in building a bioweapon.

Amodei is also an outspoken advocate of US export controls, a topic that divides some leading tech CEOs. Nvidia CEO Jensen Huang has repeatedly said that restricting US companies, including his own, from selling advanced chips to China won’t curb China’s AI advancements.

“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” Anthropic said.

Anthropic has also faced allegations of using copyrighted material to train its models. In January, the Washington Post reported new details about an endeavor at the company called Project Panama, which the company reportedly described as “our effort to destructively scan all the books in the world.” Last year, Anthropic settled a class-action lawsuit brought by the authors and publishers of some of the books for $1.5 billion. As part of the settlement, the company didn’t admit any wrongdoing.

Read the original article on Business Insider

