DNYUZ
Anthropic says DeepSeek and other Chinese AI companies fraudulently used Claude

February 23, 2026
Claude logo. Joel Saget / AFP via Getty Images
  • Anthropic said three of the biggest Chinese AI labs have “illicitly” used Claude to train their models.
  • Anthropic said DeepSeek, MiniMax, and Moonshot AI orchestrated their own “industrial-scale campaigns.”
  • The actions, Anthropic said, show why chip controls are needed.

Anthropic says its Chinese competitors are stealing from the AI startup to gain an edge in the global AI race.

On Monday, Anthropic said three of China’s biggest AI labs, DeepSeek, MiniMax, and Moonshot AI, were “illicitly” using Claude “to improve their own models” through a process known as distillation.

“These campaigns are growing in intensity and sophistication,” Anthropic said as part of its lengthy statement on Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

Anthropic said the distillation efforts were “industrial-scale campaigns” that included roughly 24,000 fraudulent Claude accounts that generated over 16 million exchanges “in violation of our terms of service and regional access restrictions.”

Distillation is the process of training a smaller or less powerful model on the outputs of a more powerful one. It is a legitimate technique that many US companies use to train their own models for public release. Increasingly, though, major US companies say their Chinese competitors are using the practice improperly to steal their work.
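As a rough illustration of the idea (a minimal sketch, not Anthropic's or any lab's actual training code; the function names are mine), the core distillation objective is often a KL divergence that pushes a student model's output distribution toward a teacher's temperature-softened "soft labels":

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions: the distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# A student whose logits already match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss that training would minimize.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))           # 0.0
print(distillation_loss([0.1, 1.0, 2.0], teacher))   # positive
```

In practice this happens at vast scale, with the teacher's responses to millions of prompts serving as training data, which is why the 16 million exchanges Anthropic describes would be valuable to a competitor.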

In January 2025, OpenAI said DeepSeek may have “inappropriately” used OpenAI’s outputs to train its models. Earlier this month, Google disclosed it had “identified an increase in model extraction attempts or ‘distillation attacks.’”

“Competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently,” Anthropic said on Monday.

Anthropic disclosed remarkable detail about the extent to which DeepSeek, MiniMax, and Moonshot AI “illicitly” used its systems. Claude is not available for commercial access in China, though Anthropic said the rival labs found workarounds.

Among the notable findings, Anthropic said DeepSeek sought to create “censorship-safe alternatives to policy-sensitive queries.” The company also said it detected MiniMax’s campaign “while it was still active,” giving it an in-depth look at what its competitor was doing.

“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” Anthropic said.

Representatives for DeepSeek, MiniMax, and Moonshot AI did not immediately respond to Business Insider’s request for comment.

Beyond cheating in the AI race, Anthropic said improper distillation poses security risks, because distilled models may lack the proper safeguards, such as those meant to prevent the development of bioweapons.

In response to such distillation campaigns, Anthropic said it has built “behavioral fingerprinting systems,” shares data with other AI companies on what to look out for, and continues to develop additional countermeasures.

Anthropic CEO Dario Amodei recently wrote that leading models are approaching the point where, without proper safeguards, they could help direct someone in building a bioweapon.

Amodei is also an outspoken advocate of US export controls, a topic that divides some leading tech CEOs. Nvidia CEO Jensen Huang has repeatedly said that restricting US companies, including his own, from selling advanced chips to China won’t curb China’s AI advancements.

“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” Anthropic said.

Anthropic has also faced allegations of using copyrighted material to train its models. In January, the Washington Post reported new details about an endeavor at the company called Project Panama, which the company reportedly described as “our effort to destructively scan all the books in the world.” Last year, Anthropic settled a class-action lawsuit brought by the authors and publishers of some of the books for $1.5 billion. As part of the settlement, the company didn’t admit any wrongdoing.

Read the original article on Business Insider



DNYUZ © 2026