China’s DeepSeek kicked off 2026 with a new AI training method that analysts say is a ‘breakthrough’ for scaling

January 2, 2026
China’s DeepSeek has published a new AI training method that can scale models more easily and shape “the evolution of foundational models.” CFOTO/Future Publishing via Getty Images
  • China’s DeepSeek has just published a new AI training method to scale models more easily.
  • Analysts told Business Insider the approach is a “striking breakthrough.”
  • The paper comes as DeepSeek is reportedly working toward the release of R2, its next flagship model.

DeepSeek got the year rolling with a new idea for training AI. And analysts say it could have a massive impact on the industry.

The Chinese AI startup published a research paper on Wednesday describing a method for training large language models that, it said, could shape “the evolution of foundational models.”

The paper, co-authored by its founder Liang Wenfeng, introduces what DeepSeek calls “Manifold-Constrained Hyper-Connections,” or mHC, a training approach designed to scale models without them becoming unstable or breaking altogether.

As language models grow, researchers often try to improve performance by allowing different parts of a model to share more information internally. However, this increases the risk of the information becoming unstable, the paper said.

DeepSeek’s latest research enables models to share richer internal communication in a constrained manner, preserving training stability and computational efficiency even as models scale, it added.
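The paper's details aside, the general idea of parallel residual streams with constrained mixing can be illustrated with a toy sketch. This is not DeepSeek's actual mHC method, just a minimal illustration of the principle the article describes: streams share information through a mixing matrix, and projecting that matrix onto a constraint set (here, rows that sum to 1) bounds how much the signal can grow at each layer, which is the kind of stability property constrained approaches target. All function names and numbers below are hypothetical.

```python
def normalize_rows(matrix):
    """Project a mixing matrix onto the 'each row sums to 1' constraint set."""
    out = []
    for row in matrix:
        total = sum(row)
        out.append([w / total for w in row])
    return out


def mix_streams(streams, mixing):
    """Blend parallel residual streams: output stream i is a weighted
    combination of all input streams, using constrained weights."""
    n = len(streams)
    dim = len(streams[0])
    return [
        [sum(mixing[i][j] * streams[j][d] for j in range(n)) for d in range(dim)]
        for i in range(n)
    ]


# Two parallel streams of width 3, with unconstrained raw mixing weights.
streams = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
raw_mixing = [[3.0, 1.0], [2.0, 2.0]]

# Constraining the weights keeps each output a convex combination of the
# inputs, so repeated mixing cannot blow the signal up across many layers.
constrained = normalize_rows(raw_mixing)  # [[0.75, 0.25], [0.5, 0.5]]
mixed = mix_streams(streams, constrained)
```

With unconstrained weights, the first output stream would be `3*s0 + 1*s1` and grow layer over layer; with the normalized weights it stays inside the range of its inputs, which is the intuition behind trading a little expressivity for training stability.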

DeepSeek’s new method is a ‘striking breakthrough’

Wei Sun, the principal analyst for AI at Counterpoint Research, told Business Insider on Friday that the approach is a “striking breakthrough.”

DeepSeek combined various techniques to minimize the extra cost of training a model, Sun said. She added that even with a slight increase in cost, the new training method could yield much higher performance.

Sun said the paper reads as a statement of DeepSeek’s internal capabilities. By redesigning the training stack end-to-end, the company is signaling that it can pair “rapid experimentation with highly unconventional research ideas.”

DeepSeek can “once again, bypass compute bottlenecks and unlock leaps in intelligence,” she said, referring to its “Sputnik moment” in January 2025, when the company unveiled its R1 reasoning model.

The launch shook the tech industry and the US stock market, showing that the R1 model could match top competitors, such as OpenAI’s o1, at a fraction of the cost.

Lian Jye Su, the chief analyst at Omdia, a technology research and consulting firm, told Business Insider on Friday that the published research could have a ripple effect across the industry, with rival AI labs developing their own versions of the approach.

“The willingness to share important findings with the industry while continuing to deliver unique value through new models showcases a newfound confidence in the Chinese AI industry,” Su said of DeepSeek’s paper. Openness is embraced as “a strategic advantage and key differentiator,” he added.

Is the next DeepSeek model on the horizon?

The paper comes as DeepSeek is reportedly working toward the release of its next flagship model, R2, following an earlier postponement.

R2, which had been expected in mid-2025, was delayed after Liang expressed dissatisfaction with the model’s performance, according to a June report by The Information. The report said the launch was also complicated by shortages of advanced AI chips, a constraint that has increasingly shaped how Chinese labs train and deploy frontier models.

While the paper does not mention R2, its timing has raised eyebrows. DeepSeek previously published foundational training research ahead of its R1 model launch.

Su said DeepSeek’s track record suggests the new architecture will “definitely be implemented in their new model.”

Sun, on the other hand, is more cautious. “There is most likely no standalone R2 coming,” Sun said. Since DeepSeek has already integrated earlier R1 updates in its V3 model, the technique could form the backbone of DeepSeek’s V4 model, she added.

Business Insider’s Alistair Barr wrote in June that DeepSeek’s updates to its R1 model failed to generate much traction in the tech industry. Barr argued that distribution matters, and DeepSeek still lacks the broad reach enjoyed by leading AI labs — such as OpenAI and Google — particularly in Western markets.

Read the original article on Business Insider
