DNYUZ
No Result
View All Result
DNYUZ
No Result
View All Result
DNYUZ
Home News

U.S. Withholds Support From Major International AI Safety Report

February 3, 2026
in News
U.S. Withholds Support From Major International AI Safety Report

Artificial intelligence is improving faster than many experts anticipated, and the evidence for several risks has “grown substantially.” Current risk management techniques, meanwhile, are “improving but insufficient.” Those are findings of the second International AI Safety Report, published Tuesday, ahead of the AI Impact Summit scheduled to take place in Delhi from Feb. 19 to 20.

[time-brightcove not-tgx=”true”]

Guided by 100 experts and backed by 30 countries and international organizations including the United Kingdom, China, and the European Union, the report is meant to set an example of “working together to navigate shared challenges.” But unlike last year, the United States declined to throw its weight behind it, the report’s chair, Turing Award-winning scientist Yoshua Bengio confirmed.

As AI’s risks begin to materialize, the home of leading AI developers has walked away from international efforts to understand and mitigate them. The move is largely symbolic, and the report does not hinge on the U.S.’s support. But when it comes to understanding AI, “the greater the consensus around the world, the better,” Bengio says.

Whether the U.S. balked at the report’s content, or is simply retreating from international agreements—it exited the Paris climate agreement and World Health Organization in January—remains unclear. Bengio says the U.S. provided feedback on earlier versions of the report but declined to sign the final version.

The U.S. Department of Commerce, which was named on the 2025 International AI Safety Report, did not respond for comment on the decision.

What The Report Says

“Over the past year, the capabilities of general-purpose AI models and systems have continued to improve,” the report reads. Capabilities have progressed so rapidly that in the year between the first and second report, the authors published two interim updates in response to major changes. That cuts against the steady drumbeat of headlines suggesting AI has plateaued. The scientific evidence shows “no slowdown of advances over the last year,” Bengio says.

Why does it feel to many like progress has slowed? One hint is what researchers call the “jaggedness” of AI performance. These models can reach gold-medal standard on International Mathematical Olympiad questions while sometimes failing to count the number of r’s in “strawberry.” That jaggedness makes AI’s capabilities hard to assess, and direct human comparisons—like the popular “intern” analogy—misleading.

There is no guarantee that the current rate of progress will continue, though the report notes that trends are consistent with continued improvement through 2030. If today’s pace holds until then, experts predict AI will be able to complete well-scoped software engineering tasks that would take human engineers multiple days. But the report also raises the more striking possibility that progress could accelerate if AI substantially assists in its development, producing systems as capable as or more capable than humans across most dimensions

That’s likely to excite investors, but worrisome for those who fear society is failing to adequately adapt to the emerging risks at the current pace. Even Google DeepMind’s CEO Demis Hassabis said in Davos in January that he believes it would be “better for the world” if progress slows.

“A wise strategy, whether you’re in government or in business, is to prepare for all the plausible scenarios,” Bengio says. That means mitigating risks, even in the face of uncertainty.

Maturing Understanding of Risks

Policymakers who want to listen to scientists when it comes to AI risk face a problem: the scientists disagree. Bengio and fellow AI pioneer Geoffrey Hinton have warned since ChatGPT’s launch that AI could pose an existential threat to humanity. Meanwhile Yann LeCun, the third of AI’s so-called “godfathers,” has called such concerns “complete B.S.”

But the report suggests the ground is firming. While some questions remain divisive, “there is a high degree of convergence” on the core findings, the report notes. AI systems now match or exceed expert performance on benchmarks relevant to biological weapons development, such as troubleshooting virology lab protocols. There is strong evidence that criminal groups and state-sponsored attackers are actively using AI in cyber operations.

Continuing to measure those risks will face challenges going forward as AI models increasingly learn to game safety tests, the report says.”We’re seeing AIs whose behavior, when they are tested, […] is different from when they are being used,” Bengio says, adding that by studying models’ chains-of-thought—the intermediate steps it took before arriving at an answer—researchers have identified this difference is “not a coincidence.” AIs are acting dumb or on their best behavior in ways that “significantly hamper our ability to correctly estimate risks,” Bengio says.

Rather than propose a single fix, the report recommends stacking multiple safety measures—testing before release, monitoring after, tracking incidents—so that what slips through one layer gets caught by the next, like water through a series of increasingly fine strainers. Some measures target the models themselves; others aim to strengthen defenses in the real world—for example, making it harder to acquire the materials needed to build a biological weapon even if AI has made them easier to design. On the corporate side, 12 companies voluntarily published or updated Frontier Safety Frameworks in 2025, documents that describe how they plan to manage risks as they build more capable models—though they vary in the risks they cover, the report notes.

Despite the findings, Bengio says the report has left him with a sense of optimism. When the first report was commissioned in late 2023, the debate over AI risk was driven by opinion and theory. Now, he says, “we’re starting to have a much more mature discussion.”

The post U.S. Withholds Support From Major International AI Safety Report appeared first on TIME.

SpaceX combines with xAI at a $1.25-trillion valuation
News

SpaceX combines with xAI at a $1.25-trillion valuation

by Los Angeles Times
February 3, 2026

Elon Musk’s SpaceX acquired xAI in a deal that encompasses the billionaire’s increasingly costly ambitions to dominate artificial intelligence and ...

Read more
News

Trump and India Call Off Their Trade War, but the Terms of Peace Are Murky

February 3, 2026
News

Trump says Powell probe should continue: ‘Take it to the end and see’

February 3, 2026
News

Don Lemon makes post-arrest appearance on ‘Jimmy Kimmel Live!’: ‘They want to instill fear’

February 3, 2026
News

DHS slams Biden-appointed judge for blocking Trump admin’s efforts to end deportation protections for Haitian migrants

February 3, 2026
The Long Game Behind Xi Jinping’s P.L.A. Purge

The Long Game Behind Xi Jinping’s P.L.A. Purge

February 3, 2026
Researchers hacked Moltbook’s database in under 3 minutes and accessed thousands of emails and private DMs

Researchers hacked Moltbook’s database in under 3 minutes and accessed thousands of emails and private DMs

February 3, 2026
The first D.C. homicide in February comes on the month’s second day

The first D.C. homicide in February comes on the month’s second day

February 3, 2026

DNYUZ © 2026

No Result
View All Result

DNYUZ © 2026