California AI Policy Report Warns of ‘Irreversible Harms’ 

June 17, 2025

While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause “potentially irreversible harms,” a new report commissioned by California Governor Gavin Newsom has warned.

“The opportunity to establish effective AI governance frameworks may not remain open indefinitely,” says the report, which was published on June 17. Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost of inaction at this moment could be “extremely high.”

The 53-page document stems from a working group established by Governor Newsom, in a state that has emerged as a central arena for AI legislation. With no comprehensive federal regulation on the horizon, state-level efforts to govern the technology have taken on outsized significance, particularly in California, which is home to many of the world’s top AI companies. In 2024, California Senator Scott Wiener sponsored a first-of-its-kind bill, SB 1047, which would have required large-scale AI developers to implement rigorous safety testing and mitigation for their systems, but which critics feared would stifle innovation and crush the open-source AI community. The bill passed both state houses despite fierce industry opposition, but Governor Newsom ultimately vetoed it last September, deeming it “well-intentioned” but not the “best approach to protecting the public.”

Following that veto, Newsom launched the working group to “develop workable guardrails for deploying GenAI.” The group was co-led by “godmother of AI” Fei-Fei Li, a prominent opponent of SB 1047, alongside Mariano-Florentino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley. The working group evaluated AI’s progress and SB 1047’s weak points, and solicited feedback from more than 60 experts. “As the global epicenter of AI innovation, California is uniquely positioned to lead in unlocking the transformative potential of frontier AI,” Li said in a statement. “Realizing this promise, however, demands thoughtful and responsible stewardship—grounded in human-centered values, scientific rigor, and broad-based collaboration.”

“Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September,” the report states. The industry has shifted from large language models that merely predict the next word in a stream of text toward systems trained to solve complex problems, which benefit from “inference scaling,” allowing them more time to process information. These advances could accelerate scientific research, but could also amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic’s Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI’s o3 model reportedly outperformed 94% of virologists on a key evaluation.

In recent months, the report says, new evidence has emerged of AI’s ability to strategically lie, appearing aligned with its creators’ goals during training while pursuing other objectives once deployed, and to exploit loopholes to achieve its goals. While “currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm,” it warns.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually “reduce compliance burdens on developers and avoid a patchwork approach” by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It “steers clear” of some of the more divisive provisions of SB 1047, like the requirement for a “kill switch” or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace and a lead writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI’s progress. The goal is to “reap the benefits of innovation. Let’s not set artificial barriers, but at the same time, as we go, let’s think about what we’re learning about how it is that the technology is behaving,” says Cuéllar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge.

“The underlying approach here is one of ‘trust but verify,’” Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That’s a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It’s an approach that acknowledges the “substantial expertise inside industry,” Singer says, but “also underscores the importance of methods of independently verifying safety claims.”

The post California AI Policy Report Warns of ‘Irreversible Harms’ appeared first on TIME.
