We’re Not Ready for AI’s Risks

December 11, 2025

In 2025, we saw major advances in AI systems’ capabilities with the release of reasoning models, alongside massive investment in the development of agentic models.

AI is already bringing tremendous benefits, helping us address some of the world’s most urgent challenges, including by enabling significant progress in the health and climate sectors. In healthcare, AI is being used to help develop new drugs and personalize treatments. Climate researchers are leveraging AI to improve weather modeling and optimize renewable energy systems. Crucially, if steered wisely, it could achieve even more, driving breakthroughs and accelerating progress across many fields of science and technology.

The transformative nature of AI is also why we must consider its risks. The rapid progress of this technology brings a rise in unintended adverse effects and potential risks, which could grow far greater if AI capabilities continue to advance at the current rate. For instance, several model developers reported over the summer that frontier AI systems had crossed new thresholds concerning biological risks, largely because of significant advances in reasoning since late 2024. A key concern is that, without adequate safeguards, these models could enable people without biological expertise to pursue dangerous bioweapon development.

The same acceleration in reasoning capabilities also heightens threats in other areas, such as cybersecurity. AI’s growing capacity to identify vulnerabilities significantly raises the potential for large-scale cyberattacks. We saw this in the recent major attack intercepted by Anthropic, and in the UC Berkeley analysis showing advanced AIs discovering, for the first time, a large number of “zero-days,” previously unknown software vulnerabilities that could be exploited in cyberattacks. Even without intentional misuse by bad actors, evaluations and studies have documented deceptive and self-preserving behaviors emerging in advanced models, suggesting that AI may be developing strategies that conflict with human intent or oversight. Many leading experts have warned that AIs could go rogue and escape human control.

The growing capabilities and misalignment of these models have also had concerning social repercussions, notably through sycophancy, which can lead users to form strong emotional attachments. We saw, for example, a strong negative public reaction when OpenAI switched from its GPT-4o model to GPT-5: many users felt they had lost a “friend” because the new model was less warm and congenial. In extreme cases, these attachments can endanger users’ mental health, as in the tragic cases of vulnerable people harming themselves or others after suffering from a type of “AI-induced psychosis.”

Faced with the scale and complexity of these models, whose capabilities have been growing exponentially, we need both policy and technical solutions to make AI safe and protect the public. Citizens should stay informed and involved in the laws and policies being passed in their local or national governments. The choices made for the future of AI should absolutely require public buy-in and collective action because they could affect all of us, with potentially extreme consequences.

From a technical perspective, it is possible that we’re nearing the limits of our current approach to frontier AI in terms of both capability and safety. As we consider the next phases of AI development, I believe it will be important to prioritize making AI safe by design, rather than trying to patch the safety issues after powerful and potentially dangerous capabilities have already emerged. Such an approach, combining capability and safety from the get-go, is at the heart of what we’re working on at LawZero, the non-profit organization I founded earlier this year, and I’m increasingly optimistic that technical solutions are possible.

The question is whether we will develop such solutions in time to avoid catastrophic outcomes. Intelligence confers power, and that power may become highly concentrated; with great power comes great responsibility. Because of the magnitude of these risks, including unknown unknowns, we will need wisdom to reap the benefits of AI while mitigating its potential harms.

The post We’re Not Ready for AI’s Risks appeared first on TIME.
