The U.K. Lacks the Ability to Respond to AI Disasters, New Report Warns

September 30, 2025

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

Subscribe to In the Loop

What to Know: Preparing for an AI Emergency

A major AI-enabled disaster is becoming increasingly likely as AI capabilities advance. But a new report from a London-based think tank warns that the British government lacks the emergency powers necessary to respond to such disasters, like the disruption of critical infrastructure or a terrorist attack. The U.K. must give its officials new powers, including the ability to compel tech companies to share information and to restrict public access to their AI models in an emergency, argues the report from the Centre for Long-Term Resilience (CLTR), which was shared exclusively with TIME ahead of its publication on Tuesday. It’s a model for AI legislation that could catch on not just in Britain, but in other parts of the world with limited jurisdiction over AI companies.

A lack of levers — “Relying on 20- or 50-year-old legislation that was never intended for this kind of technology is not necessarily going to be the best approach,” says Tommy Shaffer Shane, the CLTR’s director of AI policy and the author of the report. “What we might find is that if something does go really wrong, the government is going to be scrambling to find levers. They might be pulling them and finding that they’re not really attached to anything — and that the kind of action they need, perhaps within hours, is just not happening.”

The proposals — The report, which was timed to coincide with the governing Labour Party’s annual conference this week, includes 34 proposals that the CLTR hopes will be included in the government’s long-delayed AI bill. As well as giving the government the power to compel tech companies to share information and revoke access to models, the proposals include requiring AI companies to report serious AI security incidents to the government, and having officials conduct regular preparedness exercises.

A new approach to AI regulation — If the British government adopts these proposals, it would signal a different approach to AI regulation than, for example, the European Union, whose lengthy AI Act is focused on regulating individual AI models. That E.U. law has attracted scorn from Silicon Valley and Washington, with influential figures arguing it has stifled innovation in the European tech industry, and placed onerous burdens on American AI companies. Under the second Trump Administration, this type of regulation is increasingly seen as synonymous with being hostile to U.S. economic interests.

Threading the needle — So how can Britain regulate AI while retaining access to the economic growth it promises, and staying in the U.S.’s good books? The CLTR’s answer: don’t regulate the models themselves; instead get ready for their downstream consequences. “What we’re talking about with emergency preparedness is accepting that you’re not going to have those types of interventions, [and] that you’re going to have more dangerous models more widely deployed than you would ideally want,” says Shaffer Shane. “And so the question is, how do you prepare for that scenario?”

If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.

Who to Know: Rodney Brooks, iRobot cofounder

Over the weekend, a big name in the robotics world—Rodney Brooks, cofounder of the company that brought you the Roomba—published a scathing essay. The billions of dollars currently being poured into humanoid robots by the likes of Figure AI and Tesla, he argued, will not succeed in creating a safe, dexterous, and therefore useful humanoid.

The reason, Brooks says, is a limitation in how these robots are being trained. Figure and Tesla are collecting video data of humans performing actions, and feeding that data into a neural network. This approach is flawed, he argues, because it doesn’t collect data about touch—a type of kinetic feedback that he says is essential for a robot to learn how to be dexterous.
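To make that distinction concrete, here is a minimal sketch, in PyTorch, of a behavior-cloning policy trained from video alone versus one that also consumes a touch channel. The layer sizes, the 7-DoF action output, and the 12-dimensional force reading are illustrative assumptions, not Figure’s or Tesla’s actual architecture; the point is simply that a video-scraped dataset leaves the second model’s extra input with nothing to learn from.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, use_touch: bool):
        super().__init__()
        self.use_touch = use_touch
        # Toy encoder for a 64x64 RGB frame -- the kind of signal
        # human-demonstration video provides.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
        )
        # Hypothetical 12-dimensional fingertip force reading -- the
        # channel Brooks argues video-only datasets never capture.
        self.touch = nn.Linear(12, 32) if use_touch else None
        # Illustrative 7-DoF arm command as the output.
        self.head = nn.Linear(128 + (32 if use_touch else 0), 7)

    def forward(self, frame, touch=None):
        z = self.vision(frame)
        if self.use_touch:
            z = torch.cat([z, torch.relu(self.touch(touch))], dim=-1)
        return self.head(z)

# Trainable from human video alone:
vision_only = Policy(use_touch=False)
action = vision_only(torch.randn(1, 3, 64, 64))

# Needs per-timestep force data that video scraping cannot supply:
with_touch = Policy(use_touch=True)
action = with_touch(torch.randn(1, 3, 64, 64), touch=torch.randn(1, 12))
```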

More money than ever before is being spent on robotics, by a suite of companies racing to be the first to conquer what some believe is a market worth many trillions of dollars. If these companies are right that new robotics capabilities can emerge simply by scaling up video data, much as large language models improved by scaling up text, then the effects on the labor market and the economy will be huge. But if they’re scaling the wrong type of data, Brooks writes, “a lot of money will have disappeared.”

AI in Action

A 60-year-old man solicited advice from ChatGPT about what to substitute for table salt in order to improve his diet, according to a study published in a peer-reviewed journal last month. ChatGPT suggested he swap it out for sodium bromide. Over the next three months, he began experiencing fatigue, red spots on his skin, and difficulty walking. He was eventually diagnosed with bromism—a syndrome that can result in psychosis, hallucinations, and even a coma. “This case … highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes,” the paper reads.

In a statement to TIME, OpenAI said that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. The company also said it has trained its AI systems to encourage people to seek professional guidance.

As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: [email protected]

What We’re Reading

OpenAI, NVIDIA, and Oracle: Breaking Down $100B Bets on AGI, by Peter Wildeford on Substack

Top forecaster Peter Wildeford dissects the circular deals being struck by the likes of OpenAI, Oracle, and Nvidia to fund datacenter construction—and observes that they essentially turn the entire S&P 500 into a leveraged bet on AGI arriving in the next few years, with catastrophic consequences if that turns out not to be the case. He writes:

“The reason we should be somewhat concerned — or at least curious — about this infinite money glitch is twofold. Firstly, AGI might lead to the serious destruction of everything we value and love, if not the extinction of the entire human race. Secondly, and much more mundane by comparison, because NVIDIA currently represents approximately 7% of the S&P 500’s total market capitalization. Add in Microsoft, Google, Meta, Amazon, and other companies whose valuations assume continued AI progress, and you’re looking at perhaps 25-30% of total market value predicated on AI transformation happening roughly on schedule.

“In other words, AGI happening soon may mean the end of humanity, but at least the S&P 500 will remain strong. On the other hand, if the AI scaling hypothesis hits unexpected walls, the unwinding could be a second ‘dot com bust’ or worse. When everyone is both buyer and seller in circular deals, you’ve created massive correlation risk. If OpenAI can’t pay Oracle, Oracle can’t pay NVIDIA, NVIDIA’s stock crashes, and suddenly 25% of the S&P 500 is in freefall.”
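To see how the quoted numbers compose, here is a back-of-the-envelope sketch; only NVIDIA’s roughly 7% weight comes from the quote above, and the remaining weights are placeholder assumptions for illustration, not current market data.

```python
# Illustrative index weights; NVIDIA's ~7% is from the quote, the rest
# are placeholder assumptions chosen only to make the arithmetic concrete.
weights = {
    "NVIDIA": 0.07,
    "Microsoft": 0.06,
    "Google": 0.04,
    "Amazon": 0.04,
    "Meta": 0.03,
    "other AI-linked names": 0.03,
}
ai_share = sum(weights.values())
print(f"Share of index value predicated on AI: {ai_share:.0%}")
# -> 27%, inside the 25-30% band Wildeford describes
```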

