
Can AI developers avoid Frankenstein’s fateful mistake?

November 15, 2025

Audiences already know the story of Frankenstein. The gothic novel — adapted dozens of times, most recently in director Guillermo del Toro’s haunting revival now available on Netflix — is embedded in our cultural DNA as the cautionary tale of science gone wrong. But popular culture misreads author Mary Shelley’s warning. The lesson isn’t “don’t create dangerous things.” It’s “don’t walk away from what you create.”

This distinction matters: The fork in the road comes after creation, not before. All powerful technologies can become destructive — the choice between outcomes lies in stewardship or abdication. Victor Frankenstein’s sin wasn’t simply bringing life to a grotesque creature. It was refusing to raise it, insisting that the consequences were someone else’s problem. Every generation produces its Victors. Ours work in artificial intelligence.

Recently, a California appeals court fined an attorney $10,000 after 21 of 23 case citations in their brief proved to be AI fabrications — nonexistent precedents. Hundreds of similar instances have been documented nationwide, growing from a few cases a month to a few cases a day. This summer, a Georgia appeals court vacated a divorce ruling after discovering that 11 of 15 citations were AI fabrications. How many more went undetected, ready to corrupt the legal record?

The problem runs deeper than irresponsible deployment. For decades, computer systems were provably correct — a pocket calculator gives the mathematically correct answer every time. Engineers could demonstrate how an algorithm would behave. Failures meant implementation errors, not uncertainty about the system itself.

Modern AI changes that paradigm. A recent study reported in Science confirms what AI experts have long known: plausible falsehoods — what the industry calls “hallucinations” — are inevitable in these systems. They’re trained to predict what sounds plausible, not to verify what’s true. When confident answers aren’t justified, the systems guess anyway. Their training rewards confidence over uncertainty. As one AI researcher quoted in the report put it, fixing this would “kill the product.”

This creates a fundamental veracity problem. These systems work by extracting patterns from vast training datasets — patterns so numerous and interconnected that even their designers cannot reliably predict what they’ll produce. We can only observe how they actually behave in practice, sometimes not until well after damage is done.

This unpredictability creates cascading consequences. These failures don’t disappear; they become permanent. Every legal fabrication that slips in undetected enters databases as precedent. Fake medical advice spreads across health sites. AI-generated “news” circulates through social media. This synthetic content is even scraped back into training data for future models. Today’s hallucinations become tomorrow’s facts.

So how do we address this without stifling innovation? We already have a model in pharmaceuticals. Drug companies cannot be certain of all biological effects in advance, so they test extensively, with most drugs failing before reaching patients. Even approved drugs face unexpected real-world problems. That’s why continuous monitoring remains essential. AI needs a similar framework.

Responsible stewardship — the opposite of Victor Frankenstein’s abandonment — requires three interconnected pillars. First: prescribed training standards. Drug manufacturers must control ingredients, document production practices and conduct quality testing. AI companies should face parallel requirements: documented provenance for training data, contamination monitoring to keep problematic synthetic content from being reused, prohibited content categories and bias testing across demographics. Pharmaceutical regulators demand this kind of transparency; AI companies currently need to disclose almost nothing.

Second: pre-deployment testing. Drugs undergo extensive trials before reaching patients. Randomized controlled trials were a major achievement, developed to demonstrate safety and efficacy. Most candidate drugs fail them. That’s the point: testing catches subtle dangers before deployment. AI systems for high-stakes applications, including legal research, medical advice and financial management, need structured testing to document error rates and establish safety thresholds.

Third: continuous surveillance after deployment. Drug companies are obligated to track adverse events involving their products and report them to regulators. In turn, regulators can mandate warnings, restrictions or withdrawal when problems emerge. AI needs equivalent oversight.

Why does this need regulation rather than voluntary compliance? Because AI systems are fundamentally different from traditional tools. A hammer doesn’t pretend to be a carpenter. AI systems do, projecting authority through confident prose, whether retrieving or fabricating facts. Without regulatory requirements, companies optimizing for engagement will necessarily sacrifice accuracy for market share.

The trick is regulating without crushing innovation. The EU’s AI Act shows how hard that is. Under the Act, companies building high-risk AI systems must document how their systems work, assess risks and monitor them closely. A small startup might spend more on lawyers and paperwork than on building the actual product. Big companies with legal teams can handle this. Small teams can’t.

Pharmaceutical regulation shows the same pattern. Post-market surveillance prevented tens of thousands of deaths when the FDA discovered that Vioxx — an arthritis medication prescribed to more than 80 million patients worldwide — doubled the risk of heart attacks. Still, billion-dollar regulatory costs mean only large companies can compete, and beneficial treatments for rare diseases, perhaps best tackled by small biotechs, go undeveloped.

Graduated oversight addresses this problem, scaling requirements and costs with demonstrated harm. An AI assistant with low error rates faces only routine monitoring. Higher error rates trigger mandatory fixes. Persistent problems? Pull the system from the market until it’s fixed. Companies either improve their systems to stay in business, or they exit. Innovation continues, but with real accountability.

Responsible stewardship cannot be voluntary. Once you create something powerful, you’re responsible for it. The question isn’t whether to build advanced AI systems — we’re already building them. The question is whether we’ll require the careful stewardship those systems demand.

The pharmaceutical framework — prescribed training standards, structured testing, continuous surveillance — offers a proven model for critical technologies we cannot fully predict. Shelley’s lesson was never about the creation itself. It was about what happens when creators walk away. Two centuries later, as del Toro’s adaptation reaches millions this month, the lesson remains urgent. This time, with synthetic intelligence rapidly spreading through our society, we might not get another chance to choose the other path.

Dov Greenbaum is professor of law and director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at Reichman University in Israel.

Mark Gerstein is the Albert L. Williams Professor of Biomedical Informatics at Yale University.

The post Can AI developers avoid Frankenstein’s fateful mistake? appeared first on Los Angeles Times.
