DNYUZ
AI could democratize medicine, but better regulation comes first

April 25, 2026

Last month, a group of researchers were able to manipulate an AI-powered drug prescription service into tripling an opioid dose and labeling methamphetamine as safe. Days later, New York lawmakers introduced sweeping legislation that analogizes clinical AI to a doctor practicing medicine without a license — rendering it potentially illegal for AI to provide even basic medical guidance. California has staked out a middle ground, enacting legislation early this year that mandates disclosure to patients when AI is involved.

While states continue to send conflicting signals about how best to regulate AI in medicine, millions of Americans aren’t waiting for consensus. Data shows that one in three Americans are now turning to AI chatbots to diagnose symptoms and direct care, a figure that doubled in just a single year. In short, AI is already practicing medicine.

I’ve worked as an emergency medicine physician in academic medical centers, a safety-net hospital and a community ER. What defines my experience, across every institution, is the staggering weight of unmet medical need: patients who run out of an essential medication and can’t get refills. A diabetic who hasn’t seen his endocrinologist for months because appointments are scarce. A UTI that progresses to a kidney infection without prompt treatment. Every day, our system transforms manageable conditions into major crises and turns the ER into a substitute for all the care Americans cannot access. The human cost is immense.

Artificial intelligence can change this reality, and the possibilities are neither radical nor experimental. Women should be able to refill birth control without scheduling an appointment. Patients with cold sores or yeast infections shouldn’t have to wait days for a callback; in many parts of the world, that care is accessible without a prescription. AI can bring equivalent access to American patients, with appropriate safety standards built in.

Indeed, the most ambitious model of this vision is further along than most people realize: The federal government is currently soliciting proposals from the private sector to develop AI that will independently manage heart failure events, a disease for which only 1% of patients receive the recommended medication regimen and five-year mortality rates now exceed 50%.

AI’s potential to radically expand access to medicine is a good thing, maybe even a revolutionary one. Most Americans are not choosing between AI and their trusted family doctor. Barriers like cost and doctor shortages mean that Americans are choosing between AI and nothing. Those patients deserve better, and AI is the first development in decades that promises tangible help at scale.

That is why, alongside my clinical practice and research, I recently joined a company using AI to democratize access to medicine. I did not make that decision lightly. There is legitimate cause to be wary about a technology as powerful as AI reaching vulnerable patients without appropriate safeguards. But the answer is not the approach New York is considering. Neither physicians nor policymakers can afford to sit on the sidelines while patients fill the many gaps in our healthcare system with AI. We need regulation that is serious, enforceable and built for the speed at which this technology is progressing.

The federal government has already begun influencing this rapidly changing field. In January, the Food and Drug Administration updated its software guidance to allow AI tools to operate with less oversight when assisting doctors. Under the new rubric, software that enables a physician to independently review the basis for an AI recommendation falls outside the agency’s regulation of medical devices. A textbook example would be software that can warn a doctor about dangerous drug interactions before she signs a prescription.

But this carve-out covers AI only with a doctor in the loop. There’s no comparable exemption for AI that talks directly to patients without a doctor in the room, or that makes recommendations in time-critical situations. That technology presumably remains subject to full FDA oversight, though the government has not yet weighed in. Building federal guardrails around fast-moving technology is genuinely difficult, and the FDA’s caution is understandable. But the result is counterintuitive: the clinical AI operating most autonomously is the least regulated.

Into this vacuum, states have moved quickly and in different directions. Some, including Utah, Arizona and Texas, are building frameworks to accelerate deployment. Others, including New York and California, are moving to curtail AI in medicine. In many respects, this is the laboratories of democracy model working as intended, allowing federal policy to find its footing through state-level experimentation and evidence collection. But 50 competing standards cannot be the answer for a technology this consequential. Patients deserve basic protections when they use clinical AI no matter where they live, and companies building these tools need to be held to uniform standards that prioritize patient safety.

The framework we need is an extension of what the FDA already knows how to do: require independent, third-party evidence of safety and effectiveness before a clinical AI system deploys; mandate adversarial security testing as part of the approval process; and impose a uniform federal standard, with room for states to go further but not fall below it. Finally, when AI harms a patient, there must be a clear path to accountability. Medical malpractice has governed physician liability for decades. It can be adapted here.

Many assume that regulation slows down transformative technology, but history suggests otherwise. Federal deposit insurance made people trust banks enough to use them. Federal safety standards made commercial aviation the safest form of mass transportation.

Clinical AI needs the same foundation, and there is urgency to act now — it is already in patients’ hands, moving faster than any technology we have tried to govern. The patients with the most to gain are the same ones with the most to lose if we don’t get it right.

Hashem Zikry is an assistant professor at UCLA and medical director for research and policy at Counsel Health.

The post AI could democratize medicine, but better regulation comes first appeared first on Los Angeles Times.

DNYUZ © 2026
