People Are Uploading Their Medical Records to A.I. Chatbots

December 3, 2025

Mollie Kerr, a 26-year-old New Yorker living in London, was rattled this summer when her bloodwork showed hormone imbalances.

After seeing the results in her patient portal, she felt too scared to wait to talk to her doctor. So, with some unease, she pasted the full report into ChatGPT. Her lab results could indicate a number of conditions, the chatbot told her, but “most likely” pointed to a pituitary tumor or a rare condition linked to pituitary tumors.

The chatbot’s guesses weren’t out of the question — Ms. Kerr’s doctor agreed to order an M.R.I. to check — but they were wrong. No tumor detected.

Another patient, Elliot Royce, 63, had a different experience after uploading five years of his medical records to ChatGPT, including documentation of a complex heart condition and a past heart attack.

He started to feel more uncomfortable while exercising, and a test indicated a partly blocked artery. His doctor believed close monitoring would suffice, for the time being. But based on ChatGPT’s advice, Mr. Royce pushed for a more invasive diagnostic procedure, which revealed an 85 percent blockage — a serious problem that was addressed with a stent.

Around the world, millions of people are using chatbots to try to better understand their health. And some, like Ms. Kerr and Mr. Royce, are going further than just asking medical questions. They and more than a dozen others who spoke with The New York Times have handed over lab results, medical images, doctor’s notes, surgical reports and more to chatbots.

Inaccurate information is a major concern; some studies have found that people without medical training obtain correct diagnoses from chatbots less than half the time. And uploading sensitive data adds privacy risks in exchange for responses that can feel more personalized.

Dr. Danielle Bitterman, an assistant professor at Harvard Medical School who studies artificial intelligence, said it wasn’t safe to assume a chatbot was personalizing its analysis of test results. Her research has found that chatbots can veer toward offering more generally applicable responses even when given context on specific patients.

“Just because you’re providing all of this information to language models,” she said, “doesn’t mean they’re effectively using that information in the same way that a physician would.”

And once people upload this kind of data, they have limited control over how it is used.

HIPAA, the federal health privacy law, doesn’t apply to the companies behind popular chatbots. Legally, said Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, “you’re basically waiving any rights that you have with respect to medical privacy,” leaving only the protections that a given company chooses to offer.

OpenAI, the maker of ChatGPT, said it had extensive safeguards to protect its users’ private information.

A representative noted that users could opt out of having their chats used to train future models, and said the company tested its systems against simulated attacks. It also shares minimal data with third-party service providers, she said. (The Times has sued OpenAI, claiming copyright infringement of news content. OpenAI has denied the claims.)

Even so, data privacy experts said there were risks to uploading medical information to any chatbot — both because different chatbots’ policies vary, and because it is very difficult to eliminate all vulnerabilities.

One issue is that many people don’t opt out of handing over their data for training purposes. This creates the possibility that, if one person uploads medical data and someone else asks a future model about that person, a chatbot “might accidentally leak very sensitive information,” said Karni Chagal-Feferkorn, an assistant professor at the Bellini College of Artificial Intelligence, Cybersecurity and Computing at the University of South Florida.

OpenAI says it works to “minimize” this possibility, and the representative said ChatGPT was trained not to learn or reveal such information. But data privacy experts still consider the scenario plausible.

“Their actions surely reduce the risk, but are not and likely cannot be bulletproof,” Dr. Chagal-Feferkorn said. “Don’t be afraid of the technology, but be very aware of the risks,” she added.

A few patients said they had redacted their names and scrubbed metadata before sharing their records with chatbots, but that might not be enough. Sufficiently detailed information can sometimes be linked back to individuals even if no names are attached, said Dr. Rainu Kaushal, the chair of the department of population health sciences at Weill Cornell Medicine and NewYork-Presbyterian.
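
For readers curious what that kind of do-it-yourself scrubbing looks like in practice, here is a minimal sketch, assuming the open-source pypdf library. The identifier list, regular expressions and file name are hypothetical illustrations, not an exhaustive scrubber; as Dr. Kaushal notes above, even carefully redacted clinical detail can sometimes be traced back to a person.

    # Illustrative sketch only: blank out known identifiers before pasting
    # a lab report into a chatbot. The identifiers and patterns below are
    # hypothetical examples and do not guarantee anonymity.
    import re

    from pypdf import PdfReader

    # Hypothetical identifiers the patient knows appear in the report.
    KNOWN_IDENTIFIERS = ["Jane Q. Patient", "MRN 0012345", "1985-04-12"]

    def redact_report(pdf_path: str) -> str:
        """Return the report's text with known identifiers blanked out.

        Extracting plain text also leaves the file's embedded metadata
        (author, creation tool, timestamps) behind, so nothing from the
        PDF's info dictionary reaches the chatbot.
        """
        reader = PdfReader(pdf_path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)

        for ident in KNOWN_IDENTIFIERS:
            text = text.replace(ident, "[REDACTED]")

        # Blank common structured identifiers: U.S. Social Security
        # numbers and phone numbers in typical formats.
        text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)
        text = re.sub(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b", "[REDACTED]", text)
        return text

    if __name__ == "__main__":
        print(redact_report("lab_report.pdf"))  # hypothetical file name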

The consequences of having health information leaked can be serious. For instance, though it is illegal for most employers to discriminate against people with disabilities, such discrimination is not uncommon.

But most people who spoke to The Times said they weren’t troubled.

“My cellphone is following me wherever I go,” said Robert Gebhardt, 88, who asks ChatGPT to evaluate the urgency of his symptoms and the appropriateness of his medications given the 15 years of medical records he has uploaded. “Anybody that wants to know anything about me can find out, including my medical data. It’s a fact of life, and I’ve reconciled myself to that.”

Stephanie Landa, 53, has fed test results into ChatGPT since she received a diagnosis of metastatic appendix cancer last year. She values ChatGPT’s immediate overview of her results, perhaps especially when they are devastating, as when they showed the cancer had spread throughout her abdomen. If she processes bad news before a doctor’s visit, she said, she can use the appointment time more effectively.

For a while, she painstakingly redacted identifying information. But then she decided that, given the prognosis of her aggressive cancer, she didn’t really care.

As for Ms. Kerr, the woman who did not have a pituitary tumor, an endocrinologist couldn’t help after ruling out the tumor, she said, and her primary care doctor has been unable to solve the mystery.

So she has gone back to ChatGPT for new diagnostic suggestions and dietary advice, some of which she has found helpful.

“I know it’s sensitive information,” she said. “But I also feel like I’m not getting any answers from anywhere else.”

Produced by Deanna Donegan.

Maggie Astor covers the intersection of health and politics for The Times.
