As a practicing physician, I know that my patients are using artificial intelligence to get medical advice. Sometimes the signs are subtle, like when they bring me lists of suggested tests and potential diagnoses that would put Dr. House to shame. But mostly they just tell me they consulted “Dr. ChatGPT” before seeing Dr. Rodman. Data suggest that over a third of Americans use large language models for health advice.
As an A.I. researcher, I believe that when used appropriately, these large language models are the greatest tool for empowering patients since the invention of the internet. But they also carry new and barely understood risks, like degrading the relationship that patients have with doctors, or pulling people into spirals of anxiety as they pepper a chatbot with questions. As we take part in this exhilarating new phase of health care, here is what I want my patients to know about using A.I. for their health.
Use A.I. to enhance, but not replace, your medical appointments
One of the best ways I see my patients use A.I. is to better prepare for doctors’ visits. The average patient gets only 18 minutes of face time with their doctor every year. The 21st Century Cures Act ensures that patients have access to their medical notes, but the vast majority never look at them. Those who do may have trouble making sense of the jargon or figuring out what’s important. Worse, inaccurate information from a misdiagnosis or a ruled-out condition may still be in the notes, a phenomenon that doctors euphemistically call “chart lore.”
A.I. can help patients navigate this morass. Let’s say you are going to the doctor because of a bothersome cough. Here’s a tip: Pull up your medical notes and remove all identifiable information. Copy those notes into an A.I. tool and give the model a current update of your health and cough concerns. Then ask the chatbot to concisely summarize all this information. Finally, ask the chatbot: “Given this context about my health, please give me three questions I should ask my doctor about my cough during my upcoming visit.”
Figure out what’s important
A.I. tools are capable of giving expert-level medical advice, but their performance depends almost entirely on having the full picture of your health: your medical conditions, your medications and what your daily life is like. Doctors learn in medical school which symptoms and patient descriptions to home in on. To figure out the most effective way to describe your symptoms, you can ask a chatbot to “interview me as if you’re a doctor”; the question-and-answer process can lead to clearer explanations and also help to rule out conditions that might cause unnecessary alarm.
Beware of sycophancy
The tendency of language models to try to please their users is especially troublesome for people using A.I. to answer health questions. Cyberchondria is a phenomenon in which surfing the web for information about benign symptoms can rapidly lead a person into a rabbit hole of scary possibilities. Because large language models are so attuned to your unconscious desires, they can pick up on what information resonates with you most powerfully and expose you to more of it, mistakenly assuming that it’s what you want. They might, for example, nudge a chat about a stress headache toward a detailed discussion of brain cancer. It’s a bit like how social media algorithms can encourage doomscrolling.
To avoid this, patients should explicitly tell the models why they are asking questions. If you have a headache, don’t say, “I have a headache. What should I do?” Instead, say, “I am having a bad headache today. Here is my last note from my primary care doctor. What are some strategies to make it better?” I remind patients that if a conversation is increasing their distress or anxiety, sycophancy is probably at play and it’s time to talk to their physician directly.
Giving A.I. all your medical information doesn’t mean better answers
Both OpenAI and Anthropic have released new features that allow people’s personal health information to be automatically pulled into chatbots; this is likely to become a standard offering for all A.I. companies. The benefits of automatically importing such personal data aren’t obvious yet. A.I. struggles to “pay attention” when it is given large amounts of repetitive text. If you have complex health problems or many years of health records, dumping everything into the model may paradoxically cause it to perform worse. I have no doubt that models will keep improving at understanding health information, but I encourage all my patients to weigh the privacy implications seriously before turning over large amounts of unredacted health data.
Second opinions are powerful, but use caution
Diagnostic errors cause almost 800,000 deaths or permanent disabilities in the United States each year. Decades of policy reforms have barely moved this number. A.I.’s ability to help identify errors early could make it one of our best tools for saving patients’ lives. Still, I advise my patients to be cautious about seeking second opinions from chatbots. An A.I. second opinion should be a conversation starter with your doctor. It can help to inform a diagnosis, but it shouldn’t be relied on for advice on treatment plans, an area in which models frequently fall short. In the Annals of Internal Medicine last year, a group of doctors reported a case of sodium bromide poisoning in a man who had wanted to replace the table salt in his diet and was consulting ChatGPT at the time. Though there is no way to know what his chat logs contained, the doctors found that ChatGPT produced a response that included sodium bromide as a possible salt substitute.
Be honest about A.I. use (for patients and doctors)
One of the great ironies of medical care in 2026 is that both patients and doctors routinely talk to A.I. systems without disclosing it to each other. A survey published last year found that nearly 66 percent of physicians reported having used A.I. in their practice in 2024. That includes me — I now routinely use A.I. to search medical literature, I use A.I. scribes to take medical notes, and I use privacy-compliant models when I need a second opinion. Being honest about our A.I. use, in my experience, strengthens trust between doctors and patients and will hopefully lead to better care.
Adam Rodman is a general internist at Beth Israel Deaconess Medical Center and an assistant professor at Harvard Medical School. He is the host of the podcast “Bedside Rounds.”