For better or worse, many people are asking artificial intelligence chatbots for health information and advice. According to a June 2024 poll from the health research group KFF, about one in six adults do so regularly, and experts say that share has grown since.
Recent studies have shown that ChatGPT can pass medical licensing exams and solve clinical cases more accurately than humans can. But A.I. chatbots are also notorious for making things up, and their faulty medical advice appears to have caused real harm in some cases.
These risks don’t mean you should stop using chatbots, said Dr. Ainsley MacLean, the former chief A.I. officer of the Mid-Atlantic Permanente Medical Group, but they do underscore the need for caution and critical thinking.
Chatbots are great at creating a list of questions to ask your doctor, simplifying jargon in medical records and walking you through your diagnosis or treatment plan, Dr. MacLean said. But no chatbot is ready to replace your physician, so be a little more cautious when asking A.I. for potential diagnoses or medical advice.
We asked experts how best to use chatbots for your health care questions.
Practice when the stakes are low.
In general, people are used to seeking medical advice from Google and understand that the tenth page of results isn’t as good as the first, said Dr. Raina Merchant, executive director of the Center for Health Care Transformation and Innovation at Penn Medicine.
But most people don’t have as much experience using A.I. chatbots. There’s a learning curve, experts said, and you need to practice framing questions and scrutinizing the responses to get the best results.
So don’t wait until you have a major health concern to start experimenting with A.I., said Dr. Robert Pearl, the author of “ChatGPT, MD: How A.I.-Empowered Patients & Doctors Can Take Back Control of American Medicine.”
Think back to your last medical visit, Dr. Pearl suggested, and recall a few questions that your doctor answered well. Then pose them to the chatbot, testing different prompts and comparing its answers with your doctor’s. This exercise can give you a sense of the chatbot’s strengths and limitations, he said.
Also, watch out for chatbots’ tendency to be sycophantic and relentlessly validating. A leading question — such as “Don’t you think I should get an M.R.I.?” — could prompt a chatbot to agree with you, rather than provide accurate answers. But you can guard against this by asking balanced, open-ended questions.
Dr. Pearl even recommended removing yourself from the question — “What would you tell a patient who has a bad cough?” — to sidestep this people-pleasing reflex. You could also ask directly, “What would you say that the patient might not want to hear?”
Share context — within reason.
Chatbots don’t know anything about you, except what you tell them, said Dr. Michael Turken, an internal medicine physician at University of California San Francisco Health. So, when asking medical questions, give chatbots as much context as you’re comfortable sharing to increase the chance of getting a more personalized answer, he said.
Let’s say you ask a chatbot about recent hip pain. There are, of course, dozens of potential causes. “But as soon as you give the chatbot your age, your prior medical history, associated diseases, medications, your job, now it can start to come up with a very specific, personalized diagnosis,” Dr. Pearl said — one you can then ask your doctor about.
Still, there are serious privacy concerns when it comes to A.I., said Dr. Ravi Parikh, director of the Human-Algorithm Collaboration Lab at Emory University. Most popular chatbots are not bound by the Health Insurance Portability and Accountability Act, or HIPAA, and it’s not clear who might have access to your conversation history, he added. So avoid sharing identifying details or uploading your full medical records, which can contain your address, Social Security number and other sensitive data.
If you’re worried about privacy, many chatbots have an anonymous or incognito mode, in which conversations aren’t used to train the model and are deleted after a short period. There are also several HIPAA-compliant medical chatbots available online, Dr. Turken said, like My Doctor Friend, Counsel Health and Doctronic.
Check in during long chats.
A.I. chatbots can sometimes forget or confuse critical details, particularly with free versions or during a long conversation, Dr. Parikh said. So ideally, use the paid, more advanced models for medical questions, since they tend to have longer memory, a better “reasoning” process and more up-to-date data, he added.
It can also be helpful to periodically start fresh chats, but many patients find it frustrating to re-enter their medical information and get the model up to speed again. In that case, Dr. Merchant recommended asking the chatbot to “summarize what you know about my medical history” at regular intervals. Such check-ins can help correct misunderstandings and make sure the chatbot stays on track.
Invite more questions.
In general, A.I. chatbots are far better at offering answers than asking questions, so they tend to skip the important follow-ups a physician would ask, Dr. Turken said — like whether you have any underlying conditions or are taking any medications. This is especially problematic when you’re asking about potential diagnoses or medical advice.
To compensate, Dr. Turken recommended prompting the chatbot with a line like: “Ask me any additional questions you need to reason safely.”
Expect a burst of questions, and try to address each one carefully. If you miss or skip a question, the chatbot probably won’t ask it again, Dr. Turken said.
Given specific details and your follow-up answers, chatbots are quite good at offering patients a “differential diagnosis,” or a ranked list of possible conditions that could explain your symptoms, Dr. Turken said. Just remember: while they might uncover a diagnosis your doctor missed, these chatbots may also give you some alarming, worst-case-scenario options.
“ChatGPT doesn’t have the actual real experience of having seen hundreds of patients and knowing the probability of each condition,” Dr. MacLean said.
Pit your chatbot against itself.
Every piece of health advice online or from a friend comes with a certain perspective baked in, and A.I. chatbots are no different, Dr. Pearl said. The problem is that even when A.I. is making elementary mistakes, it still exudes confidence and appears all-knowing.
So, be skeptical and ask chatbots for sources — and then confirm those sources actually exist, Dr. MacLean said. You should also ask difficult follow-up questions and make chatbots explain their reasoning. “Be really engaged,” she added. “No one cares more about your health than you.”
To push chatbots even further, prompt them to take different points of view. At first, Dr. MacLean said, you might tell the chatbot, “You’re a careful, experienced primary care doctor.” But later on, ask it to take on the perspective of a specialist, which tends to steer the model toward deeper, domain-specific knowledge.
You can also push chatbots to think more carefully by asking them to critique their first answer and then to reconcile both responses, Dr. Turken said. But don’t simply take A.I. chatbots at their word — always double-check their information with reputable health resources and, of course, your doctor.
Experts say there aren’t strict red lines around what you can safely ask a chatbot; it’s what you do with that information that matters most. The key is to treat A.I. as an educational resource rather than as a decision maker.
Simar Bajaj covers health and wellness.