DNYUZ

AI didn’t replace me as a doctor. It made me better.

February 24, 2026

Ashish K. Jha, the former dean of the Brown University School of Public Health, was White House covid-19 response coordinator from 2022 to 2023.

As a researcher, public health professional and practicing physician, I’ve been watching the rise of artificial intelligence tools such as ChatGPT with both cautious optimism and healthy skepticism.

The public is rightly wary about this new technology in health care. Its misuse can have serious consequences for patients — for example, by inappropriately denying care, hallucinating incorrect information or overlooking pertinent patient information. Clear guardrails and direct patient contact with medical professionals are crucial.

Still, for time-pressed doctors, a tool that both confirms judgments and broadens diagnostic thinking can be invaluable. When used properly, it can help combat the tunnel vision that often takes hold in busy clinics and hospitals.

Much of the promise of AI in medicine is its ability to free doctors from paperwork, reduce mistakes and allow for more time with patients. All great benefits. But I wondered: Could it actually help me be a better doctor? I decided to find out.

During rounds at the hospital where I work, I ran a small experiment. After each patient encounter, I entered a brief, de-identified summary into ChatGPT that included age, symptoms, labs and treatment plan. No names. No personal information. Then I asked: What else should I be considering?

I expected ChatGPT to echo what I already knew about potential diagnoses and care options. Instead, it pushed me to think more broadly about what approaches to take. (The Post has a content partnership with OpenAI, the creator of ChatGPT.)

Take an elderly man with osteomyelitis, a serious bone infection in his foot. Bloodwork showed a staph infection, so our care team’s plan was to prescribe antibiotics and surgically remove the infected bone. Straightforward enough. But ChatGPT suggested an echocardiogram, an ultrasound of the patient’s heart. It cited recommendations from the Infectious Diseases Society of America outlining how sometimes staph infections can also infect the heart valves, a complication with serious consequences.

We ordered the test. Thankfully, the heart valves were clear. We may have ordered it eventually on our own, but on a busy day, it could easily have been missed or delayed. That prompt underscored the potential of AI to ensure critical interventions aren’t overlooked. Of course, AI can also hallucinate or suggest the wrong test, which is why judgment remains essential.

Another example involved a patient with a common drug rash. The hospital pharmacy didn’t carry the treatment I usually prescribe, only an alternative I had never used. In less than a minute, ChatGPT produced a side-by-side comparison, including studies and outcomes, allowing me to prescribe with confidence.

Other times it resolved debates in real time. One patient’s blood test raised the question of whether his respiratory acidosis, a condition in which the lungs cannot adequately clear carbon dioxide, was acute or chronic. In this patient, the evidence wasn’t clear. ChatGPT laid out the physiology step by step, giving us a common framework and reference to work from. The team reached consensus and avoided a diagnostic misstep.

ChatGPT even changed how I teach. When an intern and I disagreed on the definition of unstable angina — a type of chest pain caused by reduced blood flow to the heart — we asked ChatGPT together. It pulled from reliable sources, laying out the definition from the American College of Cardiology and the European Society of Cardiology. What could have been a top-down correction became an active learning moment.

These experiences convinced me that future clinicians should be training on how to use AI effectively, including what information to enter, how to frame questions and how to filter outputs. My medical experience helps me know which questions to ask, which answers to trust and which suggestions to dismiss. We need to ensure that other clinicians — not just doctors but also nurses, pharmacists and community health workers who already carry so much of the clinical load — can do the same with AI tools.

As these tools improve, they must be made trustworthy, safe and responsive to human needs. Privacy and security are paramount. “Hallucinations” — confident but wrong answers — can occur, and AI companies must do more to reduce them. Physicians, too, can take specific steps to minimize errors. My primary strategy is to ask ChatGPT for references for each of its recommendations. For any information that is consequential to my decision, I then look at the original source. This can be a bit tedious, but it dramatically reduces the likelihood of an AI-enabled error.

Some people argue that ChatGPT will eventually replace physicians altogether. At some point in the future, maybe that will be true. But right now, what I know is ChatGPT did not replace my training or judgment. It undoubtedly enhanced both.

The post AI didn’t replace me as a doctor. It made me better. appeared first on Washington Post.


DNYUZ © 2026