As a young, idealistic medical student in the 2000s, I thought my future job as a doctor would always be safe from artificial intelligence.
At the time it was already clear that machines would eventually outperform humans at the technical side of medicine. Whenever I searched Google with a list of symptoms from a rare disease, for example, the same abstruse answer that I was struggling to memorize for exams reliably appeared within the first few results.
But I was certain that the other side of practicing medicine, the human side, would keep my job safe. This side requires compassion, empathy and clear communication between doctor and patient. As long as patients were still composed of flesh and blood, I figured, their doctors would need to be, too. The one thing I would always have over A.I. was my bedside manner.
When ChatGPT and other large language models appeared, however, I saw my job security go out the window.
These new tools excel at medicine’s technical side — I’ve seen them diagnose complex diseases and offer elegant, evidence-based treatment plans. But they’re also great at bedside communication, crafting language that convinces listeners that a real, caring person exists behind the words. In one study, ChatGPT’s answers to patient questions were rated as more empathetic (and also of higher quality) than those written by actual doctors.
You might find it disturbing that A.I. can have a better bedside manner than humans. But the reason it can is that in medicine — as in many other areas of life — being compassionate and considerate involves, to a surprising degree, following a prepared script.
I began to understand this in my third year of medical school, when I participated in a teaching session on how to break bad news to patients. Our teacher role-played a patient who had come to receive the results of a breast biopsy. We medical students took turns telling the patient that the biopsy showed cancer.
Before that session, I thought breaking such news was the most daunting aspect of patient care and the epitome of medicine’s human side. Delivering bad news means turning a pathologist’s technical description of flesh under the microscope into an everyday conversation with the person whose flesh it is. I presumed that all it required of me was to be a human and to act like it.
But the process turned out to be much more technical than I had expected. The teacher gave us a list of dos and don’ts: Don’t clobber the patient over the head with the news right when you walk in the room. But do get to the point relatively quickly. When delivering the diagnosis, don’t hide behind medical terms like “adenocarcinoma” or “malignancy” — say “cancer.” Once the news is delivered, pause for a moment to give the patient a chance to absorb it. Don’t say phrases like “I’m sorry,” since the diagnosis isn’t your fault. Consider using an “I wish” line, as in, “I wish I had better news.” Ask what the patient knows about cancer and provide information, since many people know little other than that it is bad.
I initially recoiled at the idea that compassion and empathy could be choreographed like a set of dance steps marked and numbered on the floor. But when it was my turn to role-play the doctor, following the memorized lines and action prompts felt completely natural. To my surprise, surrendering my humanity to a script made the most difficult moment in medicine feel even more human.
Suddenly the technical and human sides of medicine didn’t seem so distinct after all. Somehow the least scientific thing I learned in medical school turned out to be the most formulaic.
In the years since, I’ve recited versions of the “bad news” script to scores of patients while working as an emergency room doctor. For patients and their families, these conversations can be life-changing, yet for me they are just another day at work — a colossal mismatch in emotion. The worse the prognosis, the more eagerly I reach for those memorized lines to guide me. During the brief minutes after I learn the diagnosis, before returning to the patient’s room, I rehearse the conversation, plan my approach and make sure to have a tissue box nearby.
Until A.I. completely upends health care (and my career), doctors will have to work in tandem with the technology. A.I. can help us more efficiently write notes in medical charts. And some doctors are already using A.I.-generated lines to better explain complex medical concepts or the reasoning behind treatment decisions to patients.
People worry about what it means to be a human being when machines can imitate us so accurately, even at the bedside. The truth is that prewritten scripts have always been deeply woven into the fabric of society. Be it greetings, prayer, romance or politics, every aspect of life has its dos and don’ts. Scripts — what you might call “manners” or “conventions” — lubricate the gears of society.
In the end, it doesn’t actually matter if doctors feel compassion or empathy toward patients; it only matters if they act like it. In much the same way, it doesn’t matter that A.I. has no idea what we, or it, are even talking about. There are linguistic formulas for human empathy and compassion, and we should not hesitate to use good ones, no matter who — or what — is the author.