DNYUZ

Patients Are Using Chatbots to Fight Medical Bills, With Mixed Results

April 8, 2026
in News

As chatbots become a fixture in everyday medical care, patients are using them not only to make lists of questions for doctors’ visits or decipher test results, but increasingly to pick apart the financial paperwork that follows, including challenging medical bills.

When Jackie Davalos, 34, received a notice from a collections agency that she owed $22,604 to a hospital for an emergency room visit after she fell down some stairs two years earlier, her partner, Walter Kerr, used the chatbot Claude to help challenge the hospital’s charges.

Mr. Kerr, 39, an executive at a global development nonprofit, said the chatbot had proved a useful adviser, “but not a perfect one.”

At a time when health care costs top Americans’ financial worries, more patients are turning to chatbots like Claude or ChatGPT as a no-cost, do-it-yourself way to navigate problems with medical bills or insurance coverage. The trend is significant enough that the American Hospital Association has alerted its members that patients are increasingly using artificial intelligence to help dispute bills.

Health care providers and insurers have used A.I. for some time, in ways that some people have suggested are intended to maximize charges and deny claims.

Chatbots might seem to offer patients a way to fight back. But critics warn that the tools can dispense flawed advice, especially to users who are less experienced in using A.I., or who do not have much knowledge about the health care system. And they note that chatbots are not bound by the federal privacy protections of the Health Insurance Portability and Accountability Act, or HIPAA.

While chatbots can explain patients’ rights and identify opportunities for relief, critics contend that they often fail to ask for crucial context or they obscure important solutions, leaving patients to fill in the blanks.

For their part, the technology companies argue their current models are more sophisticated and address many of the shortcomings the critics point to.

OpenAI, the maker of ChatGPT, said its new models were “trained to hedge more, browse more and proactively ask for additional details when needed.” (The New York Times has sued OpenAI, claiming copyright infringement of news content. OpenAI denies the claims.)

‘We Might Actually Win’

Ms. Davalos said she never received a bill from George Washington University Hospital in Washington, where she was treated and, according to the records, incorrectly listed as uninsured. Ms. Davalos, who is training to be a pastry chef but at the time was a journalist working for Bloomberg, feared the debt might derail the dream she and Mr. Kerr had of buying a home.

At first, the couple tried to dispute the bill with the hospital’s parent company, Universal Health Services, which manages its billing.

In response, Mr. Kerr said, the company removed a medication charge, but said that Ms. Davalos would have to pay the balance.

Last July, Mr. Kerr decided to upload Ms. Davalos’s billing and medical records to Claude. He asked the chatbot to identify whether they might have any further recourse.

Claude came up with several suggestions, Mr. Kerr said, including that the hospital might have failed to meet some legal requirements regarding debt and insurance.

The chatbot’s suggestions, Mr. Kerr said, encouraged him to think that he and Ms. Davalos might have grounds to keep fighting the hospital bill.

“For the first time,” he said, he felt that “we might actually win.”

Using many of the chatbot’s arguments, Mr. Kerr wrote a letter to executives at the hospital and Universal Health Services, urging them to drop the charges. Shortly after, the hospital waived the entire bill.

Although Mr. Kerr prevailed in the dispute, the chatbot’s advice may not have been entirely correct. Studies suggest that chatbots often err when answering legal questions.

After reviewing a summary of the dispute, Ariel Levinson-Waldman, the founding president of Tzedek DC, a nonprofit legal aid center in Washington, said some of Claude’s analysis was correct. But the chatbot misunderstood the debt and insurance laws it was citing, and failed to inform the couple of other avenues that might be open to them.

For example, Mr. Levinson-Waldman said, some of the legal requirements that Claude suggested the hospital might not have complied with applied to insurers or third-party collectors, not to hospitals. But he could not draw further conclusions without reviewing more records, he said.

George Washington University and Universal Health Services said federal privacy laws limited what they could disclose about Ms. Davalos’s bill. But Susan LaRosa, a spokeswoman for the hospital, acknowledged that when Ms. Davalos was admitted to the hospital, “a clerical error” had been made.

The hospital eliminated the debt once it was “made aware of the updated information,” Ms. LaRosa said, and Ms. Davalos’s credit was not affected.

Maria English, a spokeswoman for Universal Health Services, noted that the debt was eliminated once “all information about the situation was received and communications with the patient were completed” — a resolution achieved only after Mr. Kerr escalated the dispute.

Anthropic, the maker of Claude, declined to comment on the chatbot’s performance.

Confusing Advice

Getting useful answers from a chatbot often requires knowing how to give the chatbot proper instructions or having enough knowledge about health insurance to supply the right context, said Andrew Cohen, an attorney at the nonprofit firm Health Law Advocates. These requirements can leave many people at a disadvantage.

Michelle Maziar, 46, an immigration policy consultant in Atlanta, tried ChatGPT last July to help recover a $3,140 payment she was owed by her insurance company, Anthem.

In March 2023, Anthem had reversed its initial denial of her claim for coverage of fertility services, but the payment never came. She thought the chatbot might be able to help. But ChatGPT mostly proposed steps she had already tried, including asking to speak with a manager, or gave her advice that sounded like another dead end, such as contacting her state insurance commissioner.

“It was deflating,” Ms. Maziar said.

Drained and unable to afford a lawyer, she put her dispute on hold.

Ms. Maziar recently repeated her ChatGPT query for The Times. Nicole Broadhurst, a professional patient advocate who reviewed the transcript of the exchange, agreed with much of the chatbot’s guidance.

But, she said, the bot had missed an important step: asking questions that could help it determine who oversaw Ms. Maziar’s insurance plan. Because her former employer, the City of Atlanta, is self-insured, contacting the state insurance commissioner, as ChatGPT suggested, would not be helpful.

Janey Kiryluik, a spokesperson for Anthem, said that an error had delayed Ms. Maziar’s payment but that it had now been issued in full, which Ms. Maziar confirmed.

Ms. Broadhurst noted that chatbots could excel at translating jargon and doing grunt work like combing through policy documents for key words, but that they often lacked the judgment needed for complex cases.

Even when A.I. gets the rules right, critics say, it can misdirect vulnerable patients.

Last July, Maria Vanegas, a single mother and community organizer, turned to ChatGPT after Medical City Dallas Hospital billed her $3,930 for an emergency visit, an amount that she could not afford to pay.

The chatbot first suggested that she audit the bill. Following that advice, Ms. Vanegas took steps to get an itemized statement. But the thought of a dispute felt overwhelming. “It was intimidating,” she said, “because I don’t speak any medical jargon.”

The chatbot next recommended seeking financial assistance — programs most hospitals offer to waive or discount bills for eligible patients — but Ms. Vanegas said the explanation of eligibility criteria was so technical she could not tell if she would qualify, and the hospital had not offered her any financial help.

Worn down by the hurdles, she said, she almost gave up. Then, near the end of the chatbot’s response, she saw a mention of Dollar For, a nonprofit that helps patients apply for financial assistance. She contacted the organization, which helped her get the bill waived.

Jared Walker, Dollar For’s founder, recently tested ChatGPT with a similar query, and the chatbot again listed financial assistance only as a secondary option. This, he said, downplayed a resource that even many middle-class households can qualify for.

Citing confidentiality, Emma Philips, a spokeswoman for Medical City Dallas Hospital, declined to comment on Ms. Vanegas’s case. But she said that patients receive documents about financial responsibility at registration and that they can consult hospital advisers about getting assistance.

A spokeswoman for OpenAI cited internal data showing that the company’s current chatbot models were more likely than earlier versions to ask follow-up questions when uncertain, and that they gave substantially better answers to health questions.

Privacy Risks

Patients who share their health records or bills with a chatbot risk exposing sensitive information to companies that have few legal guardrails about disclosure.

Unlike hospitals and insurers, chatbot companies are not bound by HIPAA, the health privacy law. They can change privacy policies at will, and information given to a chatbot is not legally protected the way a conversation with a doctor is, so it can be more easily turned over as part of discovery in a lawsuit or custody dispute.

OpenAI and Anthropic recently pledged they would not train their models on their users’ health information, and they would store this information separately. But both companies’ safeguards require opting in, and are currently restricted to paid subscribers or people on a waiting list, rather than the general public.

Jennifer King, a data privacy researcher at Stanford University, called the addition of safeguards an improvement, but she questioned why they were not applied across the board.

Improved, but Still Limited

In late 2024, Joel Bachar, 58, a server at a fine-dining restaurant in Charlotte, N.C., uploaded an insurance document to ChatGPT and asked why his health plan covered so little of his M.R.I. scan.

The chatbot offered no solutions, he recalled — it was “a dead end.” He called his health plan to question the amount, but ultimately paid the $1,170 balance. Caroline Landree, a spokeswoman for UnitedHealthcare, the insurer’s parent company, said that the claim was processed correctly and reflected the benefits in his policy.

When Mr. Bachar recently replicated his exchange with ChatGPT, the chatbot suggested potential options to lessen the bill, like asking for a discount to settle it quickly.

But the chatbot also showed its limits. Julien Nakache, chief executive of Granted Health, a specialized A.I. start-up that disputes bills and denials and that Mr. Bachar has hired for other cases, reviewed the exchange. In Mr. Bachar’s case, Mr. Nakache said, the chatbot claimed that the plan had applied the benefit correctly, but it had not gathered enough information to know whether that was the case, and it did not suggest checking the bill for errors.

An OpenAI representative said the company used doctors to help test its chatbots’ answers involving health care, including bills and insurance.

Other patients have also found that technology can hit a wall in dealing with an exhausting bureaucracy. After Mr. Kerr posted the details of his dispute with George Washington University Hospital on social media, he began helping others contest bills by using chatbots. Even so, some people gave up. Others are still in limbo, awaiting a response.

Success, Mr. Kerr said, often requires persistence, something “A.I. can’t solve for you.”

The post Patients Are Using Chatbots to Fight Medical Bills, With Mixed Results appeared first on New York Times.


DNYUZ © 2026
