Ben Riley discovered by accident that his dad hadn’t been telling the truth about his cancer.
He was sitting at the kitchen counter in his Austin home last summer, a bright new build with white walls and concrete floors, when he decided to peek at his dad’s MyChart portal. He idly scrolled through pages of lab results and doctor’s notes on his laptop until a sentence grabbed his attention.
“I was clear the window of treatment may close the longer he postpones,” the doctor wrote. “The natural history of his disease is death and debilitation.”
The note didn’t make sense. Ben knew that his 75-year-old father had chronic lymphocytic leukemia, a type of white blood cell cancer that is often slow-moving. But his dad, Joe Riley, had reassured his family that starting treatment was not urgent. He certainly hadn’t conveyed his doctor’s warning that he was headed toward a dangerous deadline.
Ben, panicked, quickly clicked through more records. The oncologist had been recommending treatment for 10 months. His pleas seemed to grow more desperate by the page. But Joe was convinced the drugs would do more harm than good.
“We discussed that treatment can slow down and possibly halt the progression of his C.L.L. which will give him more time to be with his family as he so desires,” another note read.
“He answered that he doesn’t plan on starting treatment even if his disease continues to progress.”
Ben knew better than to confront his dad, a retired neuroscientist who bristled at anyone questioning his intellectual judgment. He needed more information, a plan, to persuade Joe, who was — apparently — dying of cancer thousands of miles away in Seattle.
He was anxiously monitoring his dad’s patient portal, trying to decide what to do, when a new message popped up. Joe had sent his oncologist research he had done with A.I., the apparent evidence for his decision to refuse the treatment.
Jesus Christ, Ben thought. The morbid irony of the situation was not lost on him. A year earlier, he started a newsletter to help people make better decisions about when and how to use generative A.I. He wrote about how the tools had sent people into delusional spirals and helped a teenager end his life. Now, it appeared that A.I. had led his own father astray.
He texted his two siblings: “We need to talk.”
Ben, 49, was not particularly interested in A.I. until a few years ago. To him, the technology had seemed like fodder for sci-fi movies like “Her” and “Ex Machina.”
He was more interested in humans. After a brief stint working on Wall Street and then as a lawyer for the California Department of Justice, Ben read a book by a prominent cognitive scientist that made him change his career trajectory.
He began reading voraciously about subjects that could help him understand the human mind — neuroscience, linguistics, philosophy, anthropology — and considered himself a “self-taught cognitive scientist.” In 2015, he founded a nonprofit that aimed to train teachers in cognitive science to better understand how their students thought and learned.
The rise of generative A.I. changed his view of the technology, though. It offered a window into many of the questions he had devoted much of his career to: What makes us human? What is human thought?
He decided to start a newsletter, Cognitive Resonance, that would use cognitive science to “explain A.I. to the average Joe.”
His father was one of his first subscribers.
By that point, Joe was already well versed in A.I. It was no surprise to Ben. His father had always been an early adopter.
While other families were still listening to cassettes, their house had a C.D. player. On road trips, he and his siblings sat in the back seat of the family’s Dodge Caravan watching movies on a jury-rigged entertainment system Joe had built.
At home, they had a Commodore Amiga, a blocky personal computer that Joe “treated like a child,” Ben said.
That was, until Ben came home from school one day and the computer was gone. Authorities had seized it after Joe illegally hacked into the telephone network to connect to the “electronic bulletin-board system,” a precursor to the internet. He refused to apologize during his trial, Ben recalled, stubbornly insisting that “information wants to be free.”
In the late 1970s, Joe had been a promising young neuroscientist at Stony Brook University. But in his mid-30s, he was suddenly debilitated with a mysterious chronic illness that, on a good day, made him feel like he had the flu and, on a bad day, made him feel like his nervous system was on fire. Doctors speculated he had some kind of encephalitis but couldn’t do much to help.
No longer able to keep up with the demands of his job, he started relying on disability checks and funneling his insatiable curiosity into other pursuits: a newsletter about Sufi poetry, exhaustive research into the assassination of John F. Kennedy and the exploration of new technology.
So, when generative A.I. began gaining traction, Joe started experimenting. It became a point of common interest for Ben and Joe.
They debated whether the models could ever become truly sentient or how governments would rein in A.I., and occasionally clashed about the risks. Joe tended toward amazement while Ben was decidedly more skeptical: “Do you not worry about the dangers here as much as I do?” he asked his dad in an email in 2023. Joe didn’t seem worried.
He seemed to be in a “constant conversation” with A.I., said James Riley, Ben’s younger brother. He was particularly fond of Perplexity, a search engine powered by A.I. that prides itself on citing reputable sources and producing answers you can “actually trust,” according to the company’s C.E.O. (The New York Times sued Perplexity in December, accusing it of copyright infringement over its A.I. systems’ use of news content. The company has denied the claims.)
Joe often used the voice-to-text feature to mumble questions into the A.I. apps on his phone.
“Dad, this is kind of a lot of A.I.,” James remembered saying. Joe brushed it off as no different than Google.
Joe asked Perplexity for advice about his mortgage. He used it to check Seattle Mariners game times. He told it to summarize scientific research for his pet projects.
When he was diagnosed with cancer in 2024, he started asking about that too.
His doctor called it a “when it rains, it pours” situation.
Joe had just finished radiation treatment for early-stage lung cancer — which he had been diagnosed with simultaneously — when his C.L.L. symptoms ratcheted up: chills, muscle pain, exhaustion. It was time to start treatment, Dr. Eddie Marzbani, his oncologist at the Fred Hutch Cancer Center, told Joe at an August 2024 appointment.
The upside was that he had good options. In the last decade, a new class of “wonder drugs” had so revolutionized lymphoma treatment that some researchers felt certain the underlying science would one day win a Nobel Prize.
With medication, he could live years — if not a decade — before the C.L.L. re-emerged.
Joe respected his doctor, liked him, even. But decades of living with a chronic illness had made Joe skeptical of the medical system. He wanted to think about it.
The next time Dr. Marzbani saw Joe, something seemed to have shifted.
He came back convinced that he had developed Richter’s Transformation, a rare complication that occurs when a relatively docile cancer abruptly evolves into a more aggressive, punishing one. Worse, he was convinced the treatment Dr. Marzbani recommended would exacerbate the Richter’s, shortening his life.
Joe’s confidence perplexed Dr. Marzbani.
“He really had no signs or symptoms of that,” Dr. Marzbani said in an interview with The New York Times. “Nothing in terms of his laboratory studies that would suggest that, nothing based on his C.T. scans.”
Every appointment seemed to lapse into a predictable cycle: Joe raised Richter’s, Dr. Marzbani carefully reviewed all the reasons he didn’t have it and Joe agreed to go home and think about it.
Dr. Marzbani tried every strategy he could think of to change Joe’s mind. He offered different treatment options. He explained the drugs would give him more time with his family, which he knew Joe desperately wanted.
Eventually, he pointed out the flaw in Joe’s logic: Left untreated, most people with Richter’s die within six months of being diagnosed. “If you had Richter’s when you told me you had Richter’s, you’d be dead by now!” he pleaded.
None of it seemed to make a difference.
Though Dr. Marzbani didn’t know it, Joe was routinely asking questions about his cancer to several generative A.I. tools, which often struggle to give accurate medical advice. He told them to list the early signs of Richter’s, interpret his lab results and explain complicated research about the treatment his doctor recommended. He knew not to trust A.I. unilaterally. He often read the scientific papers the tools cited and — as best he could without medical training — tried to verify that they aligned with what the tools had said.
He came away feeling so confident in his understanding of the science that declining treatment seemed to be the obvious choice.
“The regular oncologist is a little annoyed with me,” Joe texted Ben around that time.
“I questioned his initial diagnosis and was proven right,” he added, though that wasn’t the case. “BTW, say what one will about A.I., it is amazing how much one can learn with a week or two of the right A.I. programs.”
By summer 2025, Joe had become much sicker. He had gained 80 pounds from steroids he was taking to manage his symptoms. Lymph nodes all over his body had swelled, including one on his neck that made it painful to move his head. His white blood cell count was 10 times higher than when Dr. Marzbani first started recommending treatment, a sign the cancer had rapidly spread.
Joe’s window for treatment was quickly closing. The more frail Joe became, the less likely he was to tolerate the medications. Dr. Marzbani decided to confront him.
“Why do you believe this?” he remembered asking Joe during one appointment. “Where’s this coming from?”
Joe sent him a research report he generated with Perplexity.
In the weeks after he saw that report in his father’s medical record, Ben’s concern morphed into anger. He said he felt like he and his father were living in separate realities with no “shared sense of what is true and false.”
After each doctor’s visit, Joe texted the family group chat a cancer update. Then, Ben would check the patient portal to read the details he left out.
Ben called the cancer center and pleaded with a nurse to add more information to his chart: “I know you can’t say anything,” he said. “But we can see his chart. If there are things we need to know, we will read them.”
Ben and his brother hadn’t yet agreed how to approach their father: James, a counselor, worried confrontation would drive him away. Ben didn’t think they had time for anything else.
One thing they agreed on was that Joe had proven himself an unreliable narrator. Now they needed to hear it directly: If their father was going to turn down treatment, how long did he have left to live?
So one hot July morning, Ben called his father and asked him to sign a waiver that would allow Dr. Marzbani to speak to the rest of the family.
When Joe refused, Ben felt his rage boil over. He yelled at his dad for basing life-or-death decisions on the Perplexity report, which could be “riddled with hallucinations.” Then Ben hung up on him.
The call only made Joe double down.
“The evidence is crystal clear,” Joe texted Ben shortly after, attaching one of the papers that Perplexity cited in the report, adding sarcastically, “Here is the ‘hallucination’.”
Ben opened the paper, which was overrun with medical jargon.
“I’m not going to pretend to be an oncologist,” he shot back.
The whole exercise felt ridiculous to Ben. He and his father — two people who had never been to medical school — were now arguing about cancer research. Meanwhile, his father was ignoring the advice of an actual expert.
“What am I doing?” he thought. “This is why we have doctors, human doctors.”
Then he remembered the disclaimers on many chatbots telling users to always double check the output. He pulled out his computer and, in a “righteous fury,” emailed two leading experts on Richter’s whose research was cited in the A.I.-generated report.
“I apologize for the out-of-the-blue email,” he wrote. “But my father’s condition is worsening rapidly and I am at a loss as to how to respond to his interpretation of the A.I. summary of oncology research.”
He attached the report to the email, which Dr. David Bond opened a few hours later from his office in Ohio. At first glance, it looked like a polished scientific report.
But the closer Dr. Bond read, the more illogical it became. The report made authoritative claims and, as evidence, cited studies that he thought were “only peripherally related to the topic.” It referenced percentages that appeared to be completely made up. The summary of Dr. Bond’s research was completely unrecognizable to him.
In a statement, a spokesman for Perplexity said the company remained steadfast in its “commitment to improving accuracy in the world’s best frontier A.I. models.”
Dr. Bond and the other study author both wrote back within hours, encouraging Joe to listen to his oncologist. That night, Ben called his dad again and, dusting off his attorney skills, presented the facts: Three doctors all independently agreed that the Perplexity report misled him.
“Do you really think you know more than all of them because of this stupid A.I. report?” Ben remembered asking.
“Yes,” Joe firmly responded.
Ben began to question whether it was possible to persuade someone that A.I. was fallible, something he had staked much of his new career on. “If I can’t convince my dad, am I going to be able to convince anyone?”
In the end, it was Joe’s failing health that finally pushed him to try treatment.
His legs were swollen and the skin on them was paper thin, giving way to sores that covered his calves. Sometimes sitting was so painful that he whimpered and cried out.
When the short walk between his bed and the brown recliner in his living room became too exhausting, he started sleeping in the chair. Taking care of himself had become all but impossible: Pans with days-old lentils sat in his sink and fruit flies swarmed the apartment.
When he had to leave home for a doctor’s appointment, he took the stairs very slowly, wincing at every step.
By the time Joe received his first course of cancer treatment in September — more than one year after Dr. Marzbani initially recommended it — the cancer cells had been allowed to spread unchecked for so long that killing them shocked his system, making him wheeze and shake intensely.
A few months earlier, Joe might have been able to withstand it. But now he felt too frail. After a few infusions, he told his doctor that he needed a break.
Ben flew to see his father about a week later.
That visit, they didn’t talk about A.I. at all. Instead, they sat in his living room and debated quantum mechanics. Ben wiped off the linoleum countertops and set fly traps. As he cleaned, Joe slept in his chair.
Ben didn’t wake him before he left. He scribbled a goodbye note on a yellow Post-it.
“Love you Pop! Thanks for a wonderful visit,” he wrote. “Avoid that open trash and we will defeat the flies! Talk to you when I’m back.”
A week before Christmas, Ben got a call from an apologetic police officer. He had found Joe during a welfare check. C.L.L. was listed as one of the official causes of death.
Roughly two weeks after Joe’s death, Ben was back in Austin, a plastic bin full of books he had cleaned out of his father’s condo beside him on the kitchen counter. Nearby was a condolence card from Dr. Marzbani: “I respected him greatly and will miss the banter.”
Ben felt particularly pessimistic about the state of A.I. It felt like he and other skeptics were screaming into a void to slow down and think carefully while the rest of the world just barreled ahead. Not only was he grieving his father’s death, but the past year had also forced him to question whether his professional north star — to help people make better decisions about A.I. — was futile.
He decided that even if it made no difference, he was going to write about his father’s death. He wanted a public record of who Joe Riley was and how A.I. had harmed him.
So he sat on his red patent-leather bar stool and started to write. The words came easily to him. As he typed, he thought about the death of Adam Raine — a teenager he had written about months earlier, who discussed his plans to end his own life with ChatGPT — and the Shakespearean tragedy that had made him a character in a similar story. A spokesman for Perplexity said the company was “deeply saddened by Mr. Riley’s loss.”
Ben didn’t try to oversimplify what happened: “I don’t want to overstate my case,” he wrote. “I don’t think A.I. killed my father.”
In a world where A.I. didn’t exist, maybe Joe — who was skeptical of doctors by default — would have refused treatment anyway. He had taken some convincing to try lung cancer treatment, too.
“Some of what was happening was about my father’s own psychology,” Ben said in an interview with The Times.
But A.I. wasn’t entirely blameless either. Joe was making decisions based on bad information packaged with the veneer of scientific expertise. It was the kind of misinformation that was virtually impossible for a lay person to spot, even for someone like Joe, who by all accounts was an ideal user.
He was tech savvy, had healthy amounts of skepticism, access to a doctor who was invested in his care.
And he had a son who was desperate, and better equipped than most, to change his mind.
“I will forever wonder whether my efforts came too late,” Ben wrote in his essay. “There’s nothing I can do to change the past, of course. But I can for damn sure keep working to raise the consciousness of others.”
In the three months since Ben published that post, four large tech companies have released new consumer health tools, encouraging users to upload their records and pepper A.I. with their medical questions. Perplexity was among them.
Teddy Rosenbluth is a Times reporter covering health news, with a special focus on medical misinformation.
The post He Warned About the Dangers of A.I. If Only His Father Had Listened. appeared first on New York Times.