Four years ago, Casey Harrell sang his last bedtime nursery rhyme to his daughter.
By then, A.L.S. had begun laying waste to Mr. Harrell’s muscles, stealing from him one ritual after another: going on walks with his wife, holding his daughter, turning the pages of a book. “Like a night burglar,” his wife, Levana Saxon, wrote of the disease in a poem.
But no theft was as devastating to Mr. Harrell, 46, as the fading of his speech. He had sung his last Whitney Houston song at karaoke. A climate activist, he had delivered his last unassisted Zoom presentation to fellow organizers.
Last July, doctors at the University of California, Davis, surgically implanted electrodes in Mr. Harrell’s brain to try to discern what he was trying to say. That made him the latest test subject in a daunting scientific quest, one that has attracted deep-pocketed firms like Elon Musk’s company Neuralink: connecting people’s brains to computers, potentially restoring their lost faculties. Doctors told him that he would be advancing the cause of science, but that he was not likely to reverse his fortunes.
Yet the results surpassed expectations, the researchers reported on Wednesday in The New England Journal of Medicine, setting a new bar for implanted speech decoders and illustrating the potential power of such devices for people with speech impairments.
“It’s very exciting,” said Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, who was not involved in Mr. Harrell’s case but has developed different speech implants. A device that just years ago “seemed like science fiction,” he said, is now “improving, getting optimized, so quickly.”
Mr. Harrell’s team sank into his brain’s outer layer four electrode arrays that looked like tiny beds of nails. That was double the number that had recently been implanted in the speech areas of someone with A.L.S., or amyotrophic lateral sclerosis, in a separate study. Each array’s 64 spikes picked up electric impulses from neurons that fired when Mr. Harrell tried to move his mouth, lips, jaw and tongue to speak.
Three weeks after surgery, scientists gathered in Mr. Harrell’s living room in Oakland, Calif., to “plug him in,” connecting the implant to a bank of computers with cables attached to two metal posts protruding from Mr. Harrell’s skull.
After only briefly training the computers to recognize Mr. Harrell’s speech, the implant began decoding what he intended to say from a 50-word vocabulary with 99.6 percent accuracy.
The device worked so well, so soon, that the scientists had to cut an initial session from their analysis: Halfway through trying to speak his first prompt aloud — “What good is that?” — a shaking, smiling Mr. Harrell crumpled into tears.
To the average listener, “what” and “good” had come out of Mr. Harrell’s mouth muddled and indecipherable. But to the electrodes tuned to individual neurons in Mr. Harrell’s brain, the words were perfectly clear. A screen in front of him displayed exactly what he had been trying to say.
The device had, in effect, made an end run around Mr. Harrell’s disease, relying not on his weakened facial muscles but rather on the parts of his motor cortex where he was first laying down the instructions for what to say.
“The key innovation was putting more arrays, with very precise targeting, into the speechiest parts of the brain we can find,” said Sergey Stavisky, a neuroscientist at the University of California, Davis, who helped lead the study.
By day two, the machine was ranging across an available vocabulary of 125,000 words with 90 percent accuracy and, for the first time, producing sentences of Mr. Harrell’s own making. The device spoke them in a voice remarkably like his own, too: Using podcast interviews and other old recordings, the researchers had created a deep fake of Mr. Harrell’s pre-A.L.S. voice.
“I’m looking for a cheetah,” came his second-ever unprompted line, a string of words so odd that researchers later shown a video of the session became convinced there had been a decoding error, said Dr. Leigh Hochberg, a neurologist with Brown University and the Department of Veterans Affairs who directs a network of clinical trials that included Mr. Harrell’s case.
For the doctors in the room, though, the line was an early signal that the implant could recognize even Mr. Harrell’s most idiosyncratic lines: His daughter Aya had just come home, dressed in a cheetah onesie, and her father wanted to take part in her fantasy. “Sweet daughter of mine,” he continued, “I have been waiting for this for a long time.”
As scientists continued training the device to recognize his sounds, it only got better. Over a period of eight months, the study said, Mr. Harrell came to utter nearly 6,000 unique words. The device kept up, sustaining an accuracy of 97.5 percent.
That exceeded the accuracy of many smartphone applications that transcribe people’s intact speech. It also marked an improvement on previous studies in which implants reached accuracy rates of roughly 75 percent, leaving one of every four words liable to misinterpretation.
And whereas devices like Neuralink’s help people move cursors across a screen, Mr. Harrell’s implant allowed him to explore the infinitely larger and more complex terrain of speech.
“It went from a scientific demonstration to a system that Casey can use every day to speak with family and friends,” said Dr. David Brandman, the neurosurgeon who operated on Mr. Harrell and led the study alongside Dr. Stavisky.
That leap was enabled in part by the types of artificial intelligence that power language tools like ChatGPT. At any given moment, Mr. Harrell’s implant picks up activity in an ensemble of neurons, translating their firing pattern into vowel or consonant units of sound. Computers then agglomerate a string of such sounds into a word, and a string of words into a sentence, choosing the output they deem likeliest to correspond to what Mr. Harrell has tried to say.
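The study’s own decoding software is not reproduced here; the sketch below is only a toy illustration of the pipeline’s general shape as described above. A stand-in classifier turns each window of neural firing rates into phoneme probabilities, and a two-word “lexicon” plus a crude language-model prior picks the likeliest word. Every function name, probability and vocabulary entry is invented for clarity.

```python
# Toy illustration (not the study's code): neural activity -> phoneme
# probabilities -> likeliest word, rescored by a simple language-model prior.
import numpy as np

PHONEMES = ["W", "AH", "T", "G", "UH", "D"]                    # toy phoneme inventory
VOCAB = {"what": ["W", "AH", "T"], "good": ["G", "UH", "D"]}   # toy lexicon
LM_PRIOR = {"what": 0.6, "good": 0.4}                          # stand-in language model

def decode_phoneme_probs(firing_rates: np.ndarray) -> np.ndarray:
    """Stand-in for a trained neural decoder: maps one time window of
    electrode firing rates (n_channels,) to a probability over phonemes."""
    rng = np.random.default_rng(int(firing_rates.sum()) % 2**32)  # deterministic per window
    logits = rng.normal(size=len(PHONEMES))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def score_word(word: str, windows: list[np.ndarray]) -> float:
    """Score a candidate word by how probable its phoneme sequence is under
    the per-window phoneme distributions, weighted by the language-model prior."""
    prob = LM_PRIOR[word]
    for i, phone in enumerate(VOCAB[word]):
        window = windows[min(i, len(windows) - 1)]
        prob *= decode_phoneme_probs(window)[PHONEMES.index(phone)]
    return prob

# Fake neural data: 3 time windows from 256 channels (4 arrays x 64 electrodes).
windows = [np.abs(np.random.default_rng(i).normal(size=256)) for i in range(3)]
best = max(VOCAB, key=lambda w: score_word(w, windows))
print("decoded word:", best)
```

The real system works over a vocabulary of 125,000 words rather than two, but the basic move is the same: commit to the sentence whose sound units best explain the recorded neural activity.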
I interviewed Mr. Harrell twice in recent days. In between long pauses, during which the computers wove his sounds into sentences and he adjusted words here or there on a screen before prompting them to be spoken aloud, Mr. Harrell described the differences in his decoded voice.
It talked more formally than he used to, given the system’s proclivity for complete sentences. And the research team had needed to nudge his A.I. tool to better recognize uncommon phrases he used all the time. (One was “asset manager”: Mr. Harrell’s organizing work is focused on calling out the complicity of companies like BlackRock and Vanguard in the climate crisis.)
But the new persona also wakened parts of Mr. Harrell that had long lain dormant. He and Ms. Saxon started bantering again. Just as speaking a foreign language can enable people to express otherwise buried parts of their personalities, Mr. Harrell said, his decoder gave him back old elements of himself, even if they had become slightly changed in transit.
And sometimes, he said, the machine emulated the old him.
“I have many different words that sound exactly like how I was saying them,” he told me. “For example, ‘What up?’” Mr. Harrell smiled as the decoder uttered his old phrase, in his old voice. “I love that one.”
In stretching the boundaries of what he was able to get across in conversation, the implant also changed what others could say to him, Mr. Harrell said.
He could now tell Aya, 5, that he loved her. She, in turn, shared more with him, knowing that she would understand her father’s responses.
Visiting health workers who once seemed to take his impaired speech to mean he was stupid and hard of hearing — he is neither — now speak at normal volumes and touch him more carefully, Mr. Harrell said. That it had taken brain surgery to effect that change angered him, he said, “but I would rather let it go.”
The implant allowed Mr. Harrell to fantasize, too, about picking up the shards of a shattered social life. He could reach back out to old friends who had drifted away, and who he worried were too ashamed to get back in touch. This time, Mr. Harrell said, he could “connect with them in a way that makes them where they are at” — he meant “meets,” not “makes,” he corrected quickly — rather than on the wordless terrain that had for so long unsettled them.
“It allows me to forgive them,” he said. “I want to be able to tell them that it is OK, and that they can make amends now.”
Whether the same implant would prove as helpful to more severely paralyzed people is unclear. Mr. Harrell’s speech had deteriorated, but not disappeared.
And for all its utility, the technology cannot mitigate the crushing financial burden of trying to live and work with A.L.S. Insurance will pay for Mr. Harrell’s caregiving needs only if he goes on hospice care, or if he stops working and becomes eligible for Medicaid, Ms. Saxon said. That bind, she added, drives others with A.L.S. to give up trying to extend their lives.
Those very incentives also make it likelier that people with disabilities will become poor, putting access to cutting-edge implants even further out of their reach, said Melanie Fried-Oken, a neurologist at Oregon Health & Science University.
For Mr. Harrell, living in a world capable of connecting computers to brains but not of addressing the financial precarity of those who need them most has proved troubling. “Very lucky, and very angry,” he pronounced himself.
In the interviews, Mr. Harrell described working more productively and independently since the surgery. That, he said, was a source of pride, as well as another reason to hasten efforts to make implants more widely accessible.
But when he turns on the machine each morning, he gives it a test sentence drawn not from the work emails he is preparing to write but from the songs he would someday like to sing again.
Scientists are working to make that possible. Until then, Mr. Harrell contents himself with speaking the lyrics. Recently, he chose a line from an old song by the band Chicago that he likes reciting to his wife: “If you leave me now,” his voice said through the speakers, “you will take away the biggest part of me.”