Washington: He has not been able to speak since 2003, when he was paralysed at age 20 by a severe stroke after a terrible car crash.
Now, in a scientific milestone, researchers have tapped into the speech areas of his brain - allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain transmit signals to a computer, which decodes them into words displayed on a screen.
His first recognisable sentence, researchers said, was, “My family is outside.”
The achievement, published on Wednesday in the New England Journal of Medicine, could eventually help many patients with conditions that steal their ability to talk.
“This is farther than we’ve ever imagined we could go,” said Melanie Fried-Oken, a professor of neurology and pediatrics at Oregon Health & Science University, who was not involved in the project.
That could include people with brain injuries or conditions like amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease) or cerebral palsy, in which patients have insufficient muscle control to speak.
“The urgency can’t be overstated,” said Dr. Leigh Hochberg, who directs a project called BrainGate that implants smaller electrodes to read signals from individual neurons; it recently decoded a paralysed patient’s attempted handwriting motions.
“It’s now only a matter of years,” he said, “before there will be a clinically useful system that will allow for the restoration of communication.”
Three years ago, when Pancho, now 38, agreed to work with neuroscience researchers, they were unsure if his brain had even retained the mechanisms for speech.
“That part of his brain might have been dormant, and we just didn’t know if it would ever really wake up in order for him to speak again,” said Dr. Edward Chang, chairman of neurological surgery at University of California, San Francisco, who led the research.
The team implanted a rectangular sheet of 128 electrodes, designed to detect signals from speech-related sensory and motor processes linked to the mouth, lips, jaw, tongue and larynx. In 50 sessions over 81 weeks, they connected the implant to a computer by a cable attached to a port in Pancho’s head, and asked him to try to say words from a list of 50 common ones he helped suggest, including “hungry,” “music” and “computer.”
As he did, the electrodes transmitted his brain signals to a form of artificial intelligence that tried to recognise the intended words.
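The researchers have not published their model as code here, but the general idea - mapping multichannel electrode recordings to the most likely word in a fixed 50-word vocabulary - can be sketched with a toy template-matching classifier. The electrode count matches the study; the vocabulary subset, window length and “templates” are illustrative stand-ins for a trained neural network, not the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["hungry", "music", "computer", "family", "outside"]  # subset of the 50-word list
N_ELECTRODES = 128   # size of the implanted array in the study
WINDOW = 20          # hypothetical number of time samples per speech attempt

# Hypothetical per-word activity patterns, standing in for a trained model.
templates = {w: rng.normal(size=(N_ELECTRODES, WINDOW)) for w in VOCAB}

def decode(signal: np.ndarray) -> str:
    """Return the vocabulary word whose template best matches the recorded signal."""
    scores = {w: float(np.sum(signal * t)) for w, t in templates.items()}
    return max(scores, key=scores.get)

# Simulate an attempt to say "music": the word's pattern plus neural noise.
attempt = templates["music"] + 0.5 * rng.normal(size=(N_ELECTRODES, WINDOW))
print(decode(attempt))  # → music
```

The real system learned its mapping from Pancho’s own recorded attempts rather than using fixed templates, but the input (128-channel signals) and output (one of 50 words, with a confidence) have the same shape as in this sketch.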
“Our system translates the brain activity that would have normally controlled his vocal tract directly into words and sentences,” said David Moses, a postdoctoral engineer who developed the system with Sean Metzger and Jessie R. Liu, graduate students. The three are lead authors of the study.
Pancho (who asked to be identified only by his nickname to protect his privacy) also tried to say the 50 words in 50 distinct sentences like “My nurse is right outside” and “Bring my glasses, please” and in response to questions like “How are you today?”
His answer, displayed on-screen: “I am very good.”
In nearly half of the 9,000 times Pancho tried to say single words, the algorithm got it right. When he tried saying sentences written on the screen, it did even better.
By funneling algorithm results through a kind of autocorrect language-prediction system, the computer correctly recognised individual words in the sentences nearly three-quarters of the time and perfectly decoded entire sentences more than half the time.
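The “autocorrect” step can be illustrated with a toy example built around one of the article’s own confusions (“Hungry how am you?” for “Hello how are you?”). The per-slot decoder probabilities and the tiny bigram language model below are invented for illustration; the actual system used a far larger model, but the principle - combining decoder confidence with word-sequence plausibility - is the same.

```python
import itertools
import math

# Hypothetical per-word probabilities from the neural decoder for each slot;
# taking the top word in each slot alone would read "hungry how am you".
slots = [
    {"hungry": 0.6, "hello": 0.4},
    {"how": 1.0},
    {"am": 0.55, "are": 0.45},
    {"you": 1.0},
]

# A tiny, made-up bigram language model: P(word | previous word).
bigram = {
    ("<s>", "hello"): 0.3, ("<s>", "hungry"): 0.05,
    ("hello", "how"): 0.4, ("hungry", "how"): 0.01,
    ("how", "are"): 0.5,   ("how", "am"): 0.01,
    ("are", "you"): 0.6,   ("am", "you"): 0.05,
}

def score(words):
    """Combine decoder confidence with language-model plausibility (log domain)."""
    total, prev = 0.0, "<s>"
    for slot, w in zip(slots, words):
        total += math.log(slot[w]) + math.log(bigram.get((prev, w), 1e-6))
        prev = w
    return total

best = max(itertools.product(*(s.keys() for s in slots)), key=score)
print(" ".join(best))  # → hello how are you
```

Even though the decoder alone prefers “hungry” and “am”, the language model’s strong preference for “hello how” and “are you” flips the final answer - which is why, per the study, adding this step lifted sentence-level accuracy substantially.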
“To prove that you can decipher speech from the electrical signals in the speech motor area of your brain is groundbreaking,” said Fried-Oken, whose own research involves trying to detect signals using electrodes in a cap placed on the head, not implanted.
After a recent session, observed by The New York Times, Pancho, wearing a black fedora over a white knit hat to cover the port, smiled and tilted his head slightly with the limited movement he has. In bursts of gravelly sound, he demonstrated a sentence composed of words in the study: “No, I am not thirsty.”
A life-changing experience
In interviews over several weeks for this article, he communicated through email exchanges using a head-controlled mouse to painstakingly type key-by-key, the method he usually relies on.
The brain implant’s recognition of his spoken words is “a life-changing experience,” he said.
“I just want to, I don’t know, get something good, because I always was told by doctors that I had 0 chance to get better,” Pancho typed during a video chat from the Northern California nursing home where he lives.
Later, he emailed: “Not to be able to communicate with anyone, to have a normal conversation and express yourself in any way, it’s devastating, very hard to live with.”
During research sessions with the electrodes, he wrote, “It’s very much like getting a second chance to talk again.”
Pancho was a healthy field worker in California’s vineyards until a car crash after a soccer game one summer Sunday, he said. After surgery for serious damage to his stomach, he was discharged from the hospital, walking, talking and thinking he was on the road to recovery.
But the next morning, he was “throwing up and unable to hold myself up,” he wrote. Doctors said he experienced a brainstem stroke, apparently caused by a post-surgery blood clot.
A week later, he woke up from a coma in a small, dark room. “I tried to move, but I couldn’t lift a finger, and I tried to talk, but I couldn’t spit out a word,” he wrote. “So, I started to cry, but as I couldn’t make any sound, all I made were some ugly gestures.”
It was terrifying. “I wished I didn’t ever come back from the coma I was in,” he wrote.
For years, Pancho communicated by spelling out words on a computer using a pointer attached to a baseball cap, an arduous method that allowed him to type about five correct words per minute.
“I had to bend/lean my head forward, down, and poke a key letter one-by-one to write,” he emailed.
Last year, the researchers gave him another device involving a head-controlled mouse, but it is still not nearly as fast as the brain electrodes in the research sessions.
Similar phonetic sounds
Through the electrodes, Pancho communicated 15 to 18 words per minute. That was the maximum rate the study allowed because the computer waited between prompts. Chang said faster decoding is possible, although it is unclear whether it will approach the pace of typical conversational speech: about 150 words per minute. Speed is a key reason the project focuses on speaking, tapping directly into the brain’s word production system rather than the hand movements involved in typing or writing.
“It’s the most natural way for people to communicate,” he said.
Pancho’s buoyant personality has helped the researchers navigate challenges, but it has also occasionally made speech recognition uneven.
“I sometimes can’t control my emotions and laugh a lot and don’t do too good with the experiment,” he emailed.
Chang recalled times when, after the algorithm successfully identified a sentence, “you could see him visibly shaking and it looked like he was kind of giggling.” When that happened or when, during the repetitive tasks, he’d yawn or get distracted, “it didn’t work very well because he wasn’t really focused on getting those words. So, we’ve got some things to work on because we obviously want it to work all the time.”
The algorithm sometimes confused words with similar phonetic sounds, identifying “going” as “bring,” “do” as “you,” and words beginning with “F” - “faith,” “family,” “feel” - as a V-word, “very.”
Longer sentences needed more help from the language-prediction system. Without it, “How do you like my music?” was decoded as “How do you like bad bring?” and “Hello how are you?” became “Hungry how am you?”
But in sessions that the pandemic interrupted for months, accuracy improved, Chang said, both because the algorithm learned from Pancho’s efforts and because “there’s definitely things that are changing in his brain,” helping it “light up and show us the signals that we needed to get these words out.”
Before his stroke, Pancho had attended school only up to sixth grade in his native Mexico. With remarkable determination, he has since earned a high school diploma, taken college classes, received a web developer certificate and begun studying French.
“I think the car wreck got me to be a better person, and smarter too,” he emailed.

Decade of research
With his restricted wrist movement, Pancho can manoeuvre an electric wheelchair, pressing the joystick with a stuffed sock tied around his hand with rubber bands. At stores, he’ll hover near something until cashiers decipher what he wants, like a cup of coffee.
“They place it in my wheelchair, and I bring it back to my home so I can get help drinking it,” he said. “The people here at the facility find themselves surprised, they always asked me, ‘HOW DID YOU BUY THAT, AND HOW DID YOU TELL THEM WHAT YOU WANTED!?’”
He also works with other researchers using the electrodes to help him manipulate a robotic arm.
His twice-weekly speech sessions can be difficult and exhausting, but he is always “looking forward to wake up and get out of bed every day, and wait for my UCSF people to arrive.”
The speech study is the culmination of over a decade of research, in which Chang’s team mapped brain activity for all vowel and consonant sounds and tapped into the brains of healthy people to produce computerized speech.
Researchers emphasize that the electrodes are not reading Pancho’s mind, but detecting brain signals corresponding to each word he tries to say.
“He is thinking the word,” Fried-Oken said. “It’s not random thoughts that the computer is picking up.”
Chang said “in the future, we might be able to do what people are thinking,” which raises “some really important questions about the ethics of this kind of technology.” But this, he said, “is really just about restoring the individual’s voice.”
In newer tasks, Pancho mimes words silently and spells out less common words using the military alphabet: “delta” for “d,” “foxtrot” for “f.”
“He is truly a pioneer,” Moses said.
The team also wants to engineer implants with greater sensitivity and to make the system wireless and fully implantable, avoiding the infection risk of a port through the skin, Chang said.
As more patients participate, scientists might find individual brain variations, Fried-Oken said, adding that if patients are tired or ill, the intensity or timing of their brain signals might change.
“I just wanted to somehow be able to do something for myself, even a tiny bit,” Pancho said, “but now I know, I’m not doing it just for myself.”