
Brain-computer interface allows man with ALS to ‘speak’ again

In a clinical trial and study supported by Brown scientists and alumni, a participant regained nearly fluent speech using a brain-computer interface that translates brain signals into speech with up to 97% accuracy.

PROVIDENCE, R.I. [Brown University] — Scientists with the BrainGate research consortium have developed a brain-computer interface that translates brain signals into speech with up to 97% accuracy, offering a significant breakthrough for individuals with speech impairments due to conditions like amyotrophic lateral sclerosis.

The technology uses sensors implanted in the brain to interpret neural signals when a user attempts to speak. These signals are then converted into text, which is read aloud by a computer.

The work is described in a new study in the New England Journal of Medicine published on Wednesday, Aug. 14, that was led by neurosurgeon David Brandman and neuroscientist Sergey Stavisky, both of whom are Brown University alumni and faculty members at UC Davis Health.

“Our BCI technology helped a man with paralysis to communicate with friends, families and caregivers,” Brandman said. “Our paper demonstrates the most accurate speech neuroprosthesis ever reported.”

ALS, also known as Lou Gehrig's disease, affects nerve cells controlling muscle movement, leading to the gradual loss of mobility and speech. BCI technology aims to restore communication for those who have lost the ability to speak due to paralysis or neurological disorders.

The system allowed Casey Harrell, a 45-year-old man with ALS, to communicate what he intended to say within minutes of activation. The powerful moment brought tears to the eyes of Harrell and his family. Reflecting on his experience with the technology, Harrell described the impact that regaining the ability to communicate could have on others facing similar challenges.

“Not being able to communicate is so frustrating and demoralizing. It is like you are trapped,” Harrell said. “Something like this technology will help people back into life and society.”

The study is part of the BrainGate clinical trial, directed by Dr. Leigh Hochberg, a critical care neurologist and a professor at Brown University’s School of Engineering who is affiliated with the University’s Carney Institute for Brain Science.

“Casey and our other BrainGate participants are truly extraordinary,” Hochberg said. “They deserve tremendous credit for joining these early clinical trials. They do this not because they’re hoping to gain any personal benefit, but to help us develop a system that will restore communication and mobility for other people with paralysis.”

It is the latest in a series of advances in brain-computer interfaces from the BrainGate consortium, which for several years has been developing systems that enable people to generate text by decoding the user’s intended speech. Last year, the consortium described how a brain-computer interface it developed enabled a clinical trial participant who had lost the ability to speak to create text on a computer at rates approaching the speed of regular speech, just by thinking of saying the words.

“The field of brain computer interface has come remarkably far in both precision and speed,” said John Ngai, director of the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies® Initiative (The BRAIN Initiative®), which funded earlier phases of the BrainGate consortium. “This latest development brings technology closer to helping people, ‘locked in’ by paralysis, regain their ability to communicate with friends and loved ones, and enjoy the best quality of life possible.”

In July 2023, the team at UC Davis Health implanted the BCI device, consisting of four microelectrode arrays, into Harrell’s left precentral gyrus, a brain region responsible for coordinating speech. The arrays record brain activity from 256 cortical electrodes and detect his attempts to move his muscles and talk.

“We are recording from the part of the brain that’s trying to send these commands to the muscles,” Stavisky said. “We are basically listening into that, and we’re translating those patterns of brain activity into a phoneme — like a syllable or the unit of speech — and then the words they’re trying to say.”
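To make that phoneme-then-words flow concrete, here is a minimal Python sketch of the general idea. It is an illustration only, not the BrainGate/UC Davis decoder: the phoneme set, the random readout weights and the one-entry lexicon are all invented placeholders standing in for trained models.

import numpy as np

PHONEMES = ["HH", "EH", "L", "OW", "_"]  # tiny hypothetical phoneme set; "_" marks silence

def decode_phonemes(neural_features):
    """Map each time bin of neural features to its most likely phoneme.
    `neural_features` stands in for binned activity from the 256 recording
    electrodes; the linear readout weights here are random placeholders."""
    rng = np.random.default_rng(0)
    readout = rng.normal(size=(neural_features.shape[1], len(PHONEMES)))
    scores = neural_features @ readout            # shape: (time bins, phonemes)
    return [PHONEMES[i] for i in scores.argmax(axis=1)]

def phonemes_to_words(phoneme_seq):
    """Collapse repeats and silence, then look the result up in a toy lexicon."""
    collapsed = [p for i, p in enumerate(phoneme_seq)
                 if p != "_" and (i == 0 or p != phoneme_seq[i - 1])]
    lexicon = {("HH", "EH", "L", "OW"): "hello"}  # invented single-entry lexicon
    return lexicon.get(tuple(collapsed), "<unknown>")

# Simulated 20 time bins of activity across 256 electrodes; with random weights
# this prints "<unknown>", whereas a trained readout would yield real words.
features = np.random.default_rng(1).normal(size=(20, 256))
print(phonemes_to_words(decode_phonemes(features)))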

BCI allows man to ‘speak’
A new brain-computer interface translates brain signals into speech with up to 97% accuracy, the most accurate system of its kind.

The study reports on 84 data collection sessions over 32 weeks. In total, Harrell used the speech BCI in self-paced conversations for over 248 hours to communicate in person and over video chat. The system showed decoded words on a screen and read them aloud in a voice synthesized from Harrell’s pre-ALS voice samples.

In the first session, the system achieved 99.6% word accuracy with a 50-word vocabulary in just 30 minutes. In another session with a vocabulary expanded to 125,000 words, the system achieved 90.2% accuracy after an additional 1.4 hours of training data. After continued data collection, the BCI has maintained 97.5% accuracy.
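For context on how figures like these are typically calculated, the short Python sketch below computes word accuracy as one minus the word error rate, which counts substitutions, insertions and deletions against the cued sentence. The example sentences are invented and do not come from the study.

def word_error_rate(reference, hypothesis):
    """Edit distance over words (substitutions, insertions, deletions)
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "i would like some water please"   # invented cued sentence
decoded = "i would like some water"            # invented decoder output
accuracy = (1 - word_error_rate(reference, decoded)) * 100
print(f"word accuracy: {accuracy:.1f}%")       # one missed word out of six -> 83.3%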

“At this point, we can decode what Casey is trying to say correctly about 97% of the time, which is better than many commercially available smartphone applications that try to interpret a person’s voice,” Brandman said. “This technology is transformative because it provides hope for people who want to speak but can’t.”

The research was funded in part by the National Institutes of Health.

This story was adapted from a news release published by Nadine Yehya at UC Davis Health.

CAUTION: Investigational device. Limited by federal law to investigational use.