Bioengineering Blog: Using Technology to Give Speech to the Speechless
Researchers develop a neuroprosthesis that translates brain signals into words.
Edward Chang, a UCSF neurosurgeon who leads the research, said it appears to be the first successful demonstration of direct decoding of full words from the brain activity of a paralyzed person unable to speak. The results were published in July in the New England Journal of Medicine.
Biotechnology engineers played important roles in the work, which shows promise for restoring communication by tapping into the brain’s natural speech machinery, Chang said in published reports. The results are the culmination of years of work by team members to map the cortical activity patterns associated with the vocal tract movements that produce each consonant and vowel. David Moses, a postdoctoral engineer in Chang’s lab and one of the paper’s authors, developed methods for decoding speech patterns in real time and worked on statistical language models to improve accuracy.
That work is impressive, but by necessity it was done with test participants who were able to speak. The researchers produced a model that mapped complex brain activity patterns to intended speech, but they did not know whether their models would work for someone unable to speak, nor whether the signals controlling the vocal tract would remain intact or useful after long periods of not using the vocal muscles.
The test subject—a man in his 30s—was surgically implanted with a high-density electrode array over his speech motor cortex. After recovery, the team recorded 22 hours of neural activity over several months as the subject attempted to say each of 50 vocabulary words.
Translation was accomplished by two other team members in a joint UCSF-UC Berkeley bioengineering Ph.D. program. Their custom neural network models picked up subtle patterns in brain activity to detect the subject’s attempts to speak and then identify the words.
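The idea of identifying attempted words from patterns of neural activity can be illustrated with a toy sketch. The data, vocabulary, channel count, and nearest-centroid classifier below are all hypothetical simplifications for illustration, not the team's actual neural network models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each vocabulary word is assumed to evoke a
# characteristic pattern of activity across 16 electrode channels.
VOCAB = ["hello", "water", "yes", "no"]
N_CHANNELS = 16
prototypes = {w: rng.normal(size=N_CHANNELS) for w in VOCAB}

def record_attempt(word, noise=0.5):
    """Simulate one noisy neural recording of an attempted word."""
    return prototypes[word] + rng.normal(scale=noise, size=N_CHANNELS)

# "Training": estimate a centroid per word from repeated attempts,
# standing in for the many hours of recordings the team collected.
centroids = {
    w: np.mean([record_attempt(w) for _ in range(20)], axis=0)
    for w in VOCAB
}

def decode(signal):
    """Classify a recording as the word with the nearest centroid."""
    return min(VOCAB, key=lambda w: np.linalg.norm(signal - centroids[w]))

# Evaluate on fresh simulated attempts.
trials = [(w, decode(record_attempt(w))) for w in VOCAB for _ in range(25)]
accuracy = sum(truth == guess for truth, guess in trials) / len(trials)
```

In the real system, deep neural networks took the place of this nearest-centroid rule and also had to detect *when* the subject was attempting to speak, a much harder problem than this sketch suggests.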
Their system decoded words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy. They also included a language model that provided an “auto-correct” function similar to those used in consumer speech recognition software.
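Language-model correction of this kind can be sketched as follows. The vocabulary, decoder confidences, and bigram probabilities below are invented for illustration; the study's actual language model and decoding algorithm differ, but the principle of rescoring noisy word guesses against likely word sequences is the same:

```python
import math

# Hypothetical per-step outputs of a word classifier: each step maps
# candidate words to the decoder's confidence in them.
decoder_steps = [
    {"how": 0.6, "now": 0.4},
    {"are": 0.45, "art": 0.55},   # decoder alone slightly prefers "art"
    {"you": 0.55, "ewe": 0.45},
]

# Toy bigram language model: P(next word | previous word).
bigram = {
    ("<s>", "how"): 0.3, ("<s>", "now"): 0.2,
    ("how", "are"): 0.9, ("how", "art"): 0.1,
    ("now", "are"): 0.2, ("now", "art"): 0.8,
    ("are", "you"): 0.9, ("are", "ewe"): 0.1,
    ("art", "you"): 0.3, ("art", "ewe"): 0.7,
}

def viterbi(steps, lm):
    """Return the word sequence maximizing decoder score times LM score."""
    # state: last word -> (log score, best sequence ending in that word)
    states = {"<s>": (0.0, [])}
    for step in steps:
        new_states = {}
        for prev, (logp, seq) in states.items():
            for word, p_dec in step.items():
                p_lm = lm.get((prev, word), 1e-6)
                score = logp + math.log(p_dec) + math.log(p_lm)
                if word not in new_states or score > new_states[word][0]:
                    new_states[word] = (score, seq + [word])
        states = new_states
    return max(states.values(), key=lambda s: s[0])[1]

greedy = [max(step, key=step.get) for step in decoder_steps]
corrected = viterbi(decoder_steps, bigram)
```

Here the greedy word-by-word guess would be "how art you," but rescoring with the bigram model corrects it to "how are you," because that sequence is far more probable as English.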
As the test continued, the team prompted the subject with questions such as “How are you?” and “Would you like some water?” The answers then appeared on screen as short sentences.
Chang and the team believe the results are a proof of concept with the potential to help others. There are many causes for losing the ability to speak, such as brainstem stroke and spinal cord injury. For such patients, communication is tedious and often arduous: they may be able to spell out words slowly using residual eye movements or muscle twitches picked up through a computer interface. The test subject, for example, communicates by pecking out words on a computer screen using a stylus attached to his head.
But the areas of the brain that control the muscles of the larynx, jaw, lips, and tongue are often unaffected in such cases. Team members said the next step is to test the system on more people with severe paralysis, and to increase both the vocabulary and the rate of speech.
John Kosowatz is senior editor.