For people with a complete loss of speech, such as patients with late-stage ALS, barriers stand between the signals firing in their brains and their ability to voice their thoughts in words.
Virginia Commonwealth University biomedical engineering professor Dean Krusienski, Ph.D., is collaborating with computer science researchers at University of Bremen in Germany to find a way to overcome those barriers.
New technology, a speech neuroprosthetic that helps users produce intelligible speech from brain signals in real time, could someday restore the power of communication to people with speech disorders.
Krusienski’s project to develop a speech decoding and synthesis system using brain signals is being funded by a $604,757, three-year grant from the National Science Foundation’s Division of Information and Intelligent Systems, part of the Directorate for Computer and Information Science and Engineering (CISE). A companion project is being funded by Germany’s Federal Ministry of Education and Research.
The efforts build on Krusienski’s previous work in developing approaches to directly synthesize speech from brain activity. “We are able to produce intelligible or semi-intelligible speech directly from brain activity,” he said. “When they’re talking, we can predict what the person has said or is saying from the brain activity.”
Their goal is to help someone who is disabled and cannot speak, Krusienski said. “Ultimately, we need to take the next step towards being able to produce it when they are attempting speech or imagining speech.”
In collaboration with neurosurgeons, Krusienski has been able to collect data from electrodes implanted on the brain surface and deeper within the brains of patients with severe epilepsy.
Tanja Schultz, Ph.D., a professor in the Department of Informatics/Mathematics at the University of Bremen in Germany, specializes in speech processing and automatic speech recognition.
“Bringing our expertise together is pretty unique, because now we can take these measurements of recorded speech and look at them in different ways,” Krusienski said of Schultz. “She could take her representations of speech and relate them to the brain activity that produces or perceives them.”
Krusienski is focusing on the processes by which brain signals driving the vocal tract and speech articulators become a natural acoustic waveform. “How can we computationally model how these signals ultimately end up relating to an acoustic speech waveform?” he said.
One key to making this happen is to create a feedback loop without a perceptible delay for the user. If a user imagining or attempting speech can hear what the decoding and synthesis system is producing as it is happening, the user may be able to adapt to make it more accurate. “We need to speed this process up so we can give them real-time feedback,” he said. “The real-time feedback also enables us to better investigate the imagined or attempted speech.”