A decoder that can transform brain activity into speech is presented in a study published in this week’s Nature.
Many patients with neurological conditions that result in the loss of speech rely on communication devices that use brain-computer interfaces or nonverbal movements of the head or eyes to control a cursor to select letters to spell out words. However, this process is much slower than the normal rate of human speech.
Edward Chang and colleagues developed a method to synthesize a person’s speech using the brain signals that are related to the movements of their jaw, larynx, lips and tongue. First, they recorded cortical activity from the brains of five participants as they spoke several hundred sentences aloud. Using these recordings, the authors designed a system that is capable of decoding the brain signals responsible for individual movements of the vocal tract. They were then able to synthesize speech from the decoded movements. In trials of 101 sentences, listeners could readily identify and transcribe the synthesized speech.
In separate tests, one participant was asked to speak sentences and then to mime them, making the same articulatory movements without producing sound. Although synthesis performance for mimed speech was inferior to that for audible speech, the authors conclude that it is possible to decode features of speech that are never audibly spoken.
In an accompanying News & Views article, Chethan Pandarinath and Yahia Ali comment that although the authors have achieved a compelling proof of concept, many challenges remain before this approach can become a clinically viable speech brain-computer interface.