Research Highlights

Neuroscience: Decoding brain activity to speak your mind

Nature

25 April 2019

A decoder that can transform brain activity into speech is presented in a study published in this week’s Nature.

Many patients with neurological conditions that result in the loss of speech rely on communication devices that use brain-computer interfaces or nonverbal movements of the head or eyes to control a cursor to select letters to spell out words. However, this process is much slower than the normal rate of human speech.

Edward Chang and colleagues developed a method to synthesize a person’s speech using the brain signals that are related to the movements of their jaw, larynx, lips and tongue. First, they recorded cortical activity from the brains of five participants as they spoke several hundred sentences aloud. Using these recordings, the authors designed a system that is capable of decoding the brain signals responsible for individual movements of the vocal tract. They were then able to synthesize speech from the decoded movements. In trials of 101 sentences, listeners could readily identify and transcribe the synthesized speech.
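The method described above amounts to a two-stage pipeline: one model decodes vocal-tract movements from neural recordings, and a second maps those movements to acoustic features that a synthesizer can render as audio. Below is a minimal sketch of such a pipeline, assuming PyTorch; the layer choices and every dimension (electrode count, articulatory and acoustic feature sizes) are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative two-stage speech decoder: neural activity -> articulatory
# movements -> acoustic features. Dimensions and layers are hypothetical.
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: decode vocal-tract (articulatory) movements from brain signals."""
    def __init__(self, n_electrodes=256, n_articulatory=32, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, neural):           # neural: (batch, time, electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)               # (batch, time, movement features)

class ArticulationToAcoustics(nn.Module):
    """Stage 2: map decoded movements to acoustic features for synthesis."""
    def __init__(self, n_articulatory=32, n_acoustic=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulatory, hidden, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, movements):
        h, _ = self.rnn(movements)
        return self.out(h)               # features a vocoder could render

# Usage: chain the two stages; an audio synthesizer (not shown) would
# turn the acoustic features into a speech waveform.
stage1, stage2 = BrainToArticulation(), ArticulationToAcoustics()
neural = torch.randn(1, 500, 256)        # 500 time steps of simulated activity
acoustic = stage2(stage1(neural))
print(acoustic.shape)                    # torch.Size([1, 500, 80])
```

Splitting the problem this way mirrors the study's key idea: the intermediate articulatory representation ties the decoder to the physical movements of speaking rather than to sound directly.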

In separate tests, one participant was asked to speak sentences and then mime them, making the same articulatory movements without producing sound. Although synthesis performance for mimed speech was inferior to that for audible speech, the authors conclude that it is possible to decode features of speech that are never audibly spoken.

In an accompanying News & Views article, Chethan Pandarinath and Yahia Ali comment that although the authors have achieved a compelling proof of concept, many challenges remain before the approach can become a clinically viable speech brain-computer interface.

doi: 10.1038/s41586-019-1119-1
