Highly structured encoding of vowel articulation in the human brain is reported in Nature Communications this week. This structure allows high-fidelity decoding of speech segments and may have implications for restoring features of speech lost as a result of paralysis.
Areas of the brain involved in speech production have long been identified; however, how basic speech features are encoded in the firing patterns of neuronal populations remains unknown.
Shy Shoham and colleagues studied the neural encoding of vowel articulation in the human cerebral cortex at the single-unit and neuronal-population level. They recorded and analysed the activity of 716 temporal and frontal lobe units in the brains of 11 patients who already had electrodes implanted for epilepsy monitoring. They found that two areas, the superior temporal gyrus (STG) and a region overlying Brodmann areas 11 and 12 (rAC/MOF), which are commonly associated with speech, had the highest proportion of speech-related and vowel-tuned neurons. They noted, however, that the neural tuning in these two areas was very different: broadly tuned neurons that responded to all vowels were found only in the STG, whereas sharply tuned neurons that responded exclusively to one or two vowels were found mainly in rAC/MOF.
Whether these structured multi-level encoding schemes also exist in other speech areas, such as Broca's area and the speech motor cortex, and how they contribute to the coordinated production of speech, remains to be investigated.