A new, simpler reservoir computing structure for speech recognition is demonstrated in this week’s Nature Communications. This finding may lead to more cost-effective information processing through the use of high-speed components that would be too expensive for more complex structures.
Reservoir computing is a machine-learning paradigm that mimics the brain’s neuronal networks to tackle difficult computing tasks such as speech recognition.
Ingo Fischer and colleagues show that the conventional three-layer structure of this paradigm can be simplified to a single nonlinear node with delayed feedback. Their results demonstrate that this new structure performs well in a speech recognition test case. The authors suggest that applying this simpler structure in complex networks, such as electronic or photonic systems, could make them more resource-efficient.
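To illustrate the idea, the sketch below shows a minimal delay-based reservoir of the kind described: a single tanh nonlinearity with delayed feedback, time-multiplexed over "virtual nodes" by a random input mask, with only a linear readout trained. All parameter values, the toy one-step-ahead prediction task, and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed for this sketch, not from the paper)
N_VIRTUAL = 50   # virtual nodes multiplexed along one delay interval
ETA = 0.5        # input scaling
GAMMA = 0.8      # delayed-feedback strength

def reservoir_states(u, mask):
    """Drive one nonlinear node with delayed feedback.

    Each input sample is spread over N_VIRTUAL time slots by a random
    mask; the node's response in each slot acts as one virtual neuron.
    Returns a (len(u), N_VIRTUAL) state matrix.
    """
    x = np.zeros(N_VIRTUAL)              # the delay line's current contents
    states = np.empty((len(u), N_VIRTUAL))
    for t, ut in enumerate(u):
        for i in range(N_VIRTUAL):
            # single node: nonlinearity of (delayed feedback + masked input)
            x[i] = np.tanh(GAMMA * x[i] + ETA * mask[i] * ut)
        states[t] = x
    return states

# Toy stand-in task: one-step-ahead prediction of a noisy sine wave
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T)) + 0.05 * rng.standard_normal(T)
y = np.roll(u, -1)                       # target: next input sample

mask = rng.choice([-1.0, 1.0], size=N_VIRTUAL)
S = reservoir_states(u, mask)

# Only the linear readout is trained, here by ridge regression
lam = 1e-4
W = np.linalg.solve(S.T @ S + lam * np.eye(N_VIRTUAL), S.T @ y)
pred = S @ W
mse = np.mean((pred[:-1] - y[:-1]) ** 2)
print(f"training MSE: {mse:.4f}")
```

The key point of the architecture is visible here: the reservoir's richness comes from a single node's transient response spread over time, rather than from many physically distinct nodes, which is what makes fast but expensive hardware components affordable to use.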