A new, simpler reservoir computing architecture for speech recognition is demonstrated in this week’s Nature Communications. The finding may lead to more cost-effective information processing through the use of high-speed components that would be too expensive for more complex structures.
Reservoir computing is a machine-learning paradigm that mimics the brain’s neuronal networks to tackle difficult computing tasks such as speech recognition.
Ingo Fischer and colleagues show that the conventional three-layer structure of this paradigm can be simplified to a single nonlinear node with delayed feedback. Their results demonstrate that this new structure performs well in a speech recognition test case. The authors suggest that applying this simpler structure in complex systems, such as electronic or photonic networks, could make them more resource-efficient.
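The idea can be illustrated with a toy simulation. The sketch below is not the authors' code; it assumes an idealized delay-based reservoir in which a single tanh nonlinearity drives a delay line sampled at N "virtual nodes", each receiving the input through a fixed random mask, with the delayed state fed back one cycle later. A linear readout trained by ridge regression then solves a simple memory task (recovering the input from three steps earlier); all sizes and gain parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50        # virtual nodes along the delay line (assumed size)
T = 600       # time steps of the input stream
warmup = 50   # initial states discarded before training
gamma = 0.8   # delayed-feedback strength (assumed; < 1 keeps dynamics stable)
eta = 0.5     # input scaling (assumed)

u = rng.uniform(-1, 1, T)       # random scalar input stream
mask = rng.uniform(-1, 1, N)    # fixed random input mask over virtual nodes

# Delay-based reservoir: one nonlinearity, delayed feedback. Each virtual
# node is updated from the delayed state of its neighbour along the delay
# line plus the masked input, all passed through the same tanh node.
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = np.tanh(gamma * np.roll(X[t - 1], 1) + eta * mask * u[t])

# Toy memory task: predict the input from d steps ago.
d = 3
Y = np.roll(u, d)

# Linear readout trained by ridge regression on post-warmup states.
A, y = X[warmup:], Y[warmup:]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)
pred = A @ w

# Correlation between prediction and target measures recovered memory.
corr = np.corrcoef(pred, y)[0, 1]
print(f"lag-{d} recall correlation: {corr:.2f}")
```

Only the readout weights are trained; the reservoir itself stays fixed, which is what makes the scheme attractive for hardware (e.g. photonic) implementations where the nonlinear node is a physical device.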