A new, simpler reservoir computing structure for speech recognition is demonstrated in this week's Nature Communications. This finding may lead to more cost-effective information processing through the use of high-speed components that would be too expensive to deploy in more complex structures.
Reservoir computing is a machine-learning paradigm that mimics the brain’s neuronal networks to tackle difficult computing tasks such as speech recognition.
Ingo Fischer and colleagues show that the conventional three-layer structure of this paradigm can be simplified to a single nonlinear node with delayed feedback. Their results demonstrate that this new structure performs well in a speech recognition test case. The authors suggest that applying this simpler structure in complex systems, such as electronic or photonic hardware, could be more resource-efficient.
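The idea of replacing a full reservoir network with one nonlinear node and a delay line can be sketched in a few lines: virtual nodes are created by time-multiplexing each input sample across the delay period with a fixed random mask, and a simple linear readout is trained on the resulting states. This is a minimal illustration, not the authors' implementation; the parameter values, function names, and the `tanh` nonlinearity are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for this sketch (not taken from the paper)
N = 50                        # number of virtual nodes along the delay line
alpha = 0.8                   # feedback strength of the delayed loop
beta = 0.5                    # input scaling
mask = rng.uniform(-1, 1, N)  # fixed random input mask for time-multiplexing

def run_reservoir(u):
    """Drive a single nonlinear node with delayed feedback.

    Each input sample u[t] is spread over N virtual nodes; the state
    at virtual node i depends on that node's state one delay period
    earlier, so one physical node emulates a whole reservoir.
    """
    T = len(u)
    states = np.zeros((T, N))
    prev = np.zeros(N)  # node states one delay period ago
    for t in range(T):
        for i in range(N):
            # single nonlinear node: saturating response to delayed
            # feedback plus the masked input
            prev[i] = np.tanh(alpha * prev[i] + beta * mask[i] * u[t])
        states[t] = prev
    return states

def train_readout(states, targets, ridge=1e-6):
    """Standard reservoir-computing readout: ridge regression."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                           S.T @ targets)
```

As in conventional reservoir computing, only the linear readout is trained; the delay loop itself stays fixed, which is what makes simple high-speed hardware implementations attractive.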