Press release

Neuroscience: Turning up speech volume using brain decoding (Nature Neuroscience)

12 May 2026

A system that uses brain signals in real time to identify the voice a listener is focusing on and selectively amplify it among a group of different speakers is reported in Nature Neuroscience. The brain–computer interface could help to improve future hearing-aid functionality in noisy environments.

Understanding one person’s speech in a busy social setting with different speakers and background noise is difficult for many people, but especially for those who are hard of hearing. Conventional hearing aids typically amplify all sounds equally and do not focus on the speaker that the person wants to hear. A brain–computer interface approach called auditory attention decoding has sought to address this by using the listener’s neural signals to infer which speaker is being attended to. However, whether such decoding could improve hearing and understanding in real time has remained uncertain.

Nima Mesgarani and colleagues developed a closed-loop brain–computer interface that links neural activity directly to selective sound amplification. The authors measured high-resolution brain activity in auditory regions using intracranial electrodes implanted in four participants undergoing clinical monitoring for epilepsy, while the participants listened to two competing conversations. This brain activity was then used to reconstruct the temporal pattern of the speech the listener was trying to attend to. A decoding model compared this reconstructed pattern with the competing speech streams and dynamically adjusted the sound levels. Across multiple experiments, the system accurately decoded auditory attention between 72.0% and 90.3% of the time and adjusted the relative loudness of the desired speech by several decibels. When the system was active, participants showed improved speech intelligibility and reduced listening effort, and reported a preference for the brain-controlled audio.
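The core idea described above can be illustrated in code. The sketch below is not the authors' implementation; it is a minimal, hypothetical example of auditory attention decoding, assuming the neural reconstruction and the candidate speech envelopes are already available as arrays: the reconstructed pattern is correlated with each speaker's envelope, and the best-matching speaker is boosted while the others are attenuated.

```python
import numpy as np

def decode_attention(reconstructed, envelopes):
    """Return the index of the speech envelope that best matches the
    envelope reconstructed from neural activity (Pearson correlation)."""
    corrs = [np.corrcoef(reconstructed, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs))

def apply_gain(signals, attended_idx, boost_db=6.0):
    """Amplify the attended audio stream and attenuate the others by
    an illustrative, symmetric gain in decibels."""
    out = []
    for i, s in enumerate(signals):
        gain_db = boost_db if i == attended_idx else -boost_db
        out.append(s * 10 ** (gain_db / 20))  # convert dB to linear gain
    return out

# Illustrative use: two synthetic speech envelopes; the "neural"
# reconstruction is a noisy copy of speaker A's envelope.
rng = np.random.default_rng(0)
env_a = rng.standard_normal(1000)
env_b = rng.standard_normal(1000)
reconstructed = env_a + 0.1 * rng.standard_normal(1000)

attended = decode_attention(reconstructed, [env_a, env_b])
mixed = apply_gain([env_a, env_b], attended)
```

In a closed-loop system this decode-and-reweight step would run continuously on short windows of neural data, so the relative loudness tracks shifts in the listener's attention.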

The authors note that the research relied on invasive intracranial recordings from a small number of participants, which are not practical for widespread use. Future research should explore scalable recording methods and assess performance in everyday listening scenarios.

Choudhari, V., Nentwich, M., Johnson, S. et al. Real-time brain-controlled selective hearing enhances speech perception in multi-talker environments. Nat Neurosci (2026). https://doi.org/10.1038/s41593-026-02281-5

 © 2026 Springer Nature Limited. All Rights Reserved.  
