Brain-controlled hearing aid cuts through noise

New brain-decoding tech could help hearing devices pick out one voice in noisy rooms, improving clarity and reducing listening effort.
Turning down the wrong voice in a crowded room can be harder than it sounds. Researchers now say they have built brain-decoding technology that could help people using hearing assistance devices “cut through the noise” by amplifying the speaker their brain is actively focusing on.
In a study published in *Nature Neuroscience*, the team describes a “brain-controlled hearing aid” approach: instead of relying only on sound processing to separate voices, the system monitors brain activity and uses it to decide which conversation should be made louder. The goal is to address a long-standing challenge for hearing technology, where background chatter often overwhelms speech.
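For readers who want the concept in concrete terms, the pipeline can be sketched as a simple processing loop: split the incoming mixture into candidate voices, ask a neural decoder which voice the listener’s brain is tracking, and remix the audio with that voice boosted. The sketch below is a schematic of the idea as described in the article, not the authors’ implementation; the `separate` and `decode` components, the gain values, and all names are placeholders.

```python
from typing import Callable, List
import numpy as np

def brain_controlled_hearing_step(
    mixture: np.ndarray,          # one frame of mixed audio from the microphones
    neural_activity: np.ndarray,  # simultaneous auditory-cortex recording
    separate: Callable[[np.ndarray], List[np.ndarray]],     # placeholder voice separator
    decode: Callable[[np.ndarray, List[np.ndarray]], int],  # placeholder attention decoder
    boost: float = 4.0,           # illustrative gain for the attended voice
    cut: float = 0.25,            # illustrative gain for everything else
) -> np.ndarray:
    """One step of the concept: separate the voices, decode which one the
    brain is attending to, then remix so that voice is louder than the rest."""
    voices = separate(mixture)                  # e.g. a speech-separation model
    attended = decode(neural_activity, voices)  # index of the attended voice
    return sum(
        (boost if i == attended else cut) * v
        for i, v in enumerate(voices)
    )
```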
The difficulty is commonly described as the “cocktail party problem.” In everyday life, many listeners can concentrate on a single speaker, and their brains automatically amplify that voice while suppressing competing sounds. For people who wear hearing aids or related devices, that same natural filtering process is much less effective, leaving them to struggle with mixed speech rather than clear conversations.
The new research is positioned as a solution that decodes a listener’s neural activity to select the target voice their hearing system will amplify. Lead author Nima Mesgarani, an associate professor at Columbia University who directs the school’s Neural Acoustic Processing Lab, said the concept could improve future hearing technologies, including hearing aids, assistive listening devices, and cochlear implants.
The work builds on an earlier discovery made in 2012 by Mesgarani and Eddie Chang, a neurosurgeon at the University of California, San Francisco. That finding helped explain how the brains of people with typical hearing solve the cocktail party problem: the auditory cortex shows distinct patterns of activity that reflect the specific sound source a person is attending to. Mesgarani described these patterns as a “signature,” where brain activity tracked the intended source while ignoring other streams of speech.
In the current study, the team set out to determine whether those neural signatures could be used to improve real-world listening systems. The effort was led by Vishal Choudhari, who was a graduate student in Mesgarani’s lab at the time and is now a research scientist at a startup focused on next-generation hearing technologies.
Testing was carried out with four people receiving epilepsy treatment at a hospital. Importantly, the participants had typical hearing, and electrodes already placed in their brains for clinical purposes allowed researchers to monitor signals from the auditory cortex. That setup made it possible to use brain data in parallel with the simulated listening task.
To simulate the cocktail party scenario, researchers used two loudspeakers positioned in front of participants, each playing a different conversation. When both conversations were presented at the same volume, participants struggled to comprehend either one. The researchers then switched on the brain-controlled system that automatically adjusted volume according to the pattern of brain activity.
According to Mesgarani, when the listener’s brain indicated a preference for “conversation one,” the system increased that conversation’s loudness while reducing everything else. The results, as reported by the team, showed that the system identified the intended conversation correctly up to 90% of the time. When the system was activated, participants’ comprehension improved and listening effort decreased, suggesting that neural-guided amplification can support clearer speech processing.
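The article does not say how that “preference” is read out of the auditory cortex. A widely used heuristic in the auditory-attention-decoding literature is stimulus reconstruction: reconstruct a speech envelope from the neural recording and correlate it with the envelope of each candidate conversation, then pick the best match. The sketch below is that generic heuristic under stated assumptions (16 kHz audio, an already-reconstructed neural envelope), not necessarily the paper’s decoder, and the up-to-90% figure above applies to the authors’ system, not to this code.

```python
import numpy as np

def envelope(audio: np.ndarray, frame: int = 160) -> np.ndarray:
    """Crude amplitude envelope: frame-wise RMS. Assumes 16 kHz audio and
    10 ms frames; the study's actual audio front end is not described here."""
    n = len(audio) // frame * frame
    return np.sqrt((audio[:n].reshape(-1, frame) ** 2).mean(axis=1))

def decode_attention(neural_envelope: np.ndarray, voices: list) -> int:
    """Return the index of the voice whose envelope correlates best with
    the envelope reconstructed from auditory-cortex activity."""
    scores = []
    for v in voices:
        env = envelope(v)
        m = min(len(neural_envelope), len(env))
        scores.append(np.corrcoef(neural_envelope[:m], env[:m])[0, 1])
    return int(np.argmax(scores))
```

A decoder like this would slot into the earlier loop as its `decode` argument, with the neural envelope typically produced by a regression model trained on the listener’s own recordings.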
Whether the method will work equally well for people with hearing loss remains an open question, according to Josh McDermott of MIT, who was not involved in the study. His concern is that the neural signals used for decoding could be weaker in individuals with hearing impairment, potentially reducing accuracy. Still, he said the approach is worth exploring because even the most advanced hearing aids do not fully solve the competing-voices problem.
McDermott noted that modern hearing devices can use algorithms to reduce background noise, but they generally cannot determine which specific speaker a person wants to follow in a crowded environment. A brain-controlled hearing aid could offer a new way to target amplification more precisely. He also pointed to another possible direction: using artificial intelligence to learn from a person’s behavior and predict which voice is most likely the target.
Demand for solutions to the cocktail party problem is rising. The report highlights that more than half of people aged 75 and older live with disabling hearing loss. McDermott emphasized the importance of basic research as well, arguing that hearing loss is something many people experience as they age, making progress in both understanding and technology especially urgent.
As this work moves from a small clinical test to broader trials, the key challenge will be translating brain-decoding accuracy into everyday performance across different kinds of hearing impairment. The promise is clear: if a device can reliably interpret which conversation a user’s brain is tracking, hearing technology could shift from generic sound cleanup to individualized listening, potentially reducing effort for the people who need it most.