Jun 23, 2021


The human brain is remarkably versatile, but it is nevertheless challenged when we hear background noise and multiple conversations – as at a cocktail party – and need to focus only on our conversation partner. How the brain deals with the abundance of sounds in our environment and sets priorities among them has been debated by cognitive neuroscientists for decades.


Often referred to as the “cocktail party problem,” the debate centers on whether we can absorb information from several speakers simultaneously or whether we are limited to understanding speech from only one speaker at a time.


One reason this question is hard to answer is that attention is an internal state not directly accessible to researchers. By measuring the brain activity of listeners as they try to focus attention on a single speaker and ignore a task-irrelevant one, we can gain insight into the internal operations of attention and how these competing speech stimuli are represented and processed by the brain.


In a study recently published in the journal eLife under the title “Linguistic processing of task-irrelevant speech at a cocktail party,” researchers at Bar-Ilan University (BIU) in Ramat Gan (near Tel Aviv) set out to explore whether ignored words and phrases are identified linguistically or merely represented in the brain as “acoustic noise,” with no further language processing applied.


“Answering this question helps us better understand the capacity and limitations of the human speech-processing system,” said Dr. Elana Zion Golumbic of BIU’s Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the study. “It also gives insight into how attention helps us deal with the multitude of stimuli in our environments – helping us focus primarily on the task at hand, while also monitoring what is happening around us.”


Zion Golumbic and her team measured the brain activity of human listeners as they heard two speech stimuli, one presented to each ear. Participants were asked to focus their attention on the content of one speaker and to ignore the other. Neural activity was recorded using magnetoencephalography (MEG), a functional neuroimaging technique that maps brain activity by using highly sensitive magnetometers to record the magnetic fields produced by the brain’s naturally occurring electrical currents.


The researchers found evidence that so-called unattended speech – the background conversations and noise – is processed at both acoustic and linguistic levels, with responses observed in auditory and language-related regions of the brain.


In addition, they found that the brain response to the attended speaker in language-related brain regions was stronger when that speaker “competed” with other speech rather than with non-speech sounds. This suggests that the two speech inputs compete for the same processing resources, which may underlie the increased listening effort required to stay focused when many people talk at once.


The study contributes to efforts to understand how the brain deals with the abundance of auditory input in our environment. It has theoretical implications for understanding the nature of attentional selection in the brain, and it carries substantial practical potential for guiding the design of smart assistive devices that help individuals focus their attention or navigate noisy environments. The methods the researchers developed also provide a useful new approach for testing the basis of individual differences in the ability to focus attention in noisy settings, the team said.