Scientists say that understanding how the cocktail party effect works could help people who have trouble deciphering sounds in a noisy environment. Guests make it look easy at a Dolce & Gabbana Lounge party in London in 2010.
Scientists are beginning to understand how people tune in to a single voice in a crowded, noisy room.
This ability, known as the "cocktail party effect," appears to rely on areas of the brain that completely filter out unwanted sounds, researchers report in the journal Neuron. So when a person decides to focus on a particular speaker, other speakers "have no representation in those [brain] areas," says Elana Zion Golumbic of Columbia University.
The ability to extract sense from auditory chaos has puzzled scientists since the 1950s, Golumbic says. "It's something we do all the time, not only in cocktail parties," she says. "You're on the street, you're in a restaurant, you're in your office. There are a lot of background sounds all the time, and you constantly need to filter them out and focus on the one thing that's important to you."
But until a few years ago, how the brain did this was a mystery. That's changing, Golumbic says, thanks to new technology that allows scientists to monitor many different areas of the brain while a person listens to multiple voices.
The technology involves a grid of electrodes placed on the surface of the brain. Experiments have relied on volunteers who already had these electrodes in place: people in the hospital awaiting surgery for severe epilepsy.
"We bring in a cart with a computer and a screen and speakers," Golumbic says. "And we show them movies."
Try The Experiment Yourself
This video shows clips of two people telling stories at the same time. Try focusing on one person, then the other.
One movie, for example, shows a woman telling a brief story about a parrot. Another shows a man telling a story about how he never liked to clean up his room.
To simulate a cocktail party, though, participants watched a third movie in which the man and woman are both on screen, telling those stories simultaneously. The researchers asked them to focus on just one of the speakers while monitoring what was going on in their brains.
And the brain monitoring revealed something remarkable. When a person's brain is in cocktail party mode, some areas, like those involved in hearing, continue to respond to both voices. But other parts of the brain, like those devoted to language, appear to respond only to the selected speaker.
Afterward, volunteers who focused on the man had no trouble remembering that he didn't like to clean his room. But they didn't recall anything about the woman's parrot.
The study also found that the brain areas responding only to the selected voice were constantly fine-tuning their reception, says Charles Schroeder, a neuroscientist at Columbia University and New York state's Nathan Kline Institute. "As the sentence unfolds, the brain's tracking of the signal becomes better and better and better," he says.
This suggests that the brain is separating one voice from the rest by identifying its unique characteristics, Schroeder says. It's also likely that the brain is using information from the first words in a sentence to predict which words are likely to come next.
A better understanding of the cocktail party effect could eventually help people who have trouble deciphering a single voice in a noisy environment, says Edward Chang, an assistant professor of neurological surgery and physiology at the University of California, San Francisco.
Deciphering a single voice becomes harder for many people as they get older, Chang says. It's also difficult for people with attention deficit hyperactivity disorder, or ADHD. Chang says he saw this up close when a person with the disorder volunteered for an experiment that involved trying to focus on just one of two speakers.
"This person had significant problems with the ability to select the correct speaker," Chang says.
Understanding precisely how the brain solves the cocktail party problem could also allow machines to do a better job deciphering human speech, Chang says. That could mean better cellphones and less frustrating conversations with the computers that often answer phone calls to customer service hotlines.