This was a little distressing to me, because I am terrible at this particular skill. When I'm in a bar or other loud, chaotic environment, I can often pick out a few words, but understanding entire sentences is tricky. I also run out of steam really quickly -- I can focus for a while, but suddenly the whole thing descends into a wall of noise.
The evidence, though, seems strong.  "The relationship between cognitive ability and speech-perception performance transcended diagnostic categories," said Bonnie Lau, lead author on the paper.  "That finding was consistent across all three groups studied [an autistic group, a group who had fetal alcohol syndrome, and a neurotypical control group]."
So.  Yeah.  Not a favorable result for yours truly.  I mean, I get why it makes sense; focusing on one conversation when there are others going on is a complex task.  "You have to segregate the streams of speech," Lau explained.  "You have to figure out and selectively attend to the person that you're interested in, and part of that is suppressing the competing noise characteristics.  Then you have to comprehend from a linguistic standpoint, coding each phoneme, discerning syllables and words.  There are semantic and social skills, too -- we're smiling, we're nodding.  All these factors increase the cognitive load of communicating when it is noisy."
While I'm not seriously concerned about the implications regarding my own intelligence, it does make me wonder about sensory synthesis and interpretation in general.  A related phenomenon I've noticed is that if there is a song playing while there's noise going on -- in a restaurant, or on earphones at the gym -- I often have no idea what the song is, can't understand a single word or pick up the beat or figure out the music, until something clues me in to what the song is.  Then, all of a sudden, I find I'm able to hear it clearly.
A while back, some neuroscientists at the University of California, Berkeley elucidated what's happening in the brain that causes this oddity in auditory perception, and it provides an interesting contrast to this week's study. A paper in Nature Communications in 2016, by Christopher R. Holdgraf, Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight, and Frédéric E. Theunissen, considered how the perception of garbled speech changes when subjects are told what's being said -- and found through a technique called spectrotemporal receptive field mapping that the brain is able to retune itself in less than a second.
The authors write:
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.

What astonishes me about this is how quickly the brain is able to accomplish this -- although that is certainly matched by my own experience of suddenly being able to hear lyrics of a song once I recognize what's playing. As James Anderson put it, writing about the research in ReliaWire, "The findings... confirm hypotheses that neurons in the auditory cortex that pick out aspects of sound associated with language, the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes, continually tune themselves to pull meaning out of a noisy environment."
A related phenomenon is visual priming, which occurs when people are presented with a seemingly meaningless pattern of dots and blotches, such as the following:
Once you're told that the image is a cow, it's easy enough to find -- and after that, impossible to unsee.
"Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise," said study co-author Frédéric Theunissen. "It’s not like I am generating those words in my head. I really have the feeling of hearing the words in the noise with this pop-out phenomenon. It is such a mystery."
Apparently, once the set of possibilities of what you're hearing (or seeing) is narrowed, your brain is much better at extracting meaning from noise. "Your brain tries to get around the problem of too much information by making assumptions about the world," co-author Christopher Holdgraf said. "It says, ‘I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don’t have to do a lot of processing.’ By doing that, it is faster and expends less energy."
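Holdgraf's point about restricting possibilities lends itself to a toy illustration -- this is just my own sketch of the general idea, not anything from the paper. Suppose you receive a degraded "stimulus" in which most of the characters of a word have been lost to noise. Matched against a wide-open vocabulary, the surviving evidence is ambiguous; matched against a context-narrowed candidate set, the same evidence picks out a unique answer:

```python
def similarity(degraded, candidate):
    # Count positions where the surviving (non-masked) characters match.
    return sum(1 for d, c in zip(degraded, candidate) if d != "_" and d == c)

def best_matches(degraded, vocabulary):
    # Return every same-length candidate tied for the highest score.
    scores = {w: similarity(degraded, w) for w in vocabulary
              if len(w) == len(degraded)}
    top = max(scores.values())
    return sorted(w for w, s in scores.items() if s == top)

# A "degraded stimulus": only the middle character survives the noise.
degraded = "_a_"

# With a wide-open hypothesis space, the evidence is ambiguous...
wide = ["cat", "bat", "hat", "dog", "sun"]
print(best_matches(degraded, wide))    # three words tie

# ...but once context narrows the candidates, the same evidence suffices.
narrow = ["cat", "dog", "sun"]
print(best_matches(degraded, narrow))  # a unique winner
```

The narrowing does exactly what Holdgraf describes: less evidence is needed, and less processing, once the brain has committed to a smaller set of things the signal could be.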
It makes me wonder, though, whether the University of Washington finding might point to an association between poor auditory discernment and attention-related disorders like ADHD.  My own experience is that I can focus on what's being said in a noisy environment -- it's just exhausting.  Perhaps, as with the song phenomenon and things like visual priming, chaotic brains like mine simply can't throw away extraneous information fast enough to retune.  Eventually, the brain just gives up, and the whole world turns into white noise.
In any case, there's another fascinating, and mind-boggling, piece of how our brains make sense of the world. It's wonderful that evolution could shape such an amazingly adaptive device, although the survival advantage is obvious. The faster you are at pulling a signal out of the noise, the more likely you are to make the right decisions about what it is that you're perceiving -- whether it's you talking to a friend in a crowded bar or a proto-hominid on the African savanna trying to figure out if that odd shape in the grass is a predator lying in wait.
Even if it means that I personally would probably have been a lion's afternoon snack.