Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, December 24, 2016

Signal out of noise

I think I share with a lot of people a difficulty deciphering what someone is saying during a conversation in a noisy room.  I can often pick out a few words, but understanding entire sentences is tricky.  A related phenomenon I've noticed is that if a song is playing while there's noise going on -- in a bar, or on earphones at the gym -- I often have no idea what the song is, and can't understand a single word, pick up the beat, or make out the melody, until something clues me in.  Then, all of a sudden, I find I'm able to hear it clearly.

Some neuroscientists at the University of California, Berkeley have just figured out what's happening in the brain that causes this oddity of auditory perception.  In a paper in Nature Communications that came out earlier this week, authors Christopher R. Holdgraf, Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight, and Frédéric E. Theunissen studied how the perception of garbled speech changes when subjects are told what's being said -- and found, through a technique called spectrotemporal receptive field mapping, that the brain is able to retune itself in less than a second.

The authors write:
Experience shapes our perception of the world on a moment-to-moment basis.  This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood.  Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models.  We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex.  Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range.  This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.
What astonishes me is how quickly the brain is able to accomplish this -- although that speed certainly matches my own experience of suddenly being able to hear the lyrics of a song once I recognize what's playing.  As James Anderson put it, writing about the research in ReliaWire, "The findings... confirm hypotheses that neurons in the auditory cortex that pick out aspects of sound associated with language, the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes, continually tune themselves to pull meaning out of a noisy environment."
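If you're wondering what "spectrotemporal receptive field mapping" amounts to in practice: an STRF is essentially a linear filter describing which sound frequencies, at which recent time lags, drive a neural response.  Here is a minimal toy sketch of how such a filter can be estimated, using ridge regression on simulated data.  Every name and number in it is invented for illustration; the actual study fit STRFs to direct cortical recordings from human subjects, not to a simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: a stimulus "spectrogram" (frequency x time) and a simulated
    # neural response that depends linearly on recent spectrotemporal history.
    n_freq, n_time, n_lags = 16, 2000, 10
    spectrogram = rng.normal(size=(n_freq, n_time))
    true_strf = rng.normal(size=(n_freq, n_lags))  # the filter we hope to recover

    # Design matrix: each row holds the last n_lags spectrogram columns,
    # flattened, so the response at time t "sees" the recent stimulus history.
    X = np.zeros((n_time - n_lags, n_freq * n_lags))
    for t in range(n_lags, n_time):
        X[t - n_lags] = spectrogram[:, t - n_lags:t].ravel()

    response = X @ true_strf.ravel() + rng.normal(scale=0.5, size=n_time - n_lags)

    # Ridge-regularized linear regression, a standard way to estimate an STRF.
    lam = 1.0
    strf_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ response)
    strf_hat = strf_hat.reshape(n_freq, n_lags)

    # Correlation between the true and estimated filters (should be near 1).
    print(np.corrcoef(true_strf.ravel(), strf_hat.ravel())[0, 1])

The recovered filter matches the "true" one almost perfectly here because the toy response really is linear; with actual neural data the fit is far noisier, which is why the regularization term (lam) matters.  What the paper reports is that these filters shift measurably, in under a second, once a listener knows what to listen for.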

A related phenomenon is visual priming, which occurs when people are presented with a seemingly meaningless pattern of dots and blotches, such as the following:

[Image: a high-contrast pattern of black and white blotches that at first reads as random noise]
Once you're told that the image is a cow, it's easy enough to find -- and after that, impossible to unsee.

"Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise," said study co-author Frédéric Theunissen.  "It’s not like I am generating those words in my head.  I really have the feeling of hearing the words in the noise with this pop-out phenomenon.  It is such a mystery."

Apparently, once the set of possibilities of what you're hearing (or seeing) is narrowed, your brain is much better at extracting meaning from noise.  "Your brain tries to get around the problem of too much information by making assumptions about the world," co-author Christopher Holdgraf said.  "It says, ‘I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don’t have to do a lot of processing.’ By doing that, it is faster and expends less energy."
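Holdgraf's description is, in effect, Bayesian: shrink the set of hypotheses, and the same noisy evidence becomes far more decisive.  Here's a toy sketch of that idea in Python.  The candidate words, their "acoustic features," and the noise level are all invented for illustration; none of this comes from the paper itself.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical setup: the brain must decide which of several candidate
    # words produced a noisy acoustic observation.  Each word is a point in
    # "feature space"; the observation is the true word plus heavy noise.
    words = {"cat": np.array([1.0, 0.0]),
             "cap": np.array([0.9, 0.3]),
             "dog": np.array([-1.0, 0.5]),
             "fog": np.array([-0.8, -0.6])}

    observation = words["cat"] + rng.normal(scale=1.0, size=2)  # degraded signal

    def posterior(candidates):
        """Normalized Gaussian likelihood of the observation per candidate."""
        scores = {w: np.exp(-np.sum((observation - f) ** 2) / 2)
                  for w, f in candidates.items()}
        total = sum(scores.values())
        return {w: round(s / total, 3) for w, s in scores.items()}

    # Wide-open prior: every word in the lexicon is in play.
    print("all candidates:", posterior(words))

    # Context ("the word starts with 'ca'") shrinks the hypothesis set, and
    # the very same noisy observation now yields a much sharper decision.
    narrowed = {w: f for w, f in words.items() if w.startswith("ca")}
    print("narrowed set: ", posterior(narrowed))

With the full word list, probability gets spread across several candidates; once context rules most of them out, nearly all of it lands on one word -- which is, roughly, the computational analogue of suddenly "hearing" the lyrics.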

So there's another fascinating, and mind-boggling, piece of how our brains make sense of the world.  It's wonderful that evolution could shape such an amazingly adaptive device, though the survival advantage is obvious: the faster you are at pulling a signal out of the noise, the more likely you are to make the right call about what you're perceiving -- whether you're talking to a friend in a crowded bar, or you're a proto-hominid on the African savanna trying to figure out if that odd shape in the grass is a crouching lion.
