Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, October 26, 2022

Sounding off

Ever have the experience of getting into a car, closing the door, and accidentally shutting the seatbelt in the door?

What's interesting about this is that most of the time, we immediately realize it's happened, reopen the door, and pull the belt out.  It's barely even a conscious thought.  The sound is wrong, and that registers instantly.  We recognize when something "sounds off" about noises we're familiar with -- when latches don't seat properly, when the freezer door hasn't completely closed, even things like the difference between a batter's solid hit and a tip during a baseball game.

Turns out, scientists at New York University have just figured out that there's a brain structure that's devoted to that exact phenomenon.

A research team led by neuroscientist David Schneider trained mice to associate a particular sound with pushing a lever for a treat.  Once learned, the sound became as ingrained in their brains as our own expectation of what a closing car door is supposed to sound like.  If the tone was then varied even a little, or the timing between the lever push and the sound was changed, a specific part of the mouse's brain began to fire rapidly.

The activated part of the brain is a cluster of neurons in the auditory cortex, but I think of it as the "What The Hell Just Happened?" module.

"We listen to the sounds our movements produce to determine whether or not we made a mistake," Schneider said.  "This is most obvious for a musician or when speaking, but our brains are actually doing this all the time, such as when a golfer listens for the sound of her club making contact with the ball.  Our brains are always registering whether a sound matches or deviates from expectations.  In our study, we discovered that the brain is able to make precise predictions about when a sound is supposed to happen and what it should sound like...  Because these were some of the same neurons that would have been active if the sound had actually been played, it was as if the brain was recalling a memory of the sound that it thought it was going to hear."

As a musician, I find myself wondering if this is why I had such a hard time, when I first started performing on stage, unlearning my tendency to make a face whenever I hit a wrong note.  My bandmates said (rightly) that if it's not a real howler, most mistakes will zoom right past the audience unnoticed -- unless the musician clues them in by wincing.  (My bandmate Kathy also added that if it is a real howler, just play it that way again the next time that bit of the tune comes around, and the audience will think it's a deliberate "blue note" and be really impressed by how avant-garde we are.)

My band Crooked Sixpence, with whom I played for an awesome ten years -- l. to r., Kathy Selby (fiddle), me (flute), John Wobus (keyboard)

I found it a hard response to quell, though.  My awareness of having hit a wrong note was so instantaneous that it was almost as if my ears were connected directly to my facial wince-muscles, bypassing my brain entirely.  I did eventually get better, both in the sense of making fewer mistakes and of reacting less when I did hit a clam, but it definitely took a while for the flinch response to calm down.

It's interesting to speculate on why we have this sense, and evidently share it with other mammals.  The obvious explanation is that a spike of awareness about something sounding off could be a good clue to the presence of danger -- the time-honored trope in horror movies of one character saying something doesn't seem quite right.  (That character, however, is usually the first one to get eaten by the monster, so the response may be of dubious evolutionary utility, at least in horror movies.)

I find it endlessly fascinating how our brains have evolved independent little subunits for dealing with contingencies like this.  Our sensory processing systems are incredibly fine-tuned, and they can alert us to changes in our surroundings so quickly it hardly involves conscious thought.

Think about that the next time your car door doesn't close completely.

****************************************


Monday, March 8, 2021

Music on the brain

It is a source of tremendous curiosity to me why music is as powerful an influence as it is.  Music has been hugely important in my own life, and remains so to this day.  I remember my parents telling me stories about my early childhood, including tales of how, when I couldn't have been more than about four years old, I clamored to be allowed to use the record player myself.  At first they were reluctant, but my insistence finally won the day.  They showed me how to handle the records carefully, operate the buttons to drop the needle onto the record, and put everything away when I was done.  There were records I played over and over again (that I wasn't discouraged is a testimony to my parents' patience and forbearance) -- and I never damaged a single one.  They were simply too important to me to handle roughly.

The transformative experience of music is universal to the human species.  A 43,000-year-old carved bone found in Slovenia is thought by many to be one of the earliest musical instruments -- if that contention is correct, our drive to make music must be very old indeed.


The neurological underpinning of our musical experience, however, has not been easy to elucidate.  There has long been speculation that our affinity for music has something to do with the tonal expression of emotion in language, but that remains just that -- speculation.  Recently, however, three scientists in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology have shown that we have a dedicated module in our brains for experiencing and responding to music.

A team led by Sam Norman-Haignere did fMRIs of individuals who were listening to music, and of others listening to a variety of other familiar sounds (including human speech).  They then compared the type of sound to the neural response pattern in each voxel -- the three-dimensional pixel that is the basic unit of fMRI data -- to see if they could find correlations between them.

The relationship turned out to be unmistakable.  They found that there were distinct firing patterns in regions of the brain that occurred only when the subject was listening to music -- and that it didn't matter what the style of music was.  Norman-Haignere said, "The sound of a solo drummer, whistling, pop songs, rap, almost everything that had a musical quality to it, melodic or rhythmic, would activate it.  That's one reason the results surprised us."

The research team writes:
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels.  To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex...  Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex...  [This research] identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
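Those "weighted combinations," incidentally, amount to a matrix factorization: a voxel-by-sound response matrix decomposed into a handful of canonical profiles plus per-voxel weights.  Here's a minimal sketch of that idea using off-the-shelf non-negative matrix factorization on random stand-in data -- the authors used their own decomposition method, so treat this as an analogy rather than a reproduction:

```python
# Sketch: factor a voxel-by-sound response matrix into six canonical
# response profiles ("components") and per-voxel weights.  Random
# stand-in data; the paper's own decomposition algorithm differs.

import numpy as np
from sklearn.decomposition import NMF

n_voxels, n_sounds, n_components = 1000, 165, 6
rng = np.random.default_rng(0)
responses = rng.random((n_voxels, n_sounds))   # fMRI response magnitudes

model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
weights = model.fit_transform(responses)       # (1000, 6): per-voxel loadings
components = model.components_                 # (6, 165): canonical profiles

# A "music component" would be a row of `components` with high values
# for musical stimuli only, its column of `weights` concentrated in a
# distinct patch of non-primary auditory cortex.
print(weights.shape, components.shape)
```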
This study opens up a whole new approach to understanding why our auditory centers are structured the way they are, although it does still leave open the question of why music is so tremendously important across cultures. "Why do we have music?" study senior author Nancy Kanwisher said in an interview with the New York Times.  "Why do we enjoy it so much and want to dance when we hear it?  How early in development can we see this sensitivity to music, and is it tunable with experience?  These are the really cool first-order questions we can begin to address."

What I find the most curious about this is that the same region of the brain is firing in response to incredibly dissimilar inputs.  Consider, for example, the differences between a sitar solo, a Rossini aria, a Greydon Square rap, and a Bach harpsichord sonata.  Isn't it fascinating that we all have a part of the auditory cortex that responds to all of those -- regardless of our cultural background or musical preferences?

I find the whole thing tremendously interesting, and can only hope that the MIT team will continue their investigations.  I'm fascinated not only with the universality of musical appreciation, but the peculiar differences -- why, for example, I love Bach, Stravinsky, Shostakovich, and Vaughan Williams, but Chopin, Brahms, Mahler, and Schumann leave me completely cold.  Must be something about my voxels, I suppose -- but wouldn't it be cool to find out what it is?

****************************************

Last week's Skeptophilia book-of-the-week was about the ethical issues raised by gene modification; this week's is about the person who made CRISPR technology possible -- Nobel laureate Jennifer Doudna.

In The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race, author Walter Isaacson describes the discovery of how the bacterial enzyme complex called CRISPR-Cas9 can be used to edit genes of other species with pinpoint precision.  Doudna herself has been fascinated with scientific inquiry in general, and genetics in particular, since her father gave her a copy of The Double Helix and she was caught up in what Richard Feynman called "the joy of finding things out."  The story of how she and fellow laureate Emmanuelle Charpentier developed the technique that promises to revolutionize our ability to treat genetic disorders is a fascinating exploration of the drive to understand -- and a cautionary note about the responsibility of scientists to do their utmost to make certain their research is used ethically and responsibly.

If you like biographies, are interested in genetics, or both, check out The Code Breaker and find out how far we've come into the once-science-fiction world of curing genetic disease, altering DNA, and creating "designer children" -- and keep in mind that, whatever happens, this is only the beginning.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Thursday, December 24, 2020

Signal out of noise

I think I share with a lot of people a difficulty in deciphering what someone is saying when holding a conversation in a noisy room.  I can often pick out a few words, but understanding entire sentences is tricky.  A related phenomenon I've noticed is that if a song is playing while there's noise going on -- in a bar, or on earphones at the gym -- I often have no idea what the song is; I can't understand a single word, pick up the beat, or make out the melody, until something clues me in.  Then, all of a sudden, I find I'm able to hear it more clearly.

Some neuroscientists at the University of California, Berkeley have found out what's happening in the brain that causes this oddity in auditory perception.  In a paper in Nature Communications, authors Christopher R. Holdgraf, Wendy de Heer, Brian Pasley, Jochem Rieger, Nathan Crone, Jack J. Lin, Robert T. Knight, and Frédéric E. Theunissen studied how the perception of garbled speech changes when subjects are told what's being said -- and found, through a technique called spectrotemporal receptive field mapping, that the brain is able to retune itself in less than a second.

The authors write:
Experience shapes our perception of the world on a moment-to-moment basis.  This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models.  We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range.  This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.
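An STRF, conceptually, is just a linear filter from a sound's recent time-frequency history to a neural response, and the "retuning" the paper reports would show up as that filter changing once the listener knows what's being said.  Below is a deliberately simplified sketch -- ridge regression on random stand-in data, nothing like the paper's actual pipeline:

```python
# Sketch: fit a spectrotemporal receptive field (STRF) by regressing a
# neural response onto the last few frames of a spectrogram.  All data
# here are random placeholders.

import numpy as np
from sklearn.linear_model import Ridge

n_times, n_freqs, n_lags = 2000, 32, 10
rng = np.random.default_rng(1)
spectrogram = rng.random((n_times, n_freqs))    # stimulus: time x frequency

# One feature row per time point: the spectrogram's last n_lags frames.
X = np.stack([np.roll(spectrogram, lag, axis=0) for lag in range(n_lags)],
             axis=1).reshape(n_times, n_lags * n_freqs)[n_lags:]
y = rng.random(n_times - n_lags)                # neural response (stand-in)

strf = Ridge(alpha=1.0).fit(X, y).coef_.reshape(n_lags, n_freqs)
print(strf.shape)   # (lags, frequencies): the fitted receptive field
```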
What astonishes me about this is how quickly the brain is able to accomplish this -- although that is certainly matched by my own experience of suddenly being able to hear lyrics of a song once I recognize what's playing.  As James Anderson put it, writing about the research in ReliaWire, "The findings... confirm hypotheses that neurons in the auditory cortex that pick out aspects of sound associated with language, the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes, continually tune themselves to pull meaning out of a noisy environment."

A related phenomenon is visual priming, which occurs when people are presented with a seemingly meaningless pattern of dots and blotches, such as the following:


Once you're told that the image is a cow, it's easy enough to find -- and after that, impossible to unsee.

"Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise," said study co-author Frédéric Theunissen.  "It’s not like I am generating those words in my head.  I really have the feeling of hearing the words in the noise with this pop-out phenomenon.  It is such a mystery."

Apparently, once the set of possibilities of what you're hearing (or seeing) is narrowed, your brain is much better at extracting meaning from noise.  "Your brain tries to get around the problem of too much information by making assumptions about the world," co-author Christopher Holdgraf said.  "It says, ‘I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don’t have to do a lot of processing.’  By doing that, it is faster and expends less energy."

So there's another fascinating, and mind-boggling, piece of how our brains make sense of the world.  It's wonderful that evolution could shape such an amazingly adaptive device, although the survival advantage is obvious.  The faster you are at pulling a signal out of the noise, the more likely you are to make the right decisions about what it is that you're perceiving -- whether it's you talking to a friend in a crowded bar or a proto-hominid on the African savanna trying to figure out if that odd shape in the grass is a crouching lion.

****************************************

Not long ago I was discussing with a friend of mine the unfortunate tendency of North Americans and Western Europeans to judge everything based upon their own culture -- and to assume everyone else in the world sees things the same way.  (An attitude that, in my opinion, is far worse here in the United States than anywhere else, but since the majority of us here are the descendants of white Europeans, that attitude didn't come out of nowhere.)  

What that means is that people like me, who live somewhere WEIRD -- Western, educated, industrialized, rich, and democratic -- automatically have blinders on.  And these blinders affect everything, up to and including things like supposedly variable-controlled psychological studies, which are usually conducted by WEIRDs on WEIRDs, and so interpret results as universal when they might well be culturally dependent.

This is the topic of a wonderful new book by anthropologist Joseph Henrich called The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous.  It's a fascinating lens into a culture that has become so dominant on the world stage that many people within it staunchly believe it's quantifiably the best one -- and some act as if it's the only one.  It's an eye-opener, and will make you reconsider a lot of your baseline assumptions about what humans are and the ways we see the world -- of which science historian James Burke rightly said, "there are as many different versions of that as there are people."

[Note:  If you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Wednesday, October 23, 2019

A chat at the pub

When I'm out in a crowded bar, I struggle with something that I think a lot of us do -- trying to isolate the voice of the person I'm talking to from all of the background noise.

I can do it, but it's a struggle.  When I'm tired, or have had one too many pints of beer, I find that my ability to hear what my friend is saying suddenly disappears, as if someone had flipped off a switch.  His voice is swallowed up by a cacophony of random noise in which I literally can't isolate a single word.

Usually my indication that it's time to call it a night.

[Image is in the Public Domain]

It's an interesting question, though, how we manage to do this at all.  Think about it; the person you're listening to is probably closer to you than the other people in the pub, but the others might well be louder.  Add to that the cacophony of glasses clinking and music blaring and whatever else might be going on around you, and the likelihood is that your friend's overall vocal volume is probably about the same as anyone or anything else picked up by your ears.

Yet most of us can isolate that one voice and hear it distinctly, and tune out all of the other voices and ambient noise.  So how do you do this?

Scientists at Columbia University got a glimpse of how our brains might accomplish this amazing task in a set of experiments described in a paper that appeared in the journal Neuron this week.  In "Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception," by James O’Sullivan, Jose Herrero, Elliot Smith, Catherine Schevon, Guy M. McKhann, Sameer A. Sheth, Ashesh D. Mehta, and Nima Mesgarani, we find out that one part of the brain -- the superior temporal gyrus (STG) -- seems to be capable of boosting the gain of a sound we want to pay attention to, and to do so virtually instantaneously.

The auditory input we receive is a complex combination of acoustic vibrations in the air, all arriving at the same time, so sorting them out is no mean feat.  (Witness how long it's taken to develop good speech-transcription software -- which, even now, is fairly slow and inaccurate.)  Yet your brain can do it flawlessly (well, for most of us, most of the time).  What O'Sullivan et al. found was that once received by the auditory cortex, the neural signals are passed through two regions -- first Heschl's gyrus (HG), and then the STG.  The HG seems to create a multi-dimensional neural representation of what you're hearing, but doesn't really pick out one set of sounds as being more important than another.  The STG, though, is able to sort through that tapestry of electrical signals and amplify the ones it decides are more important.

"We’ve long known that areas of auditory cortex are arranged in a hierarchy, with increasingly complex decoding occurring at each stage, but we haven’t observed how the voice of a particular speaker is processed along this path," said study lead author James O’Sullivan in a press release.  "To understand this process, we needed to record the neural activity from the brain directly...  We found that that it’s possible to amplify one speaker’s voice or the other by correctly weighting the output signal coming from HG.  Based on our recordings, it’s plausible that the STG region performs that weighting."

The research has a lot of potential applications, not only for computerized vocal recognition, but for guiding the creation of devices to help the hearing impaired.  It's long been an issue that traditional hearing aids amplify everything equally, so a hearing-impaired individual in a noisy environment has to turn up the volume to hear what (s)he wants to listen to, but this can make the ambient background noise deafeningly loud.  If software can be developed that emulates what the STG does, it might create a much more natural-sounding and comfortable experience.

All of which is fascinating, isn't it?  The more we learn about our own brains, the more astonishing they seem.  Abilities we take entirely for granted are being accomplished by incredibly complex arrays and responses in that 1.3-kilogram "meat machine" sitting inside our skulls, often using mechanisms that still amaze me even after thirty-odd years of studying neuroscience.  

And it leaves me wondering what we'll find out about our own nervous systems in the next thirty years.

****************************************

In keeping with Monday's post, this week's Skeptophilia book recommendation is about one of the most enigmatic figures in mathematics: the Indian prodigy Srinivasa Ramanujan.  Ramanujan was remarkable not only for his adeptness in handling numbers, but for his insight; one of his most famous moments was the discovery of "taxicab numbers" (I'll leave you to read the book to find out why they're called that), which are numbers expressible as the sum of two cubes in two different ways.

For example, 1,729 is the sum of 1 cubed and 12 cubed; it's also the sum of 9 cubed and 10 cubed.
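If you'd like to convince yourself, a brute-force search finds 1,729 -- and the next such number, 4,104 -- almost instantly.  A short sketch in Python:

```python
# Find numbers below `limit` that are a sum of two positive cubes in
# at least two different ways.
from collections import defaultdict

def taxicab_numbers(limit):
    ways = defaultdict(list)
    a = 1
    while a ** 3 < limit:
        b = a
        while a ** 3 + b ** 3 < limit:
            ways[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return {n: p for n, p in sorted(ways.items()) if len(p) >= 2}

print(taxicab_numbers(5000))
# {1729: [(1, 12), (9, 10)], 4104: [(2, 16), (9, 15)]}
```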

What's fascinating about Ramanujan is that when he discovered this, it just leapt out at him.  He looked at 1,729 and immediately recognized that it had this odd property.  When he shared it with a friend, he was kind of amazed that the friend didn't jump to the same realization.

"How did you know that?" the friend asked.

Ramanujan shrugged.  "It was obvious."

The Man Who Knew Infinity by Robert Kanigel is the story of Ramanujan, whose life ended from tuberculosis at the young age of 32.  It's a brilliant, intriguing, and deeply perplexing book, looking at the mind of a savant -- someone who is so much better than most of us at a particular subject that it's hard even to conceive.  But Kanigel doesn't just hold up Ramanujan as some kind of odd specimen; he looks at the human side of a man whose phenomenal abilities put him in a class by himself.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]





