Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, March 5, 2025

Watch your tone!

You probably know that there are many languages -- the most commonly cited are Mandarin and Thai -- that are tonal: the pitch, and the pitch change across a syllable, alter its meaning.  For example, in Mandarin, the syllable "ma" spoken with a high steady tone means "mother;" with a falling-then-rising tone, it means "horse."

If your mother is anything like mine was, confusing these is not a mistake you'd make twice.

A pitch vs. time graph of the five tones in Thai [Image licensed under the Creative Commons Thtonesen.jpg: Lemmy Laffer derivative from Bjankuloski06en, Thai tones, CC BY-SA 3.0]

English is not tonal, but there's no doubt that pitch and stress change can communicate meaning.  The difference is that pitch alterations in English don't change the denotative (explicit) meaning, but can drastically change the connotative (implied) meaning.  Consider the following sentence:

He told you he gave the package to her?

Spoken with a neutral tone, it's simply an inquiry about a person's words and actions.  Now, one at a time, change which word is stressed:

  • HE told you he gave the package to her?  (Implies the speaker was expecting someone else to do it.)
  • He TOLD you he gave the package to her?  (Implies surprise that you were told about the action.)
  • He told YOU he gave the package to her?  (Implies surprise that you were the one told about it.)
  • He told you he GAVE the package to her?  (Implies the speaker expected the package to have been paid for.)
  • He told you he gave the PACKAGE to her?  (Implies that some different item was expected to be given.)
  • He told you he gave the package to HER?  (Implies surprise at the recipient of the package.)

Differences in word choice can also create sentences with identical denotative meanings and drastically different connotative meanings.  Consider "Have a nice day" vs. "I hope you manage to enjoy your next twenty-four hours," and "Forgive me, Father, for I have sinned" vs. "I'm sorry, Daddy, I've been bad."

You get the idea.

All of this is why mastery of a language you weren't born to is a long, fraught affair.

The topic comes up because of some new research out of Northwestern University that identified the part of the brain responsible for recognizing and abstracting meaning from pitch and inflection -- what linguists call the prosody of a language.  A paper this week in Nature Communications showed that Heschl's gyrus, a small structure in the superior temporal lobe, actively analyzes spoken language for subtleties of rhythm and tone and converts those perceived differences into meaning.

"Our study challenges the long-standing assumptions about how and where the brain picks up on the natural melody in speech -- those subtle pitch changes that help convey meaning and intent," said G. Nike Gnanataja, who was co-first author of the study.  "Even though these pitch patterns vary each time we speak, our brains create stable representations to understand them."

"The results redefine our understanding of the architecture of speech perception," added Bharath Chandrasekaran, the other co-first author.  "We've spent a few decades researching the nuances of how speech is abstracted in the brain, but this is the first study to investigate how subtle variations in pitch that also communicate meaning are processed in the brain."

It's fascinating that we have a brain area dedicated to discerning alterations in the speech we hear, and curious that similar research on other primates shows that while they have a Heschl's gyrus, it doesn't respond to changes in prosody.  (What exact role it does have in other primates is still a subject of study.)  This makes me wonder if it's yet another example of exaptation -- where a structure, enzyme system, or gene evolves in one context, then gets co-opted for something else.  If so, our ancestors' capacity for using their Heschl's gyri to pick up on subtleties of speech drastically enriched their abilities to encode meaning in language.

But I should wrap this up, because I need to go do my Japanese language lessons for the day.  Japanese isn't tonal, but word choice strongly depends on the relative status of the speaker and the listener, so which words you use is critical if you don't want to be looked upon as either boorish on the one hand, or putting on airs on the other.

I wonder how the brain figures all that out?

****************************************


Saturday, April 29, 2023

Pitch perfect

Consider the simple interrogative English sentence, "She gave the package to him today?"

Now, change one at a time which word is stressed:

  • "SHE gave the package to him today?"
  • "She GAVE the package to him today?"
  • "She gave the PACKAGE to him today?"
  • "She gave the package to HIM today?"
  • "She gave the package to him TODAY?"

English isn't a tonal language -- one where patterns of rise and fall of pitch change the meaning of a word -- but stress (usually marked by pitch and loudness changes) sure can change the connotation of a sentence.  In the above example, the first one communicates incredulity that she was the one who delivered the package (the speaker expected someone else to do it), while the last one clearly indicates that the package should have been handed over some other time than today.

In tonal languages, like Mandarin, Thai, and Vietnamese, pitch shifts within words completely change the word's meaning.  In Mandarin, for example, mā (the syllable spoken with a high level tone) means "mother," while mǎ (the syllable spoken with a dip in tone in the middle, followed by a quick rise) means "horse."  While this may sound complex to people -- like myself -- who don't speak a tonal language, if you learn it as a child it simply becomes another marker of meaning, like the stress shifts I gave in my first example.  My guess is that if you're a native English speaker and you heard any of the above sentences spoken aloud, you wouldn't even have to think about what subtext the speaker was trying to communicate.
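To make the idea concrete, here's a toy sketch (purely illustrative -- a hypothetical dictionary, not a real linguistics library) showing how a single segmental syllable in Mandarin maps to entirely different words depending only on its pitch contour, using the standard pinyin tone numbers:

```python
# Toy illustration: in a tonal language, tone is as lexically
# distinctive as a vowel or consonant.  The same syllable "ma"
# yields four unrelated Mandarin words across the four tones.
MANDARIN_MA = {
    "ma1": ("high level", "mother"),        # mā 妈
    "ma2": ("rising", "hemp"),              # má 麻
    "ma3": ("falling-rising", "horse"),     # mǎ 马
    "ma4": ("falling", "to scold"),         # mà 骂
}

def gloss(syllable_with_tone: str) -> str:
    """Return a human-readable gloss for a tone-annotated syllable."""
    contour, meaning = MANDARIN_MA[syllable_with_tone]
    return f"'{syllable_with_tone}' ({contour} tone) means '{meaning}'"

for s in MANDARIN_MA:
    print(gloss(s))
```

Swap the tone number and you haven't changed the pronunciation's "spelling" at all -- you've changed the word, which is exactly the distinction non-tonal-language speakers' brains aren't trained to treat as lexical.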

What's interesting about all this is that because most of us learn spoken language when we're very little, which language(s) we're exposed to alters the wiring of the language-interpretive structures in our brain.  Expose a child to distinctive differences early (like the tone shifts in Mandarin), and the brain adjusts to handle those differences and interpret them easily.  It works the other way, too; the Japanese liquid consonant /ɾ/, such as the second consonant in the city name Hiroshima, is usually transcribed into English as an "r," but the sound it represents is often described as halfway between an English /r/ and an English /l/.  Technically, it's an apico-alveolar tap -- similar to the middle consonant in the most common American English pronunciation of bitter and butter.  The fascinating part is that monolingual Japanese children lose the sense of a distinction between /r/ and /l/, and when learning English as a second language, they not only often have a hard time pronouncing them as distinct phonemes, they have a hard time hearing the difference when listening to native English speakers.

All of this is yet another example of the Sapir-Whorf hypothesis -- that the language(s) you speak alter your neurology, and therefore how you perceive the world -- something I've written about here before.

The reason all this comes up is a study in Current Biology this week showing that the language we speak modifies our musical ability -- and that speakers of tonal languages show an enhanced ability to remember melodies, but a decreased ability to mimic rhythms.  Makes sense, of course; if tone carries meaning in the language you speak, it's understandable your brain pays better attention to tonal shifts.

The rhythm thing, though, is interesting.  I've always had a natural rhythmic sense; my bandmate once quipped that if one of us played a wrong note, it was probably me, but if someone screwed up the rhythm, it was definitely her.  Among other styles, I play a lot of Balkan music, which is known for its oddball asymmetrical rhythms -- such wacky time signatures as 7/8, 11/16, 18/16, and (I kid you not) 25/16:


I picked up Balkan rhythms really quickly.  I have no idea where this ability came from.  I grew up in a relatively non-musical family -- neither of my parents played an instrument, and while we had records that were played occasionally, nobody in my extended family has anywhere near the passion for music that I do.  I have a near-photographic memory for melodies, and an innate sense of rhythm -- whatever its source.

In any case, the study is fascinating, and gives us some interesting clues about the link between language and music: the language we speak remodels our brain and changes how we hear and understand the music we listen to.  The two are deeply intertwined, there's no doubt about that; singing is a universal phenomenon.  And making music of other sorts goes back to our Neanderthal forebears, on the order of forty thousand years ago, to judge by the Divje Babe bone flute.

I wonder how this might be connected to what music we react emotionally to.  This is something I've wondered about for ages; why certain music (a good example for me is Stravinsky's Firebird) creates a powerful emotional reaction, and other pieces generate nothing more than a shoulder shrug.

Maybe I need to listen to Firebird and ponder the question further.

****************************************



Wednesday, July 24, 2019

Meaning in music

As someone fascinated by neuroscience, language, and music, you can imagine how excited I was to find some new research that combined all three.

A link sent to me by a loyal reader of Skeptophilia describes a study that is the subject of a paper in Nature Neuroscience last week with the rather intimidating title "Divergence in the Functional Organization of Human and Macaque Auditory Cortex Revealed by fMRI Responses to Harmonic Tones."  Written by Sam V. Norman-Haignere (Columbia University), Nancy Kanwisher (MIT), Josh H. McDermott (MIT), and Bevil R. Conway (National Institutes of Health), the paper shows evidence that even our close primate relatives don't have the capacity for discriminating harmonic tones that humans have -- that our perception of music may well be a uniquely human capacity.

"We found that a certain region of our brains has a stronger preference for sounds with pitch than macaque monkey brains," said Bevil Conway, senior author of the study.  "The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain."

Monkeys, apparently, respond equally to atonal/aharmonic sounds, while humans have a specific neural module that lights up on an fMRI scan when the sounds they hear are tonal in nature.  "These results suggest the macaque monkey may experience music and other sounds differently," Conway said.  "In contrast, the macaque's experience of the visual world is probably very similar to our own.  It makes one wonder what kind of sounds our evolutionary ancestors experienced."

[Image is in the Public Domain]

It immediately put me in mind of tonal languages (such as Thai and Chinese) where the same syllable spoken with a rising, falling, or steady tone completely changes its denotative meaning.  Even non-tonal languages (like English) express connotation with tone, such as the rising tone at the end of a question.  And subtleties like stress patterns can substantially change the meaning.  For example, consider the sentence "She told me to give you the money today?"  Now, read it aloud while stressing the words as follows:
  • SHE told me to give you the money today?
  • She TOLD me to give you the money today?
  • She told ME to give you the money today?
  • She told me to GIVE you the money today?
  • She told me to give YOU the money today?
  • She told me to give you the MONEY today?
  • She told me to give you the money TODAY?
No two of these connote the same idea, do they?

I'm reminded of how the brilliant neuroscientist David Eagleman describes the concept of the umwelt of an organism:
In 1909, the biologist Jakob von Uexküll introduced the concept of the umwelt.  He wanted a word to express a simple (but often overlooked) observation: different animals in the same ecosystem pick up on different environmental signals.  In the blind and deaf world of the tick, the important signals are temperature and the odor of butyric acid. For the black ghost knifefish, it's electrical fields.  For the echolocating bat, it's air-compression waves.  The small subset of the world that an animal is able to detect is its umwelt... 
The interesting part is that each organism presumably assumes its umwelt to be the entire objective reality "out there."  Why would any of us stop to think that there is more beyond what we can sense?
So tone, apparently, is part of the human umwelt, but not that of macaques (and probably other primate species).  Perhaps other animals include tone in their umwelt, but that point is uncertain.  I'd guess that these would include many bird species, which communicate using (often very complex) songs.  Echolocating cetaceans and bats, maybe.  Other than that, probably not many.

"This finding suggests that speech and music may have fundamentally changed the way our brain processes pitch," Conway said.  "It may also help explain why it has been so hard for scientists to train monkeys to perform auditory tasks that humans find relatively effortless."

I wonder what music sounds like to my dogs?  I get a curious head-tilt when I play the piano or flute, and I once owned a dog who would curl up at my feet while I practiced.  Both my dogs, however, immediately remember other pressing engagements and leave the premises as soon as I take out my bagpipes.

Although most humans do the same thing, so maybe that part's not about tonal perception per se.

************************************

The subject of Monday's blog post gave me the idea that this week's Skeptophilia book recommendation should be a classic -- Konrad Lorenz's Man Meets Dog.  This book, written back in 1949, is an analysis of the history and biology of the human/canine relationship, and is a must-read for anyone who owns, or has ever owned, a doggy companion.

Given that it's seventy years old, some of the factual information in Man Meets Dog has been superseded by new research -- especially about the genetic relationships between various dog breeds, and between domestic dogs and other canid species in the wild.  But his behavioral analysis is impeccable, and is written in his typical lucid, humorous style, with plenty of anecdotes that other dog lovers will no doubt relate to.  It's a delightful read!
