There's a tremendous diversity in how languages work. On the most basic level, the phonetics of languages can differ greatly; each language has a unique sound structure. Some sound really different, at least to my English-speaking brain; consider Xhosa, a language spoken by over ten million people in South Africa, which has three different consonants that are clicks (usually written "c" for the dental click, "x" for the lateral click, and "q" for the palatal click). If you want to hear Xhosa sung, check out this video of the legendary Miriam Makeba singing the song "Qongqothwane:"
Another complication is tonality -- for many languages, the same syllable spoken with a rising vs. a falling tone actually has a completely different meaning. (English only has one consistent tonal feature, which is that a rise in tone at the end of a sentence can denote a question, but the pitch change doesn't alter the meaning, as it does in many languages.)
It can be odder than that, though. There are whistled languages, such as Silbo in the Canary Islands. Many examples exist -- France, Greece, Turkey, India, Nepal, and Mexico all have groups who communicate by whistling (although they also have spoken language; no group I've ever heard of communicates exclusively by whistles). Along the same lines -- and it was recent research on this topic that spurred this post -- are drummed languages.
Linguist Frank Seifart was researching endangered languages in Colombia and arrived at a village where the Bora language is spoken while the chief was away. The chief was sent for -- by someone drumming out a pattern that meant, "A stranger has arrived. Come home."
And it's not just a code, like Morse code; the drumbeat patterns actually mimic the changes in timbre, pitch, and rhythm of the speech the drummer is trying to emulate. The paper, which appeared in the journal Royal Society Open Science last week, was titled, "Reducing Language to Rhythm: Amazonian Bora Drummed Language Exploits Speech Rhythm for Long-Distance Communication," and begins as follows:
Many drum communication systems around the world transmit information by emulating tonal and rhythmic patterns of spoken languages in sequences of drumbeats. Their rhythmic characteristics, in particular, have not been systematically studied so far, although understanding them represents a rare occasion for providing an original insight into the basic units of speech rhythm as selected by natural speech practices directly based on beats. Here, we analyse a corpus of Bora drum communication from the northwest Amazon, which is nowadays endangered with extinction. We show that four rhythmic units are encoded in the length of pauses between beats. We argue that these units correspond to vowel-to-vowel intervals with different numbers of consonants and vowel lengths. By contrast, aligning beats with syllables, mora or only vowel length yields inconsistent results. Moreover, we also show that Bora drummed messages conventionally select rhythmically distinct markers to further distinguish words. The two phonological tones represented in drummed speech encode only few lexical contrasts. Rhythm thus appears to crucially contribute to the intelligibility of drummed Bora. Our study provides novel evidence for the role of rhythmic structures composed of vowel-to-vowel intervals in the complex puzzle concerning the redundancy and distinctiveness of acoustic features embedded in speech.

An amusing part of the research is that in the Bora drummed language, each message is followed by a pattern that means, "Now, don't say that I am a liar." Seifart says that the gist is much like a parent yelling at a child, "Don't tell me you didn't hear me!"
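The core claim -- that meaning rides on small differences in the pauses between beats -- can be sketched as a toy classifier. To be clear, everything in this sketch (the unit names, the millisecond thresholds) is invented purely for illustration; the paper's actual measured Bora intervals are different, and a real decoder would also use the two drum tones:

```python
# Toy illustration (NOT the paper's model): a drummed message can carry
# information if listeners reliably sort inter-beat pause lengths into a
# small number of categories. The four "units" and all thresholds below
# are hypothetical placeholders.

def classify_pause(pause_ms):
    """Map a pause duration (in milliseconds) to one of four hypothetical rhythmic units."""
    if pause_ms < 250:
        return "unit-1"   # e.g., a short vowel-to-vowel interval
    elif pause_ms < 350:
        return "unit-2"
    elif pause_ms < 450:
        return "unit-3"
    else:
        return "unit-4"   # e.g., a long vowel or extra consonants

def decode(pauses_ms):
    """Turn the sequence of pauses in a drummed message into a sequence of units."""
    return [classify_pause(p) for p in pauses_ms]

# A made-up beat sequence with four pauses:
print(decode([200, 300, 500, 400]))
```

The point of the sketch is just that a one-dimensional signal (pause length), quantized into a handful of categories, can reconstruct a surprising amount of linguistic structure -- which is why the milliseconds matter so much.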
The whole thing is fascinating -- when communicating over distances long enough that our voices won't reach, people have invented new ways to send messages -- and those new ways incorporate many of the phonetic, tonal, and syntactic frameworks of the original language.
The biologist in me, however, is curious about how this is being processed in the brain. Does drummed speech get interpreted in the same place in the brain where spoken language is? There's been a parallel study on whistled languages: Onur Güntürkün, a biopsychologist at the Institute of Cognitive Neuroscience in Bochum, Germany, found an intriguing difference in brain activity while listening to whistled versus spoken language. Since we process melodic tones primarily in the right side of the cerebrum and language primarily in the left, Güntürkün suspected that whistled languages would activate both sides equally -- and he was right.
As for drummed languages, Güntürkün was especially interested in how the content of messages could be conveyed by milliseconds-long variations in the rhythm pattern. "I’m amazed that these tiny milliseconds are doing the job," he said, adding that the next step is an analysis of how the two hemispheres of the brain process drummed speech, specifically its timing cues.
All of which brings home again not only the amazing processing power of the brain, but the drive in humans to communicate. It emphasizes once again the importance of preserving these endangered languages -- not only for reasons of protecting people's cultural identities, but for what it tells us about the neurological underpinning of our own minds.
This week's featured book on Skeptophilia should be in every good skeptic's library: Michael Shermer's Why People Believe Weird Things. It's a no-holds-barred assault against goofy thinking, taking on such counterfactual beliefs as psychic phenomena, creationism, past-life regression, and Holocaust denial. Shermer, the founder of Skeptic magazine, is a true crusader, and his book is a must-read. You can buy it at the link below!