Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Thursday, September 19, 2024

Onomatopoeia FTW

Given my ongoing fascination with languages, it's a little surprising that it took me this long to come across a paper published a while back in the Proceedings of the National Academy of Sciences.  Entitled "Sound–Meaning Association Biases Evidenced Across Thousands of Languages," this study proposes something that is deeply astonishing: that the connection between the sounds in a word and the meaning of the word may not be arbitrary.

It's a fundamental tenet of linguistics that language is defined as "arbitrary symbolic communication."  Arbitrary because there is no special connection between the sound of a word and its meaning, with the exception of the handful of words that are onomatopoeic (such as boom, buzz, splash, and splat).  Otherwise, the phonemes that make up the word for a concept would be expected to have nothing to do with the concept itself, and therefore would vary randomly from language to language (the word bird is no more fundamentally birdy than the French word oiseau is fundamentally oiseauesque).

That idea may have to be revised.  Damián E. Blasi (of the University of Zürich), Søren Wichmann (of the University of Leiden), Harald Hammarström and Peter F. Stadler (of the Max Planck Institute), and Morten H. Christiansen (of Cornell University) did an exhaustive statistical study, using dozens of basic vocabulary words drawn from 62% of the world's roughly six thousand languages, covering 85% of its linguistic lineages and language families.  And what they found was that there are some striking patterns when you look at the phonemes represented in a variety of linguistic morphemes, patterns that held true even with completely unrelated languages.  Here are a few of the correspondences they found:
  • The word for ‘nose’ is likely to include the sounds ‘neh’ or the ‘oo’ sound, as in ‘ooze.’
  • The word for ‘tongue’ is likely to have ‘l’ or ‘u.’
  • ‘Leaf’ is likely to include the sounds ‘b,’ ‘p’ or ‘l.’
  • ‘Sand’ will probably use the sound ‘s.’
  • The words for ‘red’ and ‘round’ often appear with ‘r.’
  • The word for ‘small’ often contains the sound ‘i.’
  • The word for ‘I’ is unlikely to include sounds involving u, p, b, t, s, r and l.
  • ‘You’ is unlikely to include sounds involving u, o, p, t, d, q, s, r and l.
"These sound symbolic patterns show up again and again across the world, independent of the geographical dispersal of humans and independent of language lineage," said Morten Christiansen, who led the study.  "There does seem to be something about the human condition that leads to these patterns.  We don’t know what it is, but we know it’s there."

[Image licensed under the Creative Commons M. Adiputra, Globe of language, CC BY-SA 3.0]

One possibility is that these correspondences are actually not arbitrary at all, but are leftovers from (extremely) ancient history -- fossils of the earliest spoken language, which all of today's languages, however distantly related, descend from.  The authors write:
From a historical perspective, it has been suggested that sound–meaning associations might be evolutionarily preserved features of spoken language, potentially hindering regular sound change.  Furthermore, it has been claimed that widespread sound–meaning associations might be vestiges of one or more large-scale prehistoric protolanguages.  Tellingly, some of the signals found here feature prominently in reconstructed “global etymologies” that have been used for deep phylogeny inference.  If signals are inherited from an ancestral language spoken in remote prehistory, we might expect them to be distributed similarly to inherited, cognate words; that is, their distribution should to a large extent be congruent with the nodes defining their linguistic phylogeny.
But this point remains to be tested.  And there's an argument against it; if these similarities come from common ancestry, you'd expect not only the sounds, but their positions in words, to have been conserved (such as in the English/German cognate pair laugh and lachen).  In fact, that is not the case.  The sounds are similar, but their positions in the word show no discernible pattern.  The authors write:
We have demonstrated that a substantial proportion of words in the basic vocabulary are biased to carry or to avoid specific sound segments, both across continents and linguistic lineages.  Given that our analyses suggest that phylogenetic persistence or areal dispersal are unlikely to explain the widespread presence of these signals, we are left with the alternative that the signals are due to factors common to our species, such as sound symbolism, iconicity, communicative pressures, or synesthesia...  [A]lthough it is possible that the presence of signals in some families are symptomatic of a particularly pervasive cognate set, this is not the usual case.  Hence, the explanation for the observed prevalence of sound–meaning associations across the world has to be found elsewhere.
Which I think is both astonishing and fascinating.  What possible reason could there be that the English word tree is composed of the three phonemes it contains?  The arbitrariness of the sound/meaning relationship seemed so obvious to me when I first learned about it that I didn't even stop to question how we know it's true.

Generally a dangerous position for a skeptic to be in.

I hope that the research on this topic is moving forward, because it certainly would be cool to find out what's actually going on here.  I'll have to keep my eyes out for any follow-ups.  But now I'm going to go get a cup of coffee, which I think we can all agree is a nice, warm, comforting-sounding word.
  
****************************************


Wednesday, March 13, 2024

Speaking beauty

My novel In the Midst of Lions, the first of a trilogy, has a character named Anderson Quaice, who is a linguistics professor.  He also has a strong pessimistic streak, something that proves justified in the course of the story.  He develops a conlang called Kalila not only as an entertaining intellectual exercise, but because he fears that civilization is heading toward collapse, and he wants a way to communicate with his friends that will not be understood by (possibly hostile) outsiders.

Kalila provides a framework for the entire trilogy, which spans over fourteen centuries.  I wanted the conlang to follow a trajectory similar to Latin's; by the second book, The Scattering Winds, Kalila has become the "Sacred Language," used in rituals and religion; by the third, The Chains of Orion, it has been relegated to a small role as a historical curiosity, something learned (and mourned!) only by academics, and which few speak fluently.

But of course, in order to incorporate it into the narrative, I had to invent the conlang.  While I'm not a professor like Quaice, my master's degree is in historical linguistics, so I have a fairly solid background for comprehending (and thus creating) a language structure.  I've mostly studied inflected languages, like Old Norse, Old English, Latin, and Greek -- ones where nouns, verbs, and adjectives change form depending on how they're being used in sentences -- so I decided to make Kalila inflected.  (Interestingly, along the way English lost most of its noun inflections; in the sentences The dog bit the cat and The cat bit the dog you know who bit whom by word order, not because the words dog and cat change form, as they would in most inflected languages.  English does retain a few inflections, holdovers from its Old English roots -- he/him/his, she/her/hers, they/them/theirs, and who/whom are examples of inflections we've hung onto.)

One of the interesting choices I had to make centers on phonetics.  What repertoire of sounds did I want Kalila to have?  I decided I was aiming for something vaguely Slavic-sounding, with a few sound combinations and placements you don't find in English (for example, the initial /zl/ combination in the word for "quick," zlavo).  I included only one sound that isn't found in English -- the unvoiced velar fricative (the final sound in the name Bach), which in accordance with the International Phonetic Alphabet I spelled with the letter "x" in the written form; lexa, pronounced /lexa/, means "hand."
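This sort of design is easy to prototype in code, by the way.  Below is a toy word generator in the spirit of what I did for Kalila -- though apart from the /zl/ onset and the use of "x" for the velar fricative, everything in it (the inventory, the syllable shapes) is invented for the example, not taken from the actual language:

# Toy conlang word generator.  Only the /zl/ onset and the letter 'x' for the
# unvoiced velar fricative come from Kalila; the rest of the inventory and the
# CV(C) syllable shape are made up for illustration.
import random

ONSETS = ["b", "d", "g", "k", "l", "m", "n", "r", "s", "t", "v", "z", "x", "zl"]
VOWELS = ["a", "e", "i", "o", "u"]
CODAS  = ["", "", "", "l", "n", "r", "s"]   # extra empty strings favor open syllables

def syllable():
    return random.choice(ONSETS) + random.choice(VOWELS) + random.choice(CODAS)

def word():
    return "".join(syllable() for _ in range(random.randint(2, 3)))

random.seed(42)
print(", ".join(word() for _ in range(6)))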

Of course, in the end I used about one percent of all the syntax and morphology and lexicon and whatnot I'd invented in the actual story.  But it was still a lot of fun to create.

The topic comes up because of a really cool study that recently came out in the journal Language and Speech, by a team led by linguist Christine Mooshammer of Humboldt University in Berlin.  The researchers wanted to find out why some languages are perceived as more pleasant-sounding than others -- but to avoid the bias that would come with actual spoken languages, they confined their analysis to conlangs such as Quenya, Sindarin, Dothraki, Klingon, Cardassian, Romulan, and Orkish.

The first stanza of a poem in Quenya, written in the lovely Tengwar script Tolkien invented [Image is in the Public Domain]

The results, perhaps unsurprisingly, rated Quenya and Sindarin (the two main Elvish languages in Tolkien's world) as the most pleasant-sounding, and Dothraki (from Game of Thrones) and Klingon as the most unpleasant.  Interestingly, Orkish -- at least when not being snarled by characters like Azog the Defiler -- was ranked somewhere in the middle.

Some of their conclusions:

  • Languages with lower consonantal clustering were rated as more pleasant.  (On the extreme low end of this scale are Hawaiian and Japanese, which have almost no consonant clusters at all.)
  • A higher frequency of front vowels (such as /i/ and /e/) as opposed to back vowels (such as /o/ and /u/) correlates with higher pleasantness ratings.
  • Languages with a higher frequency of continuants (such as /l/, /r/, and /m/) as opposed to stops and plosives (like /t/ and /p/) were ranked as more pleasant-sounding.
  • Higher numbers of unvoiced sibilants (such as /s/) and velars (such as the /x/ I used in Kalila) correlated with a lower ranking for pleasantness.
  • The more similar the phonemic inventory of the conlang was to the test subject's native language, the more pleasant the subject thought it sounded; familiarity, apparently, is important.

This last one introduces the bias I mentioned earlier, something that Mooshammer admits is a limitation of the study.  "One of our main findings was that Orkish doesn’t sound evil without the special effects, seeing the speakers and hearing the growls and hissing sounds in the movies," she said, in an interview with PsyPost.  "Therefore, the average person should be aware of the effect of stereotypes that do influence the perception of a language.  Do languages such as German sound orderly and unpleasant and Italian beautiful and erotic because of their sounds, or just based on one’s own attitude toward their speakers?"
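Stereotypes aside, the first four findings are mechanical enough to turn into a toy scorer.  The weights below are my own invention -- the study reports correlations, not a scoring formula -- but each term runs in the direction the researchers found:

# Toy "pleasantness" scorer based on the four phonetic tendencies above.
# Weights are invented for illustration, and spelling stands in crudely
# for phonemes.
FRONT_VOWELS = set("ie")
BACK_VOWELS  = set("ou")
CONTINUANTS  = set("lmnrw")
STOPS        = set("pbtdkg")
HARSH        = set("sx")   # unvoiced sibilants and velar fricatives, roughly

def pleasantness(word):
    w = word.lower()
    score  = sum(c in FRONT_VOWELS for c in w) - sum(c in BACK_VOWELS for c in w)
    score += sum(c in CONTINUANTS for c in w)  - sum(c in STOPS for c in w)
    score -= 2 * sum(c in HARSH for c in w)
    shape = "".join("V" if c in "aeiou" else "C" for c in w)
    score -= shape.count("CC")   # penalize consonant clusters
    return score

for sample in ["namarie", "khazad", "zlavo"]:
    print(sample, pleasantness(sample))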

I wonder how the test subjects would have ranked spoken Kalila?  If the researchers want a sample, I'd be happy to provide it.

It's a fun study, which I encourage you to read in its entirety.  It brings up the bigger question, though, of why we find anything aesthetically pleasing.  I'm fascinated by why certain pieces of music are absolutely electrifying to me (one example is Stravinsky's Firebird) while others that are considered by many to be masterpieces do nothing for me at all (I've yet to hear a piece of music by Brahms that elicits more than "meh" from me).  There's an emotional resonance there with some things and not others, but I'm at a loss to explain it.

So maybe I should end with a song by Enya, which is not only beautiful musically, but is sung in Loxian, a conlang created for her by her longtime lyricist, Roma Ryan.  Give this a listen and see where you'd rank it.


I don't know about you, but I think that's pretty sweet-sounding.

****************************************



Wednesday, December 7, 2022

Swearing off

I've been fascinated with words ever since I can remember.  It's no real mystery why I became a writer, and (later) got my master's degree in historical linguistics; I've lived in the magical realm of language ever since I first learned how to use it.

Languages are full of curiosities, which is my impetus for doing my popular daily bit #AskLinguisticsGuy on TikTok.  One of my most-viewed posts was a piece on "folk etymology" -- stories invented (with little or no evidence) to explain word origins -- specifically, the false claim that the word "fuck" comes from an acronym for "Fornication Under Consent of the King."

The story goes that in bygone years, when a couple got married, if the king liked the bride's appearance, he could claim the "jus primae noctis" (better known by the French name "droit du seigneur"), wherein he got to spend the first night of the marriage with the bride.  (Apparently this did occasionally happen, but wasn't especially common.)  Afterward -- and now we're in the realm of folk etymology -- the king gave his official permission for the bride and groom to go off and amuse themselves as they wished, at which point he stamped the couple's marriage documents "Fornication Under Consent of the King," meaning it was now legal for the couple to have sex with each other.

This bit, of course, is pure fiction.  The truth is that the word "fuck" probably comes from a reconstructed Proto-Germanic root *fug meaning "to strike."  There are cognates (words descended from the same ancestral root) in just about every Germanic language there is.  The acronym explanation is one hundred percent false, but you'll still see it claimed (which is why I did a TikTok video on it).

The whole subject of taboo words is pretty fascinating, and every language has 'em.  Most cultures have some levels of taboo surrounding sex and other private bodily functions, but there are some odd ones.  In Québecois French, for example, the swear word that will get your face slapped by your prudish aunt is tabernacle!, which is the emotional equivalent of the f-bomb, but comes (obviously) from religious practice, not sex.  Interestingly, in Québecois French, the English f-word has been adopted in the phrase j'ai fucké ça, which is considered pretty mild -- an English equivalent would be "I screwed up."  (The latter phrase, of course, derives from the sexual definition of "to screw," so maybe they're not so different after all.)

[Image licensed under the Creative Commons Juliescribbles, Money being put in swear jar, CC BY-SA 4.0]

Linguists are not above studying such matters.  I found this out when I was in graduate school and was assigned the brilliant 1982 paper by John McCarthy called "Prosodic Structure and Expletive Infixation," which considers the morphological rules governing where the word "fucking" can be inserted into other words -- why, for example, we say "abso-fucking-lutely" but never "ab-fucking-solutely."  (The rule has to do with stress -- you put "fucking" before the primary stressed syllable, as long as there is a secondary stressed syllable that comes somewhere before it.)  I was (and am) delighted by this paper.  It might be the only academic paper I ever read in grad school from which I simultaneously learned something and had several honest guffaws.
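The rule is concrete enough to code up.  Here's a minimal sketch, assuming the word arrives pre-syllabified with stress levels marked (0 = unstressed, 1 = primary, 2 = secondary) -- automatic syllabification is a hard problem in its own right:

# Sketch of McCarthy's expletive-infixation rule: the expletive goes
# immediately before the primary-stressed syllable, provided some earlier
# syllable carries secondary stress.  Syllabification is supplied by hand.

def infix(syllables, stresses, expletive="fucking"):
    """syllables: list of strings; stresses: parallel list of 0/1/2."""
    primary = stresses.index(1)
    if not any(s == 2 for s in stresses[:primary]):
        return None   # rule blocks insertion: no preceding secondary stress
    return "".join(syllables[:primary]) + "-" + expletive + "-" + "".join(syllables[primary:])

# ab-so-LUTE-ly has secondary stress on "ab" and primary on "lute" -> allowed
print(infix(["ab", "so", "lute", "ly"], [2, 0, 1, 0]))   # abso-fucking-lutely
# a-MAZ-ing has no secondary stress before the primary -> blocked
print(infix(["a", "maz", "ing"], [0, 1, 0]))             # None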

The reason this whole sweary subject comes up is because of a paper by Shiri Lev-Ari and Ryan McKay that came out just yesterday in the journal Psychonomic Bulletin & Review, called, "The Sound of Swearing: Are There Universal Patterns in Profanity?"  Needless to say, I also thought this paper was just fan-fucking-tastic.  And the answer is: yes, across languages, there are some significant patterns.  The authors write:

Why do swear words sound the way they do?  Swear words are often thought to have sounds that render them especially fit for purpose, facilitating the expression of emotion and attitude.  To date, however, there has been no systematic cross-linguistic investigation of phonetic patterns in profanity.  In an initial, pilot study we explored statistical regularities in the sounds of swear words across a range of typologically distant languages.  The best candidate for a cross-linguistic phonemic pattern in profanity was the absence of approximants (sonorous sounds like l, r, w and y).  In Study 1, native speakers of various languages judged foreign words less likely to be swear words if they contained an approximant.  In Study 2 we found that sanitized versions of English swear words – like darn instead of damn – contain significantly more approximants than the original swear words.  Our findings reveal that not all sounds are equally suitable for profanity, and demonstrate that sound symbolism – wherein certain sounds are intrinsically associated with certain meanings – is more pervasive than has previously been appreciated, extending beyond denoting single concepts to serving pragmatic functions.
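Study 2's comparison is simple enough to reproduce in miniature.  Here's a rough sketch that counts approximants by spelling; the real analysis used phonemic transcriptions, and apart from damn/darn (the paper's own example) the minced-oath pairs below are just familiar ones I've supplied:

# Quick check of the Study 2 pattern: sanitized swear words tend to contain
# more approximants (l, r, w, y) than the originals.  Spelling is a crude
# proxy for phonemes, and the effect is statistical, not pair-by-pair.
APPROXIMANTS = set("lrwy")

def n_approximants(word):
    return sum(c in APPROXIMANTS for c in word)

pairs = [("damn", "darn"), ("fucking", "frigging"), ("shit", "shoot")]
for swear, minced in pairs:
    print(f"{swear}: {n_approximants(swear)} -> {minced}: {n_approximants(minced)}")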

The whole thing put me in mind of my dad, who (as befits a man who spent 29 years in the Marine Corps) had a rather pungent vocabulary.  Unfortunately, my mom was a tightly-wound prude who wrinkled her nose if someone said "hell" (and who couldn't even bring herself to utter the word "sex;" the Good Lord alone knows how my sister and I were conceived).  Needless to say, this difference in attitude caused some friction between them.  My dad solved the problem of my mother's anti-profanity harangues by making up swear words, often by repurposing other words that sounded like they could be vulgar.  His favorite was "fop."  When my mom would give him a hard time for yelling "fop!" if he smashed his thumb with a hammer, he would patiently explain that it actually meant "a dandified gentleman," and after all, there was nothing wrong with yelling that.  My mom, desperate not to lose the battle, would snarl back something like, "It doesn't mean that the way you say it!", but in the end my dad's insistence that he'd said nothing inappropriate was pretty unassailable.

Interesting that "fop" fits into the Lev-Ari/McKay phonetic pattern like a hand in a glove.

Anyhow, as regular readers of Skeptophilia already know, I definitely inherited my dad's salty vocabulary.  But -- as one of my former principals pointed out -- all they are is words, and what really matters is the intent behind them.  And like any linguistic phenomenon, it's an interesting point of study, if you can get issues of prudishness well out of the damn way.

****************************************


Saturday, January 8, 2022

Streams of sound

Even though it's not the area of linguistics I concentrated on, I've always been fascinated with phonetics -- the sound repertoire of languages.  There's more variation in language phonetics than a lot of people realize.  The language with the smallest phonemic inventory seems to be Rotokas, spoken on the island of Bougainville (east of Papua New Guinea), which has only eleven distinct sounds.  The Khoisan language ǃXóõ, spoken in parts of Botswana and Namibia, probably has the largest, with around a hundred phonemes (depending on how finely you slice them), including twenty or so "click consonants" and four different tones (i.e., speaking a vowel with a rising or a falling tone can change the meaning of the word -- a characteristic it shares with Thai, Mandarin, and Vietnamese, and to a lesser extent, Swedish and Norwegian).


[Image licensed under the Creative Commons Snow white1991, Phonetic alphabet, CC BY-SA 3.0]

The result is that languages have a characteristic sound pattern that can be picked up even if you don't speak the language.  Check out this video from a few years ago, illustrating how American English sounds to a non-English-speaker:


Then, there's the song "Prisencolinensinainciusol," written by Italian singer Adriano Celentano -- which uses gibberish lyrics with American English phonetics to create a pop song that doesn't make sense -- but to an English-speaking American, sure sounds like it should:


What brings this topic up is some research out of Eötvös Loránd University in Budapest, which appeared in the journal NeuroImage this week and looked at how dogs hear human language.  We can identify the phonemic repertoire of languages we're familiar with, even if we don't speak them.  Can dogs?

Turns out, amazingly, the answer is yes.

"Some years ago I moved from Mexico to Hungary to join the Neuroethology of Communication Lab at the Department of Ethology, Eötvös Loránd University for my postdoctoral research," said lead author, neuroscientist Laura Cuaya.  "My dog, Kun-kun, came with me.  Before, I had only talked to him in Spanish.  So I was wondering whether Kun-kun noticed that people in Budapest spoke a different language, Hungarian.  We know that people, even preverbal human infants, notice the difference.  But maybe dogs do not bother.  After all, we never draw our dogs' attention to how a specific language sounds.  We designed a brain imaging study to find this out."

What they did was to use fMRI technology to look at activity in the primary and secondary auditory cortices (the main parts of the brain involved in the recognition and processing of sounds) of seventeen dogs, including Kun-kun.  First, they compared the response the dogs had to language vs. non-language -- the latter being just random strings of phonemes.  Turns out, dogs can tell the difference, giving the lie to the old claim that you can say damn near anything to a dog and as long as you say it in a pleasant tone, they won't be able to tell.

Then, they compared the response the dogs had to speech in the language they were familiar with, and speech in an unfamiliar language -- and it turns out dogs can distinguish those, as well.  So it's not the "naturalness" of the sound flow, which might have been the issue with the nonsense phonemic strings in the first experiment.  But somehow, dogs are picking up on the overall sound pattern of the language, and can tell the one they're familiar with from ones that are unfamiliar, even if the words and sentences they're hearing are ones they've never heard before.

"This study showed for the first time that a non-human brain can distinguish between two languages," said Attila Andics, senior author of the study.  "It is exciting, because it reveals that the capacity to learn about the regularities of a language is not uniquely human.  Still, we do not know whether this capacity is dogs’ specialty, or general among non-human species.  Indeed, it is possible that the brain changes from the tens of thousand years that dogs have been living with humans have made them better language listeners, but this is not necessarily the case.  Future studies will have to find this out."

So your ability to identify spoken languages based upon how they sound, even if you don't understand the words, is shared by dogs.  Makes you wonder what else they understand.  I've had the impression before that when my dog Guinness gives me his intent stare and head-tilt when I'm talking to him, it's because he is really trying to understand what I'm saying, and maybe that's not so far from the truth.  If so, I'm going to be more careful what I say around him.  He already gets away with enough mischief as it is.

*********************************

One of my favorite writers is the inimitable Mary Roach, who has blended her insatiable curiosity, her knowledge of science, and her wonderfully irreverent sense of humor into books like Stiff (about death), Bonk (about sex), Spook (about beliefs in the afterlife), and Packing for Mars (about what we'd need to prepare for if we made a long space journey and/or tried to colonize another planet).  Her most recent book, Fuzz: When Nature Breaks the Law, is another brilliant look at a feature of humanity's place in the natural world -- this time, what happens when humans and other species come into conflict.

Roach looks at how we deal with garbage-raiding bears, moose wandering the roads, voracious gulls and rats, and the potentially dangerous troops of monkeys that regularly run into humans in many places in the tropics -- and how, even with our superior brains, we often find ourselves on the losing end of the battle.

Mary Roach's style makes for wonderfully fun reading, and this is no exception.  If you're interested in our role in the natural world, love to find out more about animals, or just want a good laugh -- put Fuzz on your to-read list.  You won't be disappointed.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Saturday, July 10, 2021

F-word origin

Being a linguistics nerd, I've often wondered why the phonemic repertoire differs between different languages.  Put more simply: why do languages all sound different?

I first ran into this -- although I had to have it pointed out to me -- with French and English.  I grew up in a bilingual family (my mom's first language was French), so while I'd heard, and to a lesser extent spoken, French during my entire childhood I'd never noticed that there were sounds in one language that didn't occur in the other.  When I took my first formal French class as a ninth-grader, the teacher told us that French has two sounds that don't occur in English at all -- the vowel sound in the pronoun tu (represented in the International Phonetic Alphabet as /y/) and the one in coeur (represented as /œ/).  Also, the English r-sound (/r/) and the French r-sound (/ʁ/) aren't the same -- the English one doesn't occur in French, and vice-versa.

The International Phonetic Alphabet [image is in the Public Domain]

Not only are there different phonemes in different languages, the number of phonemes can vary tremendously.  The Hawaiian language has only thirteen different phonemes: /a/, /e/, /i/, /o/, /u/, /k/, /p/, /h/, /m/, /n/, /l/, /w/, and /ʔ/.  The last is the glottal stop -- usually represented in written Hawaiian as an apostrophe, as in the word for "circle" -- po'ai.
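Thirteen phonemes is a small enough inventory to enumerate, which allows a cute parlor trick: checking whether a word could plausibly be Hawaiian.  A rough sketch -- it treats spelling as phonemic (nearly true for Hawaiian), uses the apostrophe for the glottal stop, and ignores vowel length and syllable structure:

# Crude test of whether a word fits Hawaiian's thirteen-phoneme inventory.
# Treats the apostrophe as the glottal stop and spelling as phonemic; ignores
# long vowels and the requirement that syllables be open.
HAWAIIAN = set("aeioukphmnlw'")

def could_be_hawaiian(word):
    return all(c in HAWAIIAN for c in word.lower())

print(could_be_hawaiian("po'ai"))     # True
print(could_be_hawaiian("strength"))  # False: /s/, /t/, /r/, and /g/ aren't in the inventory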

If you're curious, the largest phonemic inventory of any human language is Taa, one of the Khoisan family of languages, spoken mainly by people in western Botswana.  Taa has 107 different phonemes, including 43 different "click consonants."  If you want to hear the most famous example of a language with click consonants, check out this recording of the incomparable South African singer Miriam Makeba singing the Xhosa folk song "Qongqothwane:"


It's a mystery why different languages have such dramatically different sound systems, but at least a piece of it may have been cleared up by a paper in Science last week that was sent my way by my buddy Andrew Butters, writer and blogger over at the wonderful Potato Chip Math.  The contention -- which sounds silly until you see the evidence -- is that the commonness of the labiodental fricative sounds, /f/ and /v/, is due to an alteration in our bites that occurred when we switched to the softer foods that came with agriculture.

I was a little dubious, but the authors make their case well.  Computer modeling of bite physiology and sound production shows that producing the /f/ and /v/ phonemes takes 29% less effort with an overbite than with an edge-to-edge bite.  Most persuasively, they found that current languages spoken by hunter-gatherer societies have only one-quarter the incidence of labiodental fricatives that other languages do.

So apparently my overbite and fondness for mashed potatoes are why I like the f-word so much.  Who knew?  As I responded to Andrew, "Wow, this is pretty fucking fascinating."

Once a language develops a sound system, it's remarkably resistant to change, probably because one of the first pieces of language a baby learns is the phonetic repertoire, and after that it's pretty well locked in for life.  In her wonderful TED Talk, linguist Patricia Kuhl describes studying the phonetics of babbling.  When babies first start to vocalize at about three months of age, they make sounds of just about every sort.  But between six and nine months, something fascinating happens -- they stop making sounds they're not hearing, and even though they're still not speaking actual words, the sound repertoire gradually becomes the one from the language they're exposed to.  One example is the English /l/ and /r/ phonemes, as compared to the Japanese liquid consonant [ɾ] (sometimes described as being halfway between an English /l/ and an English /r/).  Very young babies will vocalize all three sounds -- but by nine months, a baby hearing English will retain /l/ and /r/ and stop saying [ɾ], while a baby hearing Japanese does exactly the opposite.

If you've studied a second language that has a different phonemic set than your native language, you know that getting the sounds right is one of the hardest things to do well.  As a friend of mine put it, "My mouth just won't wrap itself around French sounds."  This is undoubtedly because we learn the phonetics of our native language so young -- and once that window has closed, adding to and rearranging our phonemic inventory becomes a real challenge.

So if you've ever wondered why your language has the sounds it does, here's at least a partial explanation.  I'll end with another video that is a must-watch, especially for Americans who are interested in regional accents.  I live in upstate New York but was raised in Louisiana and spent ten years living in Seattle, so I've thought of my own speech as relatively homogenized, but maybe I should listen to myself more carefully.

*************************************

Most people define the word culture in human terms.  Language, music, laws, religion, and so on.

There is culture among other animals, however, perhaps less complex but just as fascinating.  Monkeys teach their young how to use tools.  Songbirds learn their songs from adults; they're not born knowing them -- and much like human language, if the song isn't learned during a critical window as they grow, they never become fluent.

Whales, parrots, crows, wolves... all have traditions handed down from previous generations and taught to the young.

All, therefore, have culture.

In Becoming Wild: How Animal Cultures Raise Families, Create Beauty, and Achieve Peace, ecologist and science writer Carl Safina will give you a lens into the cultures of non-human species that will leave you breathless -- and convinced that perhaps the divide between human and non-human isn't as deep and unbridgeable as it seems.  It's a beautiful, fascinating, and preconceived-notion-challenging book.  You'll never hear a coyote, see a crow fly past, or look at your pet dog the same way again.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]


Saturday, June 27, 2020

Talking to birds

A friend of mine, knowing my interest in linguistics and birdwatching, sent me a link to a fairly mindblowing post on the blog Corvid Research a couple of days ago.  But first, a little background.

Bird communication is generally not considered to be language.  The usual definition of language is "arbitrary symbolic communication that has a characteristic and meaningful structure."  The "arbitrary" bit is sometimes misunderstood; it doesn't mean any sounds can mean anything within a language (something that's obviously not true).  In this context, it means that the sound-to-meaning correspondence is arbitrary, in that the word "dog" is no more inherently doggy than the French word chien or the Japanese word inu.  With the exception of a few onomatopoeic words, like "bang" and "swish" and "splat," the sound of the word itself has no particular connection to the concept it represents.

So bird song fails the definition of language on a number of counts.  When the Carolina wrens that nest in our back yard start their outsized calls of "TEAKETTLE TEAKETTLE TEAKETTLE" at four in the morning, those vocalizations don't mean anything more than "I'm a male bird in a territory and you need to leave" or, to any available females, "Hey, baby, how about it?"  They're not capable of representational language, in the sense of using a different set of sounds to represent discrete concepts.

The situation blurs considerably when you look at parrots, many of which can learn to mimic human speech convincingly.  An African gray parrot named Alex learned, with the help of cognitive scientist Irene Pepperberg, not only to mimic speech but to understand that it has meaning, connecting sounds to objects in a consistent fashion.  There's no indication that Alex comprehended syntactic structure -- and the jury's still out as to whether he was simply learning to behave in a particular way to get a reward, similar to training a dog to sit or stay or roll over.  (Although -- as you'll see if you watch the video -- Alex did know how to count, at least up to five, which is pretty impressive.)

The blur only gets worse when you consider corvids, the group that contains crows and ravens.  Corvids are widely considered to be among the most intelligent birds, and their ability to problem-solve is astonishing.  They do a great many higher-level behaviors, including having a sophisticated sense of play -- such as the crow that used a plastic lid as a sled on a snow-covered roof, doing it for no apparent reason other than the fact that it was fun.  But some research presented recently at the EvoLang conference has shown another facet of corvid intelligence: they can apparently distinguish between different human languages.

[Image licensed under the Creative Commons Aomorikuma(あおもりくま) , Carrion crow 20090612, CC BY-SA 3.0]

Sabrina Schalz (Middlesex University) and Ei-Ichi Izawa (Keio University) studied eight large-billed crows (Corvus macrorhynchos) that were raised in captivity in Japan, cared for by fluent Japanese speakers.  Schalz and Izawa wanted to find out if the birds were able to distinguish that language from another, so they played recordings of people speaking Japanese and people speaking Dutch.  The Japanese recordings didn't elicit much of a response; their attitude seemed to be, "Meh, I've heard that before."  But the Dutch recordings were a different story.  The crows gathered around the speaker, and sat perfectly still, their attention fixed on the sounds they were hearing.

It was clear that they were able to recognize the sound of Dutch as being foreign!

The cadence of a language, and how one differs from another, is a fascinating topic.  It has not only to do with the phonemic repertoire (the exact list of sounds that occur in the language) but pacing, stress, and tone.  The latter is why to non-Mandarin speakers, Mandarin sounds "sing-song" -- the rising and falling pitches actually change the meanings of the words being spoken.  Put another way, the same syllable spoken with a rising tone means something entirely different from the same syllable spoken with a falling tone, something that is not true in non-tonal languages like English (with minor exceptions such as the rising tone at the end of a sentence indicating a question).
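The standard illustration is the Mandarin syllable ma, which is a different word under each of the four tones:

# The Mandarin syllable "ma" under the four tones -- four unrelated words.
# (The glosses are the standard textbook ones.)
ma_by_tone = {
    "mā (tone 1, high level)":     "mother",
    "má (tone 2, rising)":         "hemp",
    "mǎ (tone 3, falling-rising)": "horse",
    "mà (tone 4, falling)":        "to scold",
}
for syllable, gloss in ma_by_tone.items():
    print(f"{syllable}: {gloss}")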

Besides the different sounds in the Japanese language as compared to Dutch, the two languages also differ greatly in how words are stressed.  Japanese syllables can differ in length (and in fact, that carries meaning, much as tone does in Mandarin), but they're all stressed about equally.  This is why the way most Americans pronounce the city name Hiroshima is distinctly non-Japanese -- usually either hi-RO-shi-ma or hi-ro-SHI-ma.  The Japanese pronunciation stresses the syllables evenly.  (Try saying the word out loud with no change in syllable stress, and you'll hear the difference.)

So what were the crows picking up on?  I doubt seriously that they were thinking, "Okay, I don't know that word," but was it the sound system differences, the stress patterns, or something else?  Probably impossible to know, although it would be interesting to try to tease that out -- having someone speak Dutch with artificially even stress, or Japanese with non-Japanese syllable stress, and seeing what the reaction is, would be an interesting next step.  If it's the sound repertoire, then two mutually unintelligible languages with similar phonetics, one familiar and one novel -- say, Dutch and German -- should elicit identical reactions.

Whatever's going on here, it's fascinating, and another indication of how intelligent these creatures are.  And it does make me wonder if I should be a little more careful when I'm talking outdoors -- who knows?  Maybe the crows are taking notes of what they overhear, in hopes of eventual world domination.

**************************************

I know I sometimes wax rhapsodic about books that really are the province only of true science geeks like myself, and fling around phrases like "a must-read" perhaps a little more liberally than I should.  But this week's Skeptophilia book recommendation of the week is really a must-read.

No, I mean it this time.

Kathryn Schulz's book Being Wrong: Adventures in the Margin of Error is something that everyone should read, because it points out the remarkable frailty of the human mind.  As wonderful as it is, we all (as Schulz puts it) "walk around in a comfortable little bubble of feeling like we're absolutely right about everything."  We accept that we're fallible, in a theoretical sense; yeah, we all make mistakes, blah blah blah.  But right now, right here, try to think of one thing you might conceivably be wrong about.

Not as easy as it sounds.

She shocks the reader pretty much from the first chapter.  "What does it feel like to be wrong?" she asks.  Most of us would answer that it can be humiliating, horrifying, frightening, funny, revelatory, infuriating.  But she points out that these are actually answers to a different question: "what does it feel like to find out you're wrong?"

Actually, she tells us, being wrong doesn't feel like anything.  It feels exactly like being right.

Reading Schulz's book makes the reader profoundly aware of our own fallibility -- but it is far from a pessimistic book.  Error, Schulz says, is the window to discovery and the source of creativity.  It is only when we deny our capacity for error that the trouble starts -- when someone in power decides that (s)he is infallible.

Then we have big, big problems.

So right now, get this book.  I promise I won't say the same thing next week about some arcane tome describing the feeding habits of sea slugs.  You need to read Being Wrong.

Everyone does.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Tuesday, February 19, 2019

The power of phonemes

Language is defined as arbitrary symbolic communication.

"Symbolic" because spoken sounds or written character strings stand for concepts, actions, or objects; "arbitrary" because those sounds or characters have no logical connection to what they represent.  The word "dog" is no more inherently doggy than the French word (chien) or Swahili word (mbwa).  The exceptions, of course, are onomatopoeic words like "bang," "pop," "splat," and so on.

That's the simple version, anyhow.  Reality is always a lot messier.  There are words that are sort-of-onomatopoeic; "scream" sounds a lot screamier than "yell" does, even though they mean approximately the same thing.  And it's the intersection between sound and meaning that is the subject of the research of cognitive psychologist Arthur Glenberg of Arizona State University.

In an article in The Conversation, Glenberg provides some interesting evidence that even in ordinary words, the sound/meaning correspondence may not be as arbitrary as it seems at first.  It's been known for a while that hearing spoken language elicits responses from the parts of the brain that would be activated if what was heard were reality; in Glenberg's example, hearing the sentence "The lovers held hands as they walked along the moonlit tropical beach" causes a response not only in the emotional centers of the brain, but in the visual centers and (most strikingly) in the part of the motor center that coordinates walking.  When hearing language, then, our brains on some level become what we hear.

Glenberg wondered if it might work the other way -- if altering the sensorimotor systems might affect how we interpret language.  Turns out it does.  Working with David Havas, Karol Gutowski, Mark Lucarelli, and Richard Davidson of the University of Wisconsin-Madison, Glenberg showed that individuals who had received Botox injections into their foreheads (which temporarily paralyzes the muscles used in frowning) were less able to perceive the emotional content of written language that would have ordinarily elicited a frown of anger.

Then, there's the kiki-booba experiment, a version of which was done all the way back in 1929 by Wolfgang Köhler (his original used the invented words "takete" and "baluba"; the kiki/booba version came later), which showed that at least in some cases, the sound/meaning correspondence isn't arbitrary at all.  Speakers of a variety of languages were shown the following diagram:

They were told that in a certain obscure language, one of these shapes is called "kiki" and the other is called "booba," and then were asked to guess which is which.  Just about everyone -- regardless of the language they speak -- thinks the left-hand one is "kiki" and the right-hand one is "booba."  The "sharpness" of "kiki" seems to fit more naturally with a spiky shape, and the "smoothness" of "booba" with a rounded shape.

So Glenberg decided to go a step further.  Working with Michael McBeath and Christine S. P. Yu, Glenberg gave native English speakers a list of ninety word pairs where the only difference was that one had the phoneme /i/ and the other the phoneme /ʌ/, such as gleam/glum, seek/suck, seen/sun, and so on.  They were then asked which of each pair they thought was more positive.  Participants picked the /i/ word two-thirds of the time -- far more than you'd expect if the relationship between sound and meaning were truly arbitrary.
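To get a sense of how far two-thirds is from a coin flip, here's a quick back-of-the-envelope check; the trial count is hypothetical, since I'm not reproducing the paper's exact numbers:

# How surprising is a two-thirds preference if the sound/meaning link were
# truly arbitrary (i.e., 50/50)?  The sample size is made up for illustration.
from scipy.stats import binomtest

n_trials = 900   # say, 10 participants x 90 word pairs (hypothetical)
k_i_wins = 600   # the /i/ word judged more positive two-thirds of the time

result = binomtest(k_i_wins, n_trials, p=0.5, alternative="greater")
print(f"probability of >= {k_i_wins}/{n_trials} under chance: {result.pvalue:.1e}")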

"We propose that this relation arose because saying 'eee' activates the same muscles and neural systems as used when smiling – or saying 'cheese!'" Glenberg writes.  "In fact, mechanically inducing a smile – as by holding a pencil in your teeth without using your lips – lightens your mood.  Our new research shows that saying words that use the smile muscles can have a similar effect.

"We tested this idea by having people chew gum while judging the words.  Chewing gum blocks the systematic activation of the smile muscles.  Sure enough, while chewing gum, the judged difference between the 'eee' and 'uh' words was only half as strong."

Glenberg then speculates about the effect on our outlook when we hear hateful speech -- if the constant barrage of fear-talk we're currently hearing from politicians actually changes the way we think whether or not we believe what we're hearing.  "The language that you hear gives you a vocabulary for discussing the world, and that vocabulary, by producing simulations, gives you habits of mind," he writes.  "Just as reading a scary book can make you afraid to go in the ocean because you simulate (exceedingly rare) shark attacks, encountering language about other groups of people (and their exceedingly rare criminal behavior) can lead to a skewed view of reality...  Because simulation creates a sense of being in a situation, it motivates the same actions as the situation itself.  Simulating fear and anger literally makes you fearful and angry and promotes aggression.  Simulating compassion and empathy literally makes you act kindly.  We all have the obligation to think critically and to speak words that become humane actions."

To which I can only say: amen.  I've been actively trying to stay away from social media lately, especially Twitter -- considering the current governmental shitstorm in the United States, Twitter has become a non-stop parade of vitriol from both sides.  I know it's toxic to my own mood.  It's hard to break the addiction, though.  I keep checking back, hoping that there'll be some positive development, which (thus far) there hasn't been.  The result is that the ugliness saps my energy, makes everything around me look gray and hopeless.

All of it brings home a quote by Ken Keyes, which seems like a good place to end: "A loving person lives in a loving world.  A hostile person lives in a hostile world.  Everyone you meet is your mirror."  This seems to be exactly true -- all the way down to the words we choose to speak.

***************************

You can't get on social media without running into those "What Star Trek character are you?" and "Click on the color you like best and find out about your personality!" tests, which purport to give you insight into yourself and your unconscious or subconscious traits.  While few of us look at these as any more than the games they are, there's one personality test -- the Myers-Briggs Type Indicator, which boils you down to where you fall on four scales -- extrovert/introvert, sensing/intuition, thinking/feeling, and judging/perceiving -- that a great many people, including a lot of counselors and psychologists, take seriously.

In The Personality Brokers, author Merve Emre looks not only at the test but how it originated.  It's a fascinating and twisty story of marketing, competing interests, praise, and scathing criticism that led to the mother/daughter team of Katharine Briggs and Isabel Myers developing what is now the most familiar personality inventory in the world.

Emre doesn't shy away from the criticisms, but she is fair and even-handed in her approach.  The Personality Brokers is a fantastic read, especially for anyone interested in psychology, the brain, and the complexity of the human personality.







***********************************

The Skeptophilia book recommendation of the week is a must-read for anyone interested in languages -- The Last Speakers by linguist K. David Harrison.  Harrison set himself the task of visiting places where endangered languages are spoken, such as small communities in Siberia, the Outback of Australia, and Central America (where he met a pair of elderly gentlemen who are the last two speakers of an indigenous language -- but they have hated each other for years and neither will say a word to the other).

It's a fascinating, and often elegiac, tribute to the world's linguistic diversity, and tells us a lot about how our mental representation of the world is connected to the language we speak.  Brilliant reading from start to finish.




Saturday, April 28, 2018

The beat goes on

I've been a language geek for a very long time, which at least partly explains how a guy who has a bachelor's degree in physics and teaches high school biology has a master's degree in linguistics.  There's something about the way communication works that is simply fascinating to me.

There's a tremendous diversity in how languages work.  On the basic level, the phonetics of languages can differ greatly; each language has a unique sound structure.  Some are really different, at least to my English-speaking brain; consider Xhosa, the language spoken by over ten million people in South Africa, which has three different consonants that are clicks (usually written "c" for the dental click, "x" for the lateral click, and "q" for the palatal click).  If you want to hear Xhosa sung, check out this video of the legendary Miriam Makeba singing the song "Qongqothwane:"


Another complication is tonality -- for many languages, the same syllable spoken with a rising vs. a falling tone actually has a completely different meaning.  (English has only one consistent tonal feature -- a rise in pitch at the end of a sentence can denote a question -- but that pitch change doesn't alter the meanings of individual words, as tone does in many languages.)


It can be odder than that, though.  There are whistled languages, such as Silbo in the Canary Islands.  Many examples exist -- France, Greece, Turkey, India, Nepal, and Mexico all have groups who communicate by whistling (although they also have spoken language; no group I've ever heard of communicates exclusively by whistles).  Along the same lines -- and it was recent research on this topic that spurred this post -- are drummed languages.

Linguist Frank Seifart was researching endangered languages in Colombia, and happened to be in a village where the Bora language is spoken while the chief was away.  The chief was sent for -- by someone drumming out a pattern that meant, "A stranger has arrived.  Come home."

And it's not just a code, like Morse code; the drumbeat patterns actually mimic the changes in timbre, pitch, and rhythm of the speech the drummer is trying to emulate.  The paper, which appeared in the journal Royal Society Open Science last week, was titled, "Reducing Language to Rhythm: Amazonian Bora Drummed Language Exploits Speech Rhythm for Long-Distance Communication," and begins as follows:
Many drum communication systems around the world transmit information by emulating tonal and rhythmic patterns of spoken languages in sequences of drumbeats.  Their rhythmic characteristics, in particular, have not been systematically studied so far, although understanding them represents a rare occasion for providing an original insight into the basic units of speech rhythm as selected by natural speech practices directly based on beats.  Here, we analyse a corpus of Bora drum communication from the northwest Amazon, which is nowadays endangered with extinction.  We show that four rhythmic units are encoded in the length of pauses between beats.  We argue that these units correspond to vowel-to-vowel intervals with different numbers of consonants and vowel lengths.  By contrast, aligning beats with syllables, mora or only vowel length yields inconsistent results.  Moreover, we also show that Bora drummed messages conventionally select rhythmically distinct markers to further distinguish words.  The two phonological tones represented in drummed speech encode only few lexical contrasts.  Rhythm thus appears to crucially contribute to the intelligibility of drummed Bora.  Our study provides novel evidence for the role of rhythmic structures composed of vowel-to-vowel intervals in the complex puzzle concerning the redundancy and distinctiveness of acoustic features embedded in speech.
An amusing part of the research is that in the Bora drummed language, each message is followed by a pattern that means, "Now, don't say that I am a liar."  Seifart says that the gist is much like a parent yelling at a child, "Don't tell me you didn't hear me!"
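As a toy model of the encoding the authors describe -- pause lengths between beats standing in for vowel-to-vowel intervals -- here's a sketch.  The four pause durations are invented placeholders; only the four-way classification scheme comes from the paper:

# Toy model of Bora drummed rhythm: each vowel-to-vowel interval maps to one
# of four pause lengths, depending on intervening consonants and vowel length.
# The millisecond values are invented, not the paper's measurements.
import re

PAUSE_MS = {"V.V": 250, "V.CV": 330, "V.CCV": 440, "Vlong.V": 560}   # hypothetical

def beat_pauses(word):
    """a/e/i/o/u are vowels; a doubled vowel counts as long."""
    chunks = re.findall(r"[aeiou]+|[^aeiou]+", word)
    vowel_idx = [i for i, c in enumerate(chunks) if c[0] in "aeiou"]
    pauses = []
    for a, b in zip(vowel_idx, vowel_idx[1:]):
        n_cons = sum(len(chunks[j]) for j in range(a + 1, b))
        if len(chunks[a]) > 1:
            key = "Vlong.V"
        elif n_cons == 0:
            key = "V.V"
        elif n_cons == 1:
            key = "V.CV"
        else:
            key = "V.CCV"
        pauses.append(PAUSE_MS[key])
    return pauses

print(beat_pauses("bakuune"))   # pause lengths (ms) between successive beats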

The whole thing is fascinating -- when communicating over distances long enough that our voices won't reach, people have invented new ways to send messages -- and those new ways incorporate many of the phonetic, tonal, and syntactic frameworks of the original language.

The biologist in me, however, is curious about how this is being processed in the brain.  Does drummed speech get interpreted in the same place in the brain where spoken language is?  There's been a parallel study on whistled languages: Onur Güntürkün, a biopsychologist at the Institute of Cognitive Neuroscience in Bochum, Germany, studied how whistled languages are processed in the brain, and found an intriguing difference in brain activity while listening to whistled versus spoken language.  Since we process melodic tones primarily in the right side of the cerebrum and language primarily in the left, Güntürkün suspected that whistled languages would activate both sides equally -- and he was right.

As far as drummed languages, Güntürkün was especially interested in how the content of messages could be conveyed by milliseconds-long variations in the rhythm pattern.  "I’m amazed that these tiny milliseconds are doing the job," he said, adding that the next step is an analysis of how the two hemispheres of the brain process drummed speech, specifically timing cues.

All of which brings home again not only the amazing processing power of the brain, but the drive in humans to communicate.  It emphasizes once again the importance of preserving these endangered languages -- not only for reasons of protecting people's cultural identities, but for what it tells us about the neurological underpinning of our own minds.

******************************

This week's featured book on Skeptophilia should be in every good skeptic's library: Michael Shermer's Why People Believe Weird Things.  It's a no-holds-barred assault against goofy thinking, taking on such counterfactual beliefs as psychic phenomena, creationism, past-life regression, and Holocaust denial.  Shermer, the founder of Skeptic magazine, is a true crusader, and his book is a must-read.  You can buy it at the link below!