Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label language learning.

Saturday, December 7, 2024

Talking in your sleep

A little over a year ago, I decided to do something I've always wanted to do -- learn Japanese.

I've had a fascination with Japan since I was a kid.  My dad lived there for a while during the 1950s, and while he was there collected Japanese art and old vinyl records of Japanese folk and pop music, so I grew up surrounded by reminders of the culture.  As a result, I've always wanted to learn more about the country and its people and history, and -- one day, perhaps -- visit.

So in September of 2023 I signed up for Duolingo, and began to inch my way through learning the language.

[Image is in the Public Domain]

It's a challenge, to say the least.  Japanese usually shows up on lists of "the five most difficult languages to learn."  Not only are there the three different scripts you have to master in order to be literate, the grammatical structure is really different from English.  The trickiest part, at least thus far, is managing particles -- little words that follow nouns and indicate how they're being used in the sentence.  They're a bit like English prepositions, but there's a subtlety to them that is hard to grok.  Here's a simple example:

Watashi wa gozen juuji ni toshokan de ane ni aimasu.

(I) (particle indicating the subject of the sentence) (A.M.) (ten o'clock) (particle indicating movement or time) (library) (particle indicating where something is happening) (my sister) (particle marking the person being met) (am meeting with) = "I am meeting my sister at ten A.M. at the library."

Get the particles wrong, and the sentence ends up somewhere between grammatically incorrect and completely incomprehensible.
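If it helps to see what the particles are doing, here's a toy sketch in Python -- purely illustrative, just a lookup table for the three particles in the example above, nothing remotely like a real Japanese parser:

```python
# Toy illustration only: a lookup of the particles from the example sentence
# above, not a real Japanese parser.  The role descriptions follow the gloss
# in the post; "ni" gets a broad label because it marks times, destinations,
# and the person you meet.
PARTICLES = {
    "wa": "topic/subject marker",
    "ni": "time, destination, or person-met marker",
    "de": "location-of-action marker",
}

def annotate(tokens):
    """Tag each token that happens to be a particle with its rough function."""
    return [(word, PARTICLES.get(word, "content word")) for word in tokens]

sentence = "watashi wa gozen juuji ni toshokan de ane ni aimasu".split()
for word, role in annotate(sentence):
    print(f"{word:10s} {role}")
```

The lookup table is the easy part, of course; the hard part is the subtlety of knowing which particle a given sentence actually calls for.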

So I'm coming along.  Slowly.  I have a reasonably good affinity for languages -- I grew up bilingual (English/French) and have a master's degree in linguistics -- but the hardest part for me is simply remembering the vocabulary.  The grammar patterns take some getting used to, but once I see how they work, they tend to stick.  The vocabulary, though?  Over and over again I'll run into a word, and I'm certain I've seen it before and at one point knew what it meant, and it will not come back to mind.  So I look it up...

... and then go, "Oh, of course.  Duh.  I knew that."

But according to a study this week out of the University of South Australia, apparently what I'm doing wrong is simple: I need more sleep.

Researchers in the Department of Neuroscience took 35 native English speakers and taught them "Mini-Pinyin" -- an invented pseudolanguage that has Mandarin Chinese vocabulary but English sentence structure.  (None of them had prior experience with Mandarin.)  They were sorted into two groups; the first learned the language in the morning and returned twelve hours later to be tested, and the second learned it in the evening, slept overnight in the lab, and were tested the following morning.

The second group did dramatically better than the first.  Significantly, during sleep their brains showed higher-than-average levels of the brain wave patterns called slow oscillations and sleep spindles, which are thought to be connected with memory consolidation -- uploading short-term memories from the hippocampus into long-term storage in the cerebral cortex.  Your brain, in effect, talks in its sleep, routing information from one location to another.

"This coupling likely reflects the transfer of learned information from the hippocampus to the cortex, enhancing long-term memory storage," said Zachariah Cross, who co-authored the study.  "Post-sleep neural activity showed unique patterns of theta oscillations associated with cognitive control and memory consolidation, suggesting a strong link between sleep-induced brainwave co-ordination and learning outcomes."

So if you're taking a language class, or if -- like me -- you're just learning another language for your own entertainment, you're likely to have more success in retention if you study in the evening, and get a good night's sleep before you're called upon to use what you've learned.

Of course, many of us could use more sleep for a variety of other reasons.  Insomnia is a bear, and poor sleep is linked with a whole host of health-related woes.  But a nice benefit of dedicating yourself to getting better sleep duration and quality is an improvement in memory.

And hopefully for me, better scores on my Duolingo lessons.

****************************************

Monday, July 8, 2024

Beginner's mind

Last September, I started learning Japanese through Duolingo.

[Image licensed under the Creative Commons Grantuking from Cerrione, Italy, Flag of Japan (1), CC BY 2.0]

My master's degree is in historical linguistics, so I'm at least a little better than the average bear when it comes to languages, but still -- my graduate research focused entirely on Indo-European languages.  (More specifically, the effects of the Viking invasions on Old English and the Celtic languages.)  Besides the Scandinavian languages and the ones found in the British Isles, I have a decent, if rudimentary, grounding in Greek and Latin, but still -- until last September, anything off of the Indo-European family tree was pretty well outside my wheelhouse.

The result is that there are features of Japanese that I'm struggling with, because they're so different from any other language I've studied.  Languages like Old English, Old Norse, Gaelic, Greek, and Latin are all inflected languages -- nouns change form depending on how they're being used in a sentence.  A simple example from Latin: in the two sentences "Canis felem momordit" ("The dog bit the cat") and "Felis canem momordit" ("The cat bit the dog"), you know who bit whom not by the order of the words, but by the endings.  The biter ends in -s, the bitee ends in -m.  The sentence would still be intelligible (albeit a little strange-sounding) if you rearranged the words.
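To make that concrete, here's a toy sketch in Python of how role-by-ending works.  It only knows the nominative -s and accusative -m endings from this one example, so it's an illustration of the principle, nothing more:

```python
# A toy sketch of role-by-ending, not a real Latin parser: it only recognizes
# the nominative -s and accusative -m endings from the example sentences.
def who_bit_whom(sentence):
    subject = obj = None
    for word in sentence.lower().rstrip(".").split():
        if word == "momordit":          # the verb "bit"
            continue
        if word.endswith("s"):          # nominative ending: the biter
            subject = word
        elif word.endswith("m"):        # accusative ending: the bitee
            obj = word
    return subject, obj

# Word order doesn't change who bit whom:
print(who_bit_whom("Canis felem momordit"))   # ('canis', 'felem')
print(who_bit_whom("Felem canis momordit"))   # ('canis', 'felem')
```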

Not so in Japanese.  In Japanese, not only does everything have to be in exactly the right order, just about every noun has to be followed by the correct particle, a short, more-or-less untranslatable word that tells you what the function of the previous word is.  They act a little like case endings do in inflected languages, and a little like prepositions in English, but with some subtleties that are different from either.  For example, here's a sentence in Japanese:

Tanaka san wa, sono sushiya de hirugohan o tabemashou ka?

Mr. Tanaka [honorific suffix indicating respect, always used when addressing another person] [particle indicating who you're talking to or the subject of the sentence], that sushi shop [particle indicating where the action takes place] lunch [particle indicating the object of the sentence] should we eat [particle indicating that what you just said was a question]? = "Mr. Tanaka, would you like to eat lunch at that sushi shop?"

Woe betide if you forget the particle or use the wrong one, or put things out of order.  Damn near every time I miss something on Duolingo and get that awful "clunk" noise that tells you that you screwed up, it's because I made a particle-related mistake.

And don't even get me started about the three different writing systems you have to learn.

This is the first time in a while I've been in the position of starting from absolute ground zero with something.  I guess I do have a bit of a leg up from having a background in other languages, but it's not really that much.  Being a rank beginner is humbling -- if you're going to get anywhere, you have to be willing to let yourself make stupid mistakes (sometimes over and over and over), laugh about it, and keep going.  I'm not really so good at that -- not only do I take myself way too damn seriously most of the time, I have that unpleasant combination of being (1) ridiculously self-critical and (2) highly competitive.  If you're familiar with Duolingo, you undoubtedly know about the whole XP (experience points) and "leagues" thing -- when you complete a lesson you earn XP (as long as you don't lose points in the lesson because you fucked up the particles again), and at the end of the week, you are ranked in XP against other learners, and depending on your score, you can move up into a new "league."

Or get "demoted."  Heaven forbid.  Given my personality, my attitude is "death before demotion."  As my wife pointed out, nothing happens if I get demoted -- it's not like the app reaches into my cerebrum and deletes what I've learned, or anything.  

She's right of course, but still.

I'll be damned if I'm gonna let myself get demoted.

So last week I reached "Diamond League," which is the top tier.  Yay me, right?  Only now, there's nowhere left to go.  But I have to keep hammering at it, because if I don't I'll get dropped back into Obsidian League, and screw that sideways.

On the other hand, I keep at it because I also want to learn Japanese, right?  Of course right.

In Zen Buddhism, there's a concept called shoshin (初心), usually translated as "beginner's mind."  It means approaching every endeavor as if you were just seeing it for the first time, with excitement, anticipation -- and no preconceived notions of how it should go.  This is a hard lesson for me, harder even than remembering kanji.  I've had to get used to taking it slowly, realizing that I'm not going to learn a difficult and unfamiliar language overnight, and to come at it from a standpoint of curiosity and enjoyment.

It's not a competition, however determined I am to stay in the "Diamond League."  The process and the knowledge and the achievement should be the point, not a focus on some arbitrary standard of where I think I should be.

And some day, I'd like to visit the lovely country of Japan, and (maybe?) be able to converse a little in their language.  

[Image licensed under the Creative Commons Keihin Nike, Bunkyou Koishikawa Botanical Japanese Garden 1 (1), CC BY-SA 3.0]

When that day comes, I suspect if I can approach the whole thing with beginner's mind, I'll get a lot more out of the experience.  Until that time -- I could probably think of a few other aspects of my life that this principle could be applied to, as well.

****************************************



Monday, March 18, 2024

Memory boost

About two months ago I signed up with Duolingo to study Japanese.

I've been fascinated with Japan and the Japanese culture pretty much all my life, but I'm a total novice with the language, so I started out from "complete beginner" status.  I'm doing okay so far, although the fact that it's got three writing systems is a challenge, to put it mildly.  Like most Japanese programs, it's beginning with the hiragana system -- a syllabic script that allows you to work out the pronunciation of words -- but I've already seen a bit of katakana (used primarily for words borrowed from other languages) and even a couple of kanji (the ideographic script, where a character represents an entire word or concept).

[Image licensed under the Creative Commons 663highland, 140405 Tsu Castle Tsu MIe pref Japan01s, CC BY-SA 3.0]

While Duolingo focuses on getting you listening to spoken Japanese right away, my linguistics training has me already looking for patterns -- such as the fact that wa after a noun seems to act as a subject marker, and ka at the end of a sentence turns it into a question.  I'm still perplexed by some of the pronunciation patterns -- why, for example, vowel sounds sometimes don't get pronounced.  The first case of this I noticed is that the family name of the brilliant author Akutagawa Ryūnosuke is pronounced /ak'tagawa/ -- the /u/ in the second syllable virtually disappears.  I hear it happening fairly commonly in spoken Japanese, but I haven't been able to deduce what the pattern is.  (If there is one.  If there's one thing my linguistics studies have taught me, it's that all languages have quirks.  Try explaining to someone new to English why, for instance, the -ough combination in cough, rough, through, bough, and thorough is pronounced differently in each word.)

Still and all, I'm coming along.  I've learned some useful phrases like "Sushi and water, please" (Sushi to mizu, kudasai) and "Excuse me, where is the train station?" (Sumimasen, eki wa doko desu ka?), as well as less useful ones like "Naomi Yamaguchi is cute" (Yamaguchi Naomi-san wa kawaii desu), which is only critical to know if you have a cute friend who happens to be named Naomi Yamaguchi.

The memorization, however, is often taxing to my 63-year-old brain.  Good for it, I have no doubt -- a recent study found that being bi- or multi-lingual can delay the onset of dementia by four years or more -- but it definitely is a challenge.  I go through my hiragana flash cards at least once a day, and have copious notes for what words mean and for any grammatical oddness I happen to notice.  Just the sheer amount of memorization, though, is kind of daunting.

Maybe what I should do is find a way to change the context in which I have to remember particular words, phrases, or characters.  That seems to be the upshot of a paper I ran into a couple of days ago in Proceedings of the National Academy of Sciences, from a group at Temple University and the University of Pittsburgh, about how to improve retention.

I'm sure all of us have experienced the effects of cramming for a test -- studying like hell the night before, and then you do okay on the test but a week later barely remember any of it.  This practice gets two things wrong: not only does it stuff all the studying into a single session, it does it all the same way.

What this study showed was that two factors significantly improved long-term memory.  One was spacing out study sessions -- doing shorter sessions more often definitely helped.  I'm already approaching Duolingo this way, usually doing a lesson or two over my morning coffee, then hitting it again for a few more after dinner.  But the other interesting variable they looked at was context: test subjects' memories improved substantially when the context was changed -- when, for example, you're trying to remember as much as you can of what a specific person is wearing, but instead of being shown the same photograph over and over, you're given photographs of the person wearing the same clothes but in a different setting each time.

"We were able to ask how memory is impacted both by what is being learned -- whether that is an exact repetition or instead, contains variations or changes -- as well as when it is learned over repeated study opportunities," said Emily Cowan, lead author of the study.  "In other words... we could examine how having material that more closely resembles our experiences of repetition in the real world -- where some aspects stay the same but others differ -- impacts memory if you are exposed to that information in quick succession versus over longer intervals, from seconds to minutes, or hours to days."

I can say that this is one of the things Duolingo does right.  Words are repeated, but in different combinations and in different ways -- spoken, spelled out using the English transliteration, or in hiragana only.  Rather than always presenting the same word in the same context, it strikes a balance between the repetition we all need when learning a new language and pushing your brain to generalize to slightly different usages or contexts.
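If you wanted to play with the "short sessions, spaced out, with the context shuffled" idea yourself, a bare-bones scheduler is easy to sketch.  This is just a toy Leitner-style illustration -- I have no idea what Duolingo's actual algorithm looks like, and the vocabulary, intervals, and contexts here are made up:

```python
import random
from collections import defaultdict

# A bare-bones Leitner-style spaced-repetition sketch.  Cards the learner
# misses come back sooner; cards the learner gets right wait longer; and
# each card is shown in a rotating context (audio, romaji, hiragana).
INTERVALS = [1, 2, 4, 8]          # days until the next review, by box number

cards = {"toshokan": "library", "eki": "train station", "mizu": "water"}
box = defaultdict(int)            # every card starts in box 0
contexts = ["spoken audio", "romaji", "hiragana"]

def review(day, answered_correctly):
    """Run one short session: show each due card in a varying context."""
    for word in cards:
        if day % INTERVALS[box[word]] == 0:                  # card is due today
            context = random.choice(contexts)                # vary the context
            print(f"day {day}: show '{word}' as {context}")
            if answered_correctly(word):
                box[word] = min(box[word] + 1, len(INTERVALS) - 1)
            else:
                box[word] = 0                                # missed: start over

review(day=4, answered_correctly=lambda w: w != "eki")
```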

So all things considered, Duolingo had it figured out even before the latest research came out.  I'm hoping it pays off, because my son and I would like to take a trip to Japan at some point and be able to get along, even if we don't meet anyone cute named Naomi Yamaguchi.  But I should wind this up, so for now I'll say ja ne, mata ashita (goodbye, see you tomorrow).

****************************************



Tuesday, November 2, 2021

Canine gap analysis

One of the reasons that it's (generally) much easier to learn to read a second language than it is to understand it in speech has to do not with the words, but with the spaces in between them.

Students learning to understand spoken conversation in another language have the common complaint that "they talk so fast."  They don't, really, or at least no faster than the speakers of your native language.  But unfamiliarity with the lexicon of the new language makes it hard to figure out where the gaps are between adjacent words.  Unless you concentrate (and sometimes even if you do), it sounds like one continuous stream of random phonemes.

As an aside, sometimes I have the same problem with English spoken with a different accent than the one I grew up with.  The character of Yaz in the last three seasons of Doctor Who is from Yorkshire, and her accent -- especially when she's agitated and speaking quickly -- sometimes leaves me thinking, "Okay, what did she just say?"  (That's why I usually watch with the subtitles on.)  This isn't unique to accents from the UK, of course; it's why a lot of non-southerners find southern accents difficult to parse.  Say to someone from Louisiana, "Jeetyet?" and they'll clearly hear "Did you eat yet?"; and one of the most common greetings is "howzyamommandem?"

I'd never really considered how important the spaces between the words are until I ran into some research last week in Current Biology in a paper entitled, "Dogs Learn About Word Boundaries as Human Infants Do," that showed dogs -- perhaps uniquely amongst non-human animals -- are able to use some pretty complex mental calculations to figure out where the gaps are in "Do you want to play ball?"  Say that phrase out loud, especially in an excited tone, and you'll notice that in the actual sounds there are minuscule gaps, or none at all, so what they're listening for can't be little bits of silence.

By looking at brain wave activity in pre-verbal infants presented with actual speech, speech using unfamiliar/rare words, and gibberish, scientists found that neural activity spiked at syllables that almost always (in the infant's experience) occur together.  An example is the phrase, "Do you want breakfast now?"  The syllables /brek/ and /fǝst/ aren't used much outside of the word "breakfast," so apparently the brain is doing some complex statistical calculations to identify that as a discrete word and not adjoined to the words coming before or afterward.
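The statistical trick itself is simple enough to sketch in a few lines of Python.  This is only an illustration of transitional probabilities on a toy, pre-syllabified "speech stream" I invented -- real speech isn't helpfully chopped into syllables for you, and this isn't the researchers' actual analysis:

```python
from collections import Counter

# Toy version of statistical word segmentation: estimate how likely each
# syllable is to be followed by the next one, and guess word boundaries
# where that transitional probability drops.
stream = "do you want break fast now do you want to play ball".split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pair_counts[(a, b)] / first_counts[a]

for a, b in zip(stream, stream[1:]):
    p = transitional_probability(a, b)
    note = "syllables cling together" if p == 1.0 else "candidate word boundary"
    print(f"{a} -> {b}: {p:.2f}  ({note})")
```

Even on this tiny stream, "break" and "fast" cling together while "want" is followed by different things, so a boundary shows up after it -- the same kind of calculation infant (and, apparently, canine) brains are doing.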

What the current research finds is that dogs are doing precisely the same thing when they listen to human language.

The authors write:

To learn words, humans extract statistical regularities from speech.  Multiple species use statistical learning also to process speech, but the neural underpinnings of speech segmentation in non-humans remain largely unknown. Here, we investigated computational and neural markers of speech segmentation in dogs, a phylogenetically distant mammal that efficiently navigates humans’ social and linguistic environment.  Using electroencephalography (EEG), we compared event-related responses (ERPs) for artificial words previously presented in a continuous speech stream with different distributional statistics...  Using fMRI, we searched for brain regions sensitive to statistical regularities in speech.  Structured speech elicited lower activity in the basal ganglia, a region involved in sequence learning, and repetition enhancement in the auditory cortex.  Speech segmentation in dogs, similar to that of humans, involves complex computations, engaging both domain-general and modality-specific brain areas.
I know that when I talk to Guinness -- not using the short, clipped words or phrases recommended by dog trainers, but full complex sentences -- he has this incredibly intent, alert expression, and I get the sense that he's really trying to understand what I'm saying.  I've heard people say that outside of a few simple commands like "sit" or "stay," dogs respond only to tone of voice, not the actual words spoken.

Apparently that isn't true.


So I suppose when I say "whoozagoodboy?", he actually knows it's him.

"Keeping track of patterns is not unique to humans: many animals learn from such regularities in the surrounding world, which is called statistical learning," said Marianna Boros of Eötvös Loránd University, who co-authored the study, in an interview with Vinkmag.  "What makes speech special is its efficient processing requires complex computations.  To learn new words from continuous speech, it is not enough to count how often certain syllables occur together.  It is much more efficient to calculate the probability of those syllables occurring together.  This is exactly how humans, even eight-month-old infants, solve the seemingly difficult task of word segmentation: they calculate complex statistics about the probability of one syllable following the other.  Until now we did not know if any other mammal can also use such complex computations to extract words from speech.  We decided to test family dogs’ brain capacities for statistical learning from speech.  Dogs are the earliest domesticated animal species and probably the one we speak most often to.  Still, we know very little about the neural processes underlying their word learning capacities."

So remember this next time you talk to your dog.  He might well be understanding more than you realize.  He might not get much if you read to him from A Brief History of Time, but my guess is that common speech is less of a mystery to him than it might have seemed.

**********************************

My master's degree is in historical linguistics, with a focus on Scandinavia and Great Britain (and the interactions between them) -- so it was with great interest that I read Cat Jarman's book River Kings: A New History of Vikings from Scandinavia to the Silk Road.

Jarman, who is an archaeologist working for the University of Bristol and the Scandinavian Museum of Cultural History of the University of Oslo, is one of the world's experts on the Viking Age.  She does a great job of de-mythologizing these wide-traveling raiders, explorers, and merchants, taking them out of the caricature depictions of guys with blond braids and horned helmets into the reality of a complex, dynamic culture that impacted lands and people from Labrador to China.

River Kings is a brilliantly-written analysis of an often-misunderstood group -- beginning with the fact that "Viking" isn't an ethnic designation, but an occupation -- and tracing artifacts they left behind traveling between their homeland in Sweden, Norway, and Denmark to Iceland, the Hebrides, Normandy, the Silk Road, and Russia.  (In fact, the Rus -- the people who founded, and gave their name to, Russia -- were Scandinavian explorers who settled in what is now the Ukraine and western Russia, intermarrying with the Slavic population there and eventually forming a unique melded culture.)

If you are interested in the Vikings or in European history in general, you should put Jarman's book in your to-read list.  It goes a long way toward replacing the legendary status of these fierce, sea-going people with a historically-accurate reality that is just as fascinating.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Saturday, July 10, 2021

F-word origin

Being a linguistics nerd, I've often wondered why the phonemic repertoire differs between different languages.  Put more simply: why do languages all sound different?

I first ran into this -- although I had to have it pointed out to me -- with French and English.  I grew up in a bilingual family (my mom's first language was French), so while I'd heard, and to a lesser extent spoken, French during my entire childhood I'd never noticed that there were sounds in one language that didn't occur in the other.  When I took my first formal French class as a ninth-grader, the teacher told us that French has two sounds that don't occur in English at all -- the vowel sound in the pronoun tu (represented in the International Phonetic Alphabet as /y/) and the one in coeur (represented as /œ/).  Also, the English r-sound (/r/) and the French r-sound (/ʁ/) aren't the same -- the English one doesn't occur in French, and vice-versa.

The International Phonetic Alphabet [image is in the Public Domain]

Not only are there different phonemes in different languages, the number of phonemes can vary tremendously.  The Hawaiian language has only thirteen different phonemes: /a/, /e/, /i/, /o/, /u/, /k/, /p/, /h/, /m/, /n/, /l/, /w/, and /ʔ/.  The last is the glottal stop -- usually represented in written Hawaiian as an apostrophe, as in the word for "circle" -- po'ai.

If you're curious, the largest phonemic inventory of any human language is Taa, one of the Khoisan family of languages, spoken mainly by people in western Botswana.  Taa has 107 different phonemes, including 43 different "click consonants."  If you want to hear the most famous example of a language with click consonants, check out this recording of the incomparable South African singer Miriam Makeba singing the Xhosa folk song "Qongqothwane:"


It's a mystery why different languages have such dramatically different sound systems, but at least a piece of it may have been cleared up by a paper in Science last week that was sent my way by my buddy Andrew Butters, writer and blogger over at the wonderful Potato Chip Math.  The contention -- which sounds silly until you see the evidence -- is that the commonness of the labiodental fricative sounds, /f/ and /v/, is due to an alteration in our bites that occurred when we switched to eating softer foods when agriculture became prominent.

I was a little dubious, but the authors make their case well.  Computer modeling of bite physiology and sound production shows that producing the /f/ and /v/ phonemes takes 29% less effort with an overbite than with an edge-to-edge bite.  Most persuasively, they found that current languages spoken by hunter-gatherer societies have only one-quarter the incidence of labiodental fricatives that other languages do.

So apparently my overbite and fondness for mashed potatoes are why I like the f-word so much.  Who knew?  As I responded to Andrew, "Wow, this is pretty fucking fascinating."

Once a language develops a sound system, it's remarkably resistant to change, probably because one of the first pieces of language a baby learns is the phonetic repertoire, and after that it's pretty well locked in for life.  In her wonderful TED Talk, linguist Patricia Kuhl describes studying the phonetics of babbling.  When babies first start to vocalize at age about three months, they make sounds of just about every sort.  But between six and nine months, something fascinating happens -- they stop making sounds they're not hearing, and even though they're still not speaking actual words, the sound repertoire gradually becomes the one from the language they're exposed to.  One example is the English /l/ and /r/ phonemes, as compared to the Japanese liquid consonant [ɾ] (sometimes described as being halfway between an English /l/ and an English /r/).  Very young babies will vocalize all three sounds -- but by nine months, a baby hearing English will retain /l/ and /r/ and stop saying [ɾ], while a baby hearing Japanese does exactly the opposite.

If you've studied a second language that has a different phonemic set than your native language, you know that getting the sounds right is one of the hardest things to do well.  As a friend of mine put it, "My mouth just won't wrap itself around French sounds."  This is undoubtedly because we learn the phonetics of our native language so young -- and once that window has closed, adding to and rearranging our phonemic inventory becomes a real challenge.

So if you've ever wondered why your language has the sounds it does, here's at least a partial explanation.  I'll end with another video that is a must-watch, especially for Americans who are interested in regional accents.  I live in upstate New York but was raised in Louisiana and spent ten years living in Seattle, so I've thought of my own speech as relatively homogenized, but maybe I should listen to myself more carefully.

*************************************

Most people define the word culture in human terms.  Language, music, laws, religion, and so on.

There is culture among other animals, however, perhaps less complex but just as fascinating.  Monkeys teach their young how to use tools.  Songbirds learn their songs from adults rather than being born knowing them -- and much like human language, if the song isn't learned during a critical window as they grow, they never become fluent.

Whales, parrots, crows, wolves... all have traditions handed down from previous generations and taught to the young.

All, therefore, have culture.

In Becoming Wild: How Animal Cultures Raise Families, Create Beauty, and Achieve Peace, ecologist and science writer Carl Safina will give you a lens into the cultures of non-human species that will leave you breathless -- and convinced that perhaps the divide between human and non-human isn't as deep and unbridgeable as it seems.  It's a beautiful, fascinating, and preconceived-notion-challenging book.  You'll never hear a coyote, see a crow fly past, or look at your pet dog the same way again.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]


Thursday, July 6, 2017

Basing education on research

If I have one major beef with the education system in the United States, it would be its steadfast refusal to use the latest research on how people learn to guide instruction.

As an example, consider how we teach foreign language.  In most public schools, foreign language instruction starts in middle school (ours doesn't begin until 8th grade).  Study after study has shown that age of acquisition is inversely correlated with final language proficiency; put simply, the older you are when you start learning, the poorer your eventual understanding of the language is likely to be.  (For a great summary of the research, check out this article by David Birdsong of the University of Texas - Austin.)

Has that changed how we teach language?  Not in most school systems, it hasn't.  Empirical research in neuroscience never seems to outweigh such considerations as "we've always done it this way" and "that's the way it was taught when I was in school" and "it would be too expensive/inconvenient."

And then, with no sense of irony, we question why students don't come out of school proficient.

So I have no particular optimism that a recent bit of research will change anything, although hope springs eternal and all that sort of stuff.  According to a report by the Amgen Foundation and Change the Equation, two groups that advocate for STEM education, American students in general are fascinated with science -- but dislike science classes.

Considering my own field, biology, the numbers are especially dire.  73% of the students questioned said they're interested in biology -- and after all, what's not to like?  Biology is all about sex, struggle, competition, and death, so if you like Game of Thrones, loving biology should be a no-brainer.  But a dismal 33% of students said they like biology class.

Why?  Because science classes in general, and biology classes in particular, usually fall back on learning from textbooks and worksheets, which were cited by these same students as their least favorite (and least successful) methods for learning new concepts.  Real-world, hands-on experiments, field trips to actual research sites and laboratories, and being able to choose the topics on which they focus are all cited as being factors that would make class more interesting -- but which are infrequently used in class, at least by comparison to book work and vocabulary worksheets.

[image courtesy of the Wikimedia Commons]

I'm sure part of the reason is that book work makes fewer demands on the teacher.  Labs are not only expensive for the school district, they are a considerable time-sink for the teacher to set up and break down.  Even more expensive and time-consuming are field trips; the district not only has to pay the bus driver to get the kids to and from the site, but pay for a sub for the teacher's other classes.  In my case -- given that last year my intro bio classes represented only half of my teaching assignment -- it would also entail my getting lessons together for my other classes that could be administered by a sub in my absence.

Unsurprising that most teachers minimize these sorts of things.

This, by the way, is not meant as a criticism of teachers, or at least not solely; we're incredibly busy, and some days I have to carve out a few minutes from the demands of my schedule just to get a chance to pee.  It's no wonder that we cut corners and economize with activities that are easy to administer and grade.  But the fact remains that these time-expensive (and often money-expensive) activities are the ones students like the best -- and engagement almost always equals improvements in learning.

One I'd like to look at more closely is "being able to choose topics on which students focus."  Author and behavioral scientist Daniel Pink, in his amazing talk, "The Surprising Truth About What Motivates Us," identifies three factors that improve engagement in both the business world and in schools: mastery, purpose, and autonomy.  Mastery is the good feeling we get from becoming better at stuff.  Purpose is feeling that what we are doing is important.  And autonomy is self-direction.

A combination of the three, Pink says, makes work and/or school far more pleasant -- and far more productive -- than the usual carrot-and-stick approach of grades and awards (the stick, of course, being failure and censure).  And I would argue that we in schools achieve mastery pretty well, purpose only infrequently, and autonomy barely at all.

We certainly encourage getting better at stuff, and (however effectively) do our best to make students improve their skills and understanding.  As far as purpose, think about what we tell students when they ask, "why do we have to learn this?"  I know some of us are able to give good answers to that, something beyond, "It's on the test" or "it's part of the curriculum" -- but even when we try to articulate why our class is important, we often do it so ineffectively that students don't believe us.  So much of what we do is disconnected enough from any real-world application that it honestly is hard to see how it connects to anything students are going to be asked to do after they graduate.

But the worst of all is autonomy.  Other than (some) choice in what classes they take, students almost never have any real, meaningful choice in what or how they learn.  I have heard of exceptions -- one school I know of teaches all of the core subjects in the context of "modules" (and before any teachers bristle at the use of the word, these are not the same "modules" used in the Common Core).  Each year, students choose four modules, two per semester, from a list of a dozen or so -- topics like "Oceans, Rivers, and Lakes," "Machines and Mechanization," and "Exploration of the World."  Each one builds in all of the subjects -- to take the first as an example, the topic of the watery part of the world incorporates biology (aquatic organisms and food webs), chemistry (the composition of fresh and marine water), physics/earth science (how bodies of water drive weather), English/writing (reading articles on the topic and writing summaries or responses), history & geography (the use of bodies of water for exploration and travel).

If you want the ultimate expression of how autonomy can generate success, though, consider schools in Finland -- ranked year after year at the top of every measure of school success there is.  But rather than my telling you about it, take an hour and watch The Finland Phenomenon (the link is to the first quarter of the documentary).  The students there are given huge amounts of autonomy with regards to how they learn the concepts and processes in the curriculum, and are tested only infrequently -- and yet, they consistently outperform our micromanaging, test-happy public schools here in the United States.

Of course, the problem is that making this kind of change would require a complete restructuring of schools -- and retraining of teachers.  The fact is, classes designed around autonomy, purpose, and mastery require dedication, excellence, and (most importantly) time from the teachers -- and time is what even dedicated and excellent teachers are usually short of.

But we've got to do something, and maybe a good start would be listening to the research instead of saying, "we've always done it this way."  After all, it's hard to argue the point that we aren't doing a very good job of turning out well-rounded, confident critical thinkers now.  Certainly there will be adjustments and growing pains and setbacks if we do such a total revamp of the educational system.  Finland's switch from a U.S.-style, top-down, worksheet-and-test system thirty years ago wasn't without some bumps.

But considering what they have now -- and what we have now -- we don't have much to lose by trying.

Saturday, September 17, 2016

The language of morality

If we needed any more indication that our moral judgments aren't as solid as we'd like to think, take a look at some research by Janet Geipel and Constantinos Hadjichristidis of the University of Trento (Italy), working with Luca Surian of Leeds University (UK).

The study, entitled "How Foreign Language Shapes Moral Judgment," appeared in the Journal of Social Psychology.  What Geipel et al. did was to present multilingual individuals with situations which most people consider morally reprehensible, but where no one (not even an animal) was deliberately hurt -- such as two siblings engaging in consensual and safe sex, and a man cooking and eating his dog after it was struck by a car and killed.  These types of situations make the vast majority of us go "Ewwwww" -- but it's sometimes hard to pinpoint exactly why that is.

"It's just horrible," is the usual fallback answer.

So did the test subjects in the study find such behavior immoral or unethical?  The unsettling answer is: it depends on what language the situation was presented in.

Across the board, if the situation was presented in the subject's first language, the judgments regarding the situation were uniformly harsher and more negative.  Presented in languages learned later in life, the subjects were much more forgiving.

The researchers controlled for which languages were being spoken; they tested (for example) native speakers of Italian who had learned English, and native speakers of English who had learned Italian.  It didn't matter what the language was; what mattered was when you learned it.

[image courtesy of the Wikimedia Commons]

The explanation they offer is that the effort of speaking a non-native language "ties up" the cognitive centers, making us focus more on the acts of speaking and understanding and less on the act of passing moral judgment.  I wonder, however, if it's more that we expect more in the way of obeying social mores from our own tribe -- we subconsciously expect people speaking other languages to act differently than we do, and therefore are more likely to give a pass to them if they break the rules that we consider proper behavior.

A related study by Catherine L. Harris, Ayşe Ayçiçeği, and Jean Berko Gleason appeared in Applied Psycholinguistics.  Entitled "Taboo Words and Reprimands Elicit Greater Autonomic Reactivity in a First Language Than in a Second Language," the study showed that our emotional reaction (as measured by skin conductivity) to swear words and harsh judgments (such as "Shame on you!") is much stronger if we hear them in our native tongue.  Even if we're fluent in the second language, we just don't take its taboo expressions and reprimands as seriously.  (Which explains why my mother, whose first language was French, smacked me in the head when I was five years old and asked her -- on my uncle's prompting -- what "va t'faire foutre" meant.)

All of which, as both a linguistics geek and someone who is interested in ethics and morality, I find fascinating.  Our moral judgments aren't as rock-solid as we think they are, and how we communicate alters our brain, sometimes in completely subconscious ways.  Once again, the neurological underpinnings of our morality turn out to be strongly dependent on context -- which is simultaneously cool and a little disturbing.

Saturday, November 21, 2015

Opening the door to the Chinese Room

The idea of artificial intelligence terrifies a lot of people.

The reasons for this fear vary.  Some are repelled by the thought that our mental processes could be emulated in a machine. Others worry that if we do develop AI, it will rise up and overthrow us, à la The Matrix.  Still others are convinced that humans have something that is inherently unrepresentable -- a heart, a soul, perhaps even simply consciousness -- so any machine that appeared to be intelligent and human-like would only be a clever replica.

The people who believe that human intelligence will never be emulated in a machine usually fall back on something like John Searle's "Chinese Room" analogy as an argument.  Searle, an American philosopher, has said that computers are simply string-conversion devices; they take an input string, manipulate it in some completely predictable way, and then create an output string which they then give you.  What they do is analogous to someone sitting in a locked room with a Chinese-English dictionary who is given a string of Chinese text, and uses the dictionary to convert it to English.  There is no true understanding; it's mere symbol manipulation.
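A "string-conversion device" in this sense takes a half-dozen lines to write.  Here's a toy Python version -- the lookup table is invented for illustration -- just to show how little understanding is involved:

```python
# A toy "string-conversion device": it maps input symbols to output symbols
# by pure table lookup, with no understanding of either language.  The table
# entries are made up for illustration.
LOOKUP = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def chinese_room(input_string):
    """Convert each known symbol; the 'room' never knows what any of it means."""
    return " ".join(LOOKUP.get(symbol, "[unknown symbol]")
                    for symbol in input_string.split())

print(chinese_room("你好 谢谢 再见"))   # -> "hello thank you goodbye"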

[image courtesy of the Wikimedia Commons]

There are two significant problems with Searle's Chinese Room.  One is the question of whether our brains themselves aren't simply string-conversion devices.  Vastly more sophisticated ones, of course; but given our brain chemistry and wiring at a given moment, it's far from a settled question whether our neural networks aren't reacting in a completely deterministic fashion.

The second, of course, is the problem that even though the woman in the Chinese Room starts out being a simple string-converter, if she keeps doing it long enough, eventually she will learn Chinese.  At that point there will be understanding going on.

Yes, says Searle, but that's because she has a human brain, which can do more than a computer can.  A machine could never abstract a language, or anything of the sort, without having explicit programming -- lists of vocabulary, syntax rules, morphological structure -- to go by.  Humans learn language starting with a highly receptive tabula rasa that is unlike anything that could be emulated in a computer.

Which was true, until this month.

A team of researchers at the University of Sassari (Italy) and the University of Plymouth (UK) have devised a network of two million interconnected artificial neurons that is capable of learning language "organically" -- starting with nothing, and using only communication with a human interlocutor as input.  Called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), this network is capable of doing what AI people call "bootstrapping" or "recursive self-improvement" -- it begins with only a capacity for plasticity and improves its understanding as it goes, a feature that up till now has been considered by some to be impossible to achieve.

Bruno Golosio, head of the team that created ANNABELL, writes:
ANNABELL does not have pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms, which are also present in the biological brain: synaptic plasticity and neural gating.  Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly simultaneously.  This mechanism is essential for learning and for long-term memory.  Neural gating mechanisms are based on the properties of certain neurons (called bistable neurons) to behave as switches that can be turned "on" or "off" by a control signal coming from other neurons.  When turned on, the bistable neurons transmit the signal from a part of the brain to another, otherwise they block it.  The model is able to learn, due to synaptic plasticity, to control the signals that open and close the neural gates, so as to control the flow of information among different areas.
Which in my mind blows a neat hole in the contention that the human mind has some je ne sais quoi that will never be copied in a mechanical device.  This simple model (and compared to an actual brain, it is rudimentary, however impressive Golosio's team's achievement is) is doing precisely what an infant's brain does when it learns language -- taking in input, abstracting rules, and adjusting as it goes so that it improves over time.
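For the curious, the two mechanisms Golosio describes -- co-activity strengthening a connection, and a gate that passes or blocks a signal -- can be sketched in a few lines of Python.  This is a toy two-neuron illustration with made-up numbers, nothing like ANNABELL's actual two-million-neuron architecture:

```python
import random

# Toy illustration of Hebbian-style synaptic plasticity ("neurons that fire
# together wire together") plus a gate neuron that passes or blocks a signal.
random.seed(0)
weight = 0.1                      # connection strength from neuron A to neuron B
learning_rate = 0.05

def gate(signal, gate_open):
    """A 'bistable' gate neuron: pass the signal through only when switched on."""
    return signal if gate_open else 0.0

for step in range(200):
    a_fires = random.random() > 0.5              # neuron A fires (or not)
    b_fires = a_fires and random.random() > 0.2  # B usually fires when A does
    if a_fires and b_fires:
        weight += learning_rate * (1 - weight)   # co-activity strengthens the link

print("learned weight:", round(weight, 3))
print("gated output:  ", gate(1.0 * weight, gate_open=True))
print("blocked output:", gate(1.0 * weight, gate_open=False))
```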

Myself, I think this is awesome.  I'm not particularly concerned about machines taking over the world -- for one thing, a typical human brain has about 100 billion neurons, so to have something that really could emulate anything a human could do would take scaling up ANNABELL by a factor of 50,000.  (That's assuming that an intelligent mind couldn't operate out of a brain that was more compact and efficient, which is certainly a possibility.)  I also don't think it's demeaning to humans that we may be "nothing more than meat machines," as one biologist put it.  This doesn't diminish our own personal capacity for experience, it just means that we're built from the same stuff as the rest of the universe.

Which is sort of cool.

Anyhow, what Golosio et al. have done is only the beginning of what appears to be a quantum leap in AI research.  As I've said many times, and about many things: I can't imagine what wonders await in the future.