Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, March 25, 2024

Dog days

Our new dog, Jethro, is in the middle of a six-week puppy obedience class.

After three weeks of intensive training, he reliably knows the command "Sit."  That's about it.  The difficulty is he's the most chill dog I've ever met.  He's not motivated to do much of anything except whatever it takes to get a belly rub. 

Jethro in a typical position

Otherwise, whatever he's doing, he's perfectly content to keep doing it, especially if it doesn't require any extra effort.  In class a couple of weeks ago I finally got him to lie down when I said, "Down," but then he didn't want to get up again.  In fact, he flopped over on his side and refused to move even when I tried tempting him with a doggie treat.  After a few minutes, the instructor said, "Is your dog still alive?"

I assured him that he was, and that this was typical behavior.

After a few more futile attempts, I gave up, sat on the floor, and gave him a belly rub.

Jethro, not the instructor.

So after working with Jethro in class and at home, I've reached three conclusions:

  1. He has an incredibly sweet, friendly disposition.
  2. He's cute as a button.
  3. He has the IQ of a PopTart.

When we give him a command, he looks at us with this cheerful expression, as if to say, "Those are words, aren't they?  I'm pretty sure those are words."  Then he thinks, "Maybe those words have something to do with belly rubs."  So he flops over on his back, and his lone functioning brain cell goes back to sleep, having accomplished its mission.

Jethro in a rare philosophical mood

I couldn't help but think of Jethro when I read a study out of Eötvös Loránd University in Budapest, Hungary, which looked at how a dog's electroencephalogram trace changes when it is told the names of things (rather than commands to do things).  The researchers found that the parts of the brain involved in mental representations of objects activate in dogs -- just as they do in humans.  The upshot is that dogs seem to form mental images when they hear the names of objects.

"Dogs do not only react with a learned behavior to certain words," said study lead author Marianna Boros, in an interview with Science Daily.  "They also don't just associate that word with an object based on temporal contiguity without really understanding the meaning of those words, but they activate a memory of an object when they hear its name."

Interestingly, this response seemed to be irrespective of a particular dog's vocabulary.  "It doesn't matter how many object words a dog understands," Boros said.  "Known words activate mental representations anyway, suggesting that this ability is generally present in dogs and not just in some exceptional individuals who know the names of many objects."

"Dogs are not merely learning a specific behavior to certain words, but they might actually understand the meaning of some individual words as humans do," said Lilla Magyari, who co-authored the study.  "Your dog understands more than he or she shows signs of."

Well, okay, maybe your dog does.  With Jethro, the best response he seems to be capable of is mild puzzlement.  I wish he'd been one of the test subjects, but my fear would be that when they'd say a word to him, the response on the EEG would be *soft static*, and the researchers would come to me with grave expressions and say, "I'm sorry to give you the bad news, Mr. Bonnet, but your dog appears not to have any higher brain function."

Of course, I have to admit that it's hard to distinguish between "I don't understand what you're saying" and "I don't give a damn about what you're saying."  Yesterday, when my wife was trying to teach him to catch a foam rubber frisbee and he repeatedly let it bonk off the top of his head, it might be that he knew perfectly well what she wanted him to do and just didn't want to do it.  So perhaps Lilla Magyari's right, and he's smarter than we think he is.

Given how often he's persuaded us to give up on all the "Sit," "Down," and "Stay" bullshit and just give him a belly rub, maybe he's not the one who's a slow learner.

****************************************



Saturday, March 23, 2024

Twisted faces

One of the most terrifying episodes The X Files ever did was called "Folie à Deux."  In the opening scene, a man sees his boss not as a human but as a hideous-looking insectile alien who is, one by one, turning the workers in the company into undead zombies.

The worst part is that he's the only one who sees all of this.  Everyone else thinks everything is perfectly normal.

The episode captures in appropriately ghastly fashion the horror of psychosis -- the absolute conviction that the awful things you're experiencing are real despite everyone's reassurance that they're not.  In the show, of course, they are real; it's the people who aren't seeing it who are delusional.  But when this sort of thing happens in the real world, it is one of the scariest things I can imagine.  As I used to point out in my neuroscience classes, your brain takes the information it receives from your sensory organs and tries to assemble a picture of reality from those inputs; if something goes wrong and the brain puts that information together incorrectly, that flawed picture becomes your reality.  At that point, there is no reliable way to distinguish reality from hallucination.

I was, unfortunately, reminded of that episode when a friend and loyal reader of Skeptophilia sent me a link yesterday to a story in NBC News Online about a man with prosopometamorphopsia, a (thank heaven) rare disorder that causes the patient's perception of human faces to go awry.  When he looks at another person, he sees their face as grotesquely stretched, with deep grooves in the forehead and cheeks.

Computer-generated images of what the patient describes seeing [Image credit: Antônio Mello, Dartmouth College]

Weirdly, it doesn't happen when he looks at a drawing or a photograph; only actual faces trigger the shift.  A moving face -- someone talking, for example -- accentuates the distortion.

Some people with prosopometamorphopsia (PMO) have it from birth; most, though, acquire it through physical damage to the brain, such as a stroke or traumatic brain injury.  MRI images of the patient who was the first subject of this study show a lesion on the left side of his brain that is undoubtedly the origin of the distorted perception.  As for what caused the lesion, he had a severe concussion in his forties (he's now 59), but he also suffered accidental carbon monoxide poisoning four months before the onset of symptoms.  Whether one of those is the root cause, or it's something else entirely, is unknown.

At least now that he knows what's going on, he has been reassured that he's not going insane -- or worse, seeing the world as it actually is and, like the man in "Folie à Deux," becoming convinced that he's the only one who does.  "My first thought was I woke up in a demon world," the patient told researchers, regarding how he felt when the symptoms started.  "I came so close to having myself institutionalized.  If I can help anybody from the trauma that I experienced with it and keep people from being institutionalized and put on drugs because of it, that’s my number-one goal."

I was immediately reminded of a superficially similar disorder called Charles Bonnet syndrome. (Nota bene: Charles Bonnet is no relation.  My French great-grandfather's name was changed upon arrival in the United States, so my last name shouldn't even be Bonnet.)  In this disorder, people with partial blindness, often from macular degeneration, start putting together the damaged and incomplete information their eyes are relaying to their brains in novel ways, causing what are called visual release hallucinations.  They can be complex -- one elderly woman saw what appeared to be tame lions strolling about in her house -- but there's no actual psychosis.  The people experiencing them, as with PMO, know (or can be convinced) that what they're seeing isn't real, which takes away a great deal of the anxiety, fear, and trauma of having hallucinations.

So at least that's one upside for PMO sufferers.  Still, it's got to be disorienting to look at the world around you and know for certain that what you're seeing isn't the way it actually is.  My eyesight isn't great, even with bifocals, but at least what I am seeing is real.  I'll take that over twisted faces and illusory lions any day.

****************************************



Friday, March 22, 2024

Leading the way into darkness

New from the "I Thought We Already Settled This" department, we have: the West Virginia State Legislature has passed a bill -- and the Governor is expected to sign it -- that would allow the teaching of Intelligent Design and other "alternative theories" to evolution in public school biology classes.

It doesn't state this in so many words, of course.  The Dover (PA) decision of 2005 ruled that ID is not a scientific theory, that it has no place in the classroom, and that teaching it violates the Establishment Clause of the United States Constitution.  No, the anti-evolutionists have learned from their mistakes.  State Senator Amy Grady (R), who introduced the bill, deliberately eliminated any specific mention of ID from its wording.  It says "no local school board, school superintendent, or school principal shall prohibit a public school classroom teacher from discussing and answering questions from students about scientific theories of how the universe and/or life came to exist" -- but when questioned on the floor of the Senate, Grady reluctantly admitted that it would allow ID to be discussed.

And, in the hands of a teacher who was a creationist, to be presented as a viable alternative to evolution.

I think the thing that frosts me the most about all this is an exchange between Grady and Senator Mike Woelfel (D) about using the words "scientific theories" without defining them.  Woelfel asked Grady if there was such a definition in the bill, and she said there wasn't, but then said,  "The definition of a theory is that there is some data that proves something to be true.  But it doesn’t have to be proven entirely true."

*brief pause for me to scream obscenities*

No, Senator Grady, that is not the definition of a theory.  I know a lot of your colleagues in the Republican Party think we live in a "post-truth world" and agree with Kellyanne Conway that there are "alternative facts," but in science you can't just make shit up, or define terms whatever way you like and then base your argument on those skewed definitions.  Let me clarify for you what a scientific theory is, which I only have to do because apparently you can't even be bothered to read the first paragraph of a fucking Wikipedia article:

A scientific theory is an explanation of an aspect of the natural world and universe that can be (or a fortiori, that has been) repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results.  Where possible, some theories are tested under controlled conditions in an experiment... Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

Intelligent Design is not a theory.  It does not come from the scientific method, it is not based on data and measurements, and it makes no predictions.  It hinges on the idea of irreducible complexity -- that there are structures or phenomena in biology that are too complex, or have too many interdependent pieces, to have arisen through evolution.  This sounds fancy, but it boils down to "we don't understand this, therefore God did it."  (If you want an absolutely brilliant takedown of Intelligent Design, read Richard Dawkins's book The Blind Watchmaker.  How, after reading that, anyone can buy ID is beyond me.)

[Image licensed under the Creative Commons Hannes Grobe, Watch with no background, CC BY 3.0]

And don't even get me started on Young-Earth Creationism.

What gets me is how few people are willing to call out people like Amy Grady on their bullshit.  People seem to have become afraid to stand up and say, "You are wrong."  "Alternative facts" aren't facts; they are errors at best and outright lies at worst.

And if we live in a "post-truth world" it's because we're choosing to accept errors and lies rather than standing up to them.

As historian Timothy Snyder put it, in his 2021 essay "The American Abyss":

Post-truth is pre-fascism...  When we give up on truth, we concede power to those with the wealth and charisma to create spectacle in its place.  Without agreement about some basic facts, citizens cannot form the civil society that would allow them to defend themselves.  If we lose the institutions that produce facts that are pertinent to us, then we tend to wallow in attractive abstractions and fictions...  Post-truth wears away the rule of law and invites a regime of myth.

But Carl Sagan warned us of this almost thirty years ago, in his brilliant (if unsettling) book The Demon-Haunted World: Science as a Candle in the Dark:

Science is more than a body of knowledge; it is a way of thinking.  I have a foreboding of an America in my children's or grandchildren's time – when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness.

People like Amy Grady are leading the way into that darkness, and it seems like hardly anyone notices.

We cannot afford to have a generation of children going through public school and coming out thinking that ignorant superstition is a theory, that sloppily-defined terms are truth, and that pandering to the demands of a few that their favorite myths be elevated to the status of fact is how science is done.  It's time to stand up to the people who are trying to co-opt education into religious indoctrination.

In the Dover Decision, we won a battle, but it's becoming increasingly apparent that we have not yet won the war.

****************************************



Thursday, March 21, 2024

Crown jewel

A white dwarf is the remnant of an average-to-small star at the end of its life.  When a star like our own Sun exhausts its hydrogen fuel, it goes through a brief period of fusing helium into carbon and oxygen, but that too eventually runs out.  This creates an imbalance between the two opposing forces ruling a star's life -- the outward thermal pressure from the heat released by fusion, and the inward compression from gravity.  When fusion ceases, the thermal pressure drops, and the star collapses until the electron degeneracy pressure becomes high enough to halt the collapse.  The Pauli Exclusion Principle states that no two electrons can occupy the same quantum state, and the force generated to prevent that from happening is sufficient to counterbalance the gravitational compression.  (At higher masses, even that's not enough to stop the collapse; the electrons are forced to combine with protons, generating a neutron star, or at higher masses still, a black hole.)

For a star like our Sun, in a single-star system, that's pretty much that.  The outer layers of the star's atmosphere get blown away to form a ghostly shell called a planetary nebula, and the white dwarf -- actually the star's core -- remains to slowly cool down and dim over the next billion-odd years.  But in multiple-star systems, something far more interesting happens.

White dwarfs, although nowhere near as dense as neutron stars, still have a strong gravitational field.  If the white dwarf is part of a close binary system, the gravitational pull of the white dwarf is sufficient to siphon off gas from the upper atmosphere of its companion star.  The material from the companion is heated and compressed as it falls toward the white-hot surface of the white dwarf, and once enough of it builds up, it suddenly becomes hot enough to fuse, generating a huge burst of energy in a runaway thermonuclear reaction.

The result is called a nova -- a "new star" -- even though it's not new at all; it has merely flared up enough to be seen from a long way away.  (The other name for this phenomenon is a cataclysmic binary, which I like better not only because it's more accurate but because it sounds badass.)  Once the new fuel gets exhausted, the star dims again, but the process merely starts over.  The siphoning restarts, and depending on the rate of accretion, there'll eventually be another flare-up.

Artist's concept of a nova flare-up [Image courtesy of NASA Conceptual Image Lab/Goddard Space Flight Center]

The topic comes up because there is a recurrent nova that is due to erupt soon, and when it does, a "new star" will be visible in the Northern Hemisphere.  It's in the rather dim, crescent-shaped constellation of Corona Borealis, between Boötes and Hercules, which can be seen in the evening in late spring to midsummer.  The star T Coronae Borealis is ordinarily magnitude +10, and thus far too dim to see with the naked eye; most people can't see anything unaided dimmer than magnitude +6, and that's if you've got great eyes and it's a completely clear, dark night.  But in 1946 this particular star started to dim even more, then suddenly flared up to magnitude +2 -- about as bright as Polaris -- before gradually dimming over the next days to weeks back down to its previous near-invisibility.
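To put numbers on what a jump like that means: the magnitude scale is logarithmic, with every five magnitudes corresponding to a factor of one hundred in brightness.  Here's a quick back-of-the-envelope sketch (in Python, purely as an illustration) of the 1946 outburst:

```python
# Pogson's relation: a difference of 5 magnitudes = a factor of 100 in brightness,
# so the brightness ratio between a dimmer magnitude m_dim and a brighter m_bright
# is 10 ** (0.4 * (m_dim - m_bright)).

def brightness_ratio(m_dim, m_bright):
    """How many times brighter a magnitude-m_bright object is than a magnitude-m_dim one."""
    return 10 ** (0.4 * (m_dim - m_bright))

# T CrB's 1946 outburst: quiescent magnitude ~ +10, peak ~ +2
print(brightness_ratio(10, 2))   # ~1585 -- the star briefly got over 1,500 times brighter
# And the naked-eye threshold of ~ +6 versus the quiescent +10:
print(brightness_ratio(10, 6))   # ~40 -- it has to brighten roughly 40-fold just to become visible
```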

And the astrophysicists are seeing signs that it's about to repeat its behavior from 78 years ago.  The best guesses are that it'll flare some time before September, which is perfect timing for seeing it if you live in the Northern Hemisphere.  If you're a star-watcher, keep an eye on the usually unremarkable constellation of Corona Borealis -- at some point soon, there will be a new jewel in the crown, albeit a transient one.

You have to wonder, though, if at some point the white dwarf in the T Coronae Borealis binary system will pick up enough extra mass from its companion to cross the Chandrasekhar Limit.  This value -- about 1.4 solar masses -- was determined by the brilliant Indian physicist Subrahmanyan Chandrasekhar as the maximum mass a white dwarf can have before electron degeneracy pressure is no longer sufficient to halt the collapse.  At that point, it falls inward so fast that the entire star blows itself to smithereens in a type Ia supernova, one of the most spectacular events in the universe.  If T Coronae Borealis did this -- not that it's likely any time soon -- it would be far brighter than Venus, easily visible in broad daylight, probably for weeks to months.
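As an aside, the limit itself falls out of nothing more than fundamental constants plus the star's composition.  Here's a rough sanity check of the standard textbook expression (my own sketch in Python; mu_e is the mean number of nucleons per electron, about 2 for a carbon-oxygen white dwarf):

```python
from math import pi, sqrt

hbar  = 1.0546e-34   # J*s
c     = 2.9979e8     # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
m_H   = 1.6726e-27   # kg (mass of a hydrogen nucleus)
M_sun = 1.989e30     # kg
mu_e  = 2.0          # mean nucleons per electron (carbon-oxygen white dwarf)
omega = 2.018        # Lane-Emden constant for the n = 3 polytrope

M_ch = (omega * sqrt(3 * pi) / 2) * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
print(M_ch / M_sun)  # ~1.4 -- the Chandrasekhar limit, in solar masses
```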

Now that I would like to see.

****************************************



Wednesday, March 20, 2024

Grammar wars

In linguistics, there's a bit of a line in the sand drawn between the descriptivists and the prescriptivists.  The former believe that the role of linguists is simply to describe language, not establish hard-and-fast rules for how language should be.  The latter believe that grammar and other linguistic rules exist in order to keep language stable and consistent, and therefore there are usages that are wrong, illogical, or just plain ugly.

Of course, most linguists don't fall squarely into one camp or the other; a lot of us are descriptivists up to a point, after which we say, "Okay, that's wrong."  I have to admit that I'm of a far more descriptivist bent myself, but there are some things that bring out my inner ruler-wielding grammar teacher, like when I see people write "alot."  Drives me nuts.  And I know it's now become acceptable, but "alright" affects me exactly the same way.

It's "all right," dammit.

However, some research published in Nature shows that, if you're of a prescriptivist disposition, eventually you're going to lose.

In "Detecting Evolutionary Forces in Language Change," Mitchell G. Newberry, Christopher A. Ahern, Robin Clark, and Joshua B. Plotkin of the University of Pennsylvania describe that language change is inevitable, unstoppable, and even the toughest prescriptivist out there isn't going to halt the adoption of new words and grammatical forms.

The researchers analyzed over a hundred thousand texts from 1810 onward, looking for changes in morphology -- for example, the decline of past tense forms like "leapt" and "spilt" in favor of "leaped" and "spilled."  The conventional wisdom was that irregular forms (like pluralizing "goose" to "geese") persist because they're common, while less common words, like "turf" -- which once pluralized to "turves" -- regularize because people don't use them often enough to learn the irregular inflection, so the regular form (in this case, "turfs") eventually takes over.

The research by Newberry et al. shows that this isn't true -- when there are two competing forms, which one wins is more a matter of random chance than of commonness.  They draw a very cool analogy between this phenomenon, which they call stochastic drift, and the genetic drift experienced by evolving populations of living organisms.
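If you want a feel for how far pure chance can take a competition between two forms, the population geneticists' simplest toy model works just as well for words.  The sketch below is my own illustration (not code from the Newberry et al. paper): each "generation" of speakers adopts "leaped" or "leapt" in proportion to how often the previous generation used it, with no selection at all -- and one form still eventually takes over.

```python
import random

def neutral_drift(pop_size=1000, start_freq=0.5, max_generations=20_000):
    """Wright-Fisher-style neutral drift between two competing word forms.

    Each generation, every speaker independently adopts the newer variant
    (say, "leaped" rather than "leapt") with probability equal to its
    current frequency.  Neither form is 'better'; chance alone decides.
    """
    freq = start_freq
    for generation in range(max_generations):
        if freq in (0.0, 1.0):                     # one form has completely taken over
            return generation, freq
        count = sum(random.random() < freq for _ in range(pop_size))
        freq = count / pop_size                    # next generation's frequency
    return max_generations, freq

random.seed(1946)
gens, freq = neutral_drift()
print(f"After {gens} generations, the frequency of 'leaped' is {freq}")
# Typically one form fixes at 0.0 or 1.0 within a few thousand generations --
# even though neither ever had any advantage.
```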

"Whether it is by random chance or selection, one of the things that is true about English – and indeed other languages – is that the language changes,” said Joshua Plotkin, who co-authored the study.  "The grammarians might [win the battle] for a decade, but certainly over a century they are going to be on the losing side.  The prevailing view is that if language is changing it should in general change towards the regular form, because the regular form is easier to remember.  But chance can play an important role even in language evolution – as we know it does in biological evolution."

So in the ongoing battles over changes in grammar, pronunciation, and spelling, the purists are probably doomed to fail.  It's worth remembering how many words in modern English, now completely accepted by descriptivist and prescriptivist alike, are the result of such mangling.  Both "adder" and "umpire" came about because of an improper split of the indefinite article ("a naddre" and "a noumpere" became "an adder" and "an umpire").  "To burgle" came about because of a phenomenon called back formation -- when a common linguistic pattern gets applied improperly to a word that sounds like it has the same basic construction.  A teacher teaches, a baker bakes, so a burglar must burgle.  (I'm surprised, frankly, given how English yanks words around, we don't have carpenters carpenting.)


Anyhow, if this is read by any hard-core prescriptivists, all I can say is "I'm sorry."  It's a pity, but the world doesn't always work the way we'd like it to.  But even so, I'm damned if I'm going to use "alright" and "alot."  A line has to be drawn somewhere.  And I'm gonna draw it a lot, all right?

****************************************



Tuesday, March 19, 2024

Cosmological conundrums

Three of the most vexing problems in physics -- and ones I've hit on a number of times here at Skeptophilia -- are:
  1. dark matter -- the stuff that (by its gravitational influence) seems to make up 26% of the mass/energy of the universe, and yet has resisted every effort at detection or inquiry into what other properties it might have.
  2. dark energy -- a mysterious "something" that is said to be responsible for the apparent runaway expansion of the universe, and which (like dark matter) has defied detection or explanation in any other way.  This makes up 69% of the universe's mass/energy -- meaning the ordinary matter we're made of comprises only 5% of the apparent content of the universe.
  3. the conflict between the general theory of relativity (i.e. the theory of gravitation) and quantum physics.  In the realm of the very small (or at high energies), the theory of relativity falls apart -- it's irreconcilable with the nondeterministic model of quantum mechanics.  Despite over a century of the best minds in theoretical physics trying to find a quantum theory of gravity, the two most fundamental underpinnings of our understanding of the universe just don't play well together.

A while back I was discussing this with the fiddler in my band, who also happened to be a Cornell physics lecturer.  Her comment was that the mess physics is currently in suggests we're missing something major -- the same way that the apparent constancy of the speed of light in a vacuum, regardless of reference frame, created an intractable nightmare for physicists at the end of the nineteenth century.  It took Einstein, with his theories of relativity, to show that the problem wasn't a problem at all, but a fundamental reality about how space and time work.

"We're still waiting for this century's Einstein," Kathy said.

[Image licensed under the Creative Commons ESA/Hubble, Collage of six cluster collisions with dark matter maps, CC BY 4.0]

There's no shortage of physicists working on stepping into those shoes -- and just last week, two papers came out suggesting possible solutions for the first two problems.

One claims to solve all three simultaneously.

Both of them take the same view of dark matter and dark energy that Einstein took of the luminiferous aether, the mysterious substance nineteenth-century physicists thought was the medium through which light propagated: they simply don't exist.

The first one, from Rajendra Gupta of the University of Ottawa, proposes that the need for both dark matter and dark energy in the model comes from a misconception about how the laws of physics change on a cosmological time scale.  The prevailing wisdom has been "they don't"; the laws now are the same as the laws thirteen billion years ago, not long after the Big Bang.  Gupta suggests that two modifications to the model -- assuming that the strengths of the four fundamental forces of nature (gravity, electromagnetism, and the weak and strong nuclear forces) have decreased over time, and that light loses energy as it travels over long distances -- explain all the astrophysical observations we've made and obviate the need for dark matter and dark energy.

"The study's findings confirm that our previous work -- JWST early-universe observations and ΛCDM cosmology -- about the age of the universe being 26.7 billion years [rather than the usually accepted value of 13.8 billion years] has allowed us to discover that the universe does not require dark matter to exist," Gupta said.  "In standard cosmology, the accelerated expansion of the universe is said to be caused by dark energy but is in fact due to the weakening forces of nature as it expands, not due to dark energy."

The second, by Jonathan Oppenheim and Andrea Russo of University College London, suggests a different solution that (if correct) not only gets rid of dark matter and dark energy, but in one fell swoop resolves the conflict between relativity and quantum physics.  They propose that the problem is the deterministic nature of gravity; if a quantum-like uncertainty is introduced into gravitational models, the whole shebang works without the need for some mysterious dark matter and dark energy that no one has ever been able to find experimentally.

The mathematics of the model -- which, I must admit up front, are beyond me -- introduce new terms to explain the behavior of gravity at low accelerations, which are (not coincidentally) the regime where the effects of dark matter become apparent.  It's a striking approach; physicist Sabine Hossenfelder, who is generally reluctant to hop on the latest Grand Unified Theory bandwagon (and whose pessimism has been, unfortunately, justified in the past) writes in an essay on the new theory, "Reading Oppenheim’s new papers—published in the journals Nature Communications and Physical Review X—about what he dubs 'Post-Quantum Gravity,' I have been impressed by how far he has pushed the approach.  He has developed a full-blown framework that combines quantum physics with classical physics, and he tells me that he has another paper in preparation which shows that he can solve the problem of infinites that plague the Big Bang and black holes."

Despite this, Hossenfelder is still dubious about Post-Quantum Gravity.  "I don’t want to withhold from you that I think Oppenheim’s theory is wrong, because it remains incompatible with Einstein’s cherished principle of locality, which says that causes should only travel from one place to its nearest neighbours and not jump over distances," she writes.  "I suspect that this is going to cause problems sooner or later, for example with energy conservation.  Still, I might be wrong...  If Oppenheim’s right, it would mean Einstein was both right and wrong: right in that gravity remained a classical, non-quantum theory, and wrong in that God did play dice indeed.  And I guess for the good Lord, we would have to be both sorry and not sorry."

So we'll just have to wait and see.  If either of these theories is right, we're talking Nobel Prize material.  If the second one is right, it'd be the physics discovery of the century.  Like Sabine Hossenfelder, I'm not holding my breath; attempts to solve definitively the three problems I started this post with are, thus far, batting zero.  And I'm hardly qualified to make a judgment about what the chances are for these two.  But like many interested laypeople, I'll be fascinated to see which way it goes -- and to see if we might, in the words of my bandmate/physicist friend, be "looking at the twenty-first century's Einstein."

****************************************



Monday, March 18, 2024

Memory boost

About two months ago I signed up with Duolingo to study Japanese.

I've been fascinated with Japan and the Japanese culture pretty much all my life, but I'm a total novice with the language, so I started out from "complete beginner" status.  I'm doing okay so far, although the fact that it's got three writing systems is a challenge, to put it mildly.  Like most Japanese programs, it's beginning with the hiragana system -- a syllabic script that allows you to work out the pronunciation of words -- but I've already seen a bit of katakana (used primarily for words borrowed from other languages) and even a couple of kanji (the ideographic script, where a character represents an entire word or concept).

[Image licensed under the Creative Commons 663highland, 140405 Tsu Castle Tsu MIe pref Japan01s, CC BY-SA 3.0]

While Duolingo focuses on getting you listening to spoken Japanese right away, my linguistics training has me already looking for patterns -- such as the fact that wa after a noun seems to act as a subject marker, and ka at the end of a sentence turns it into a question.  I'm still perplexed by some of the pronunciation patterns -- why, for example, vowel sounds sometimes don't get pronounced.  The first case of this I noticed is that the family name of the brilliant author Akutagawa Ryūnosuke is pronounced /ak'tagawa/ -- the /u/ in the second syllable virtually disappears.  I hear it happening fairly commonly in spoken Japanese, but I haven't been able to deduce what the pattern is.  (If there is one.  If there's one thing my linguistics studies have taught me, it's that all languages have quirks.  Try explaining to someone new to English why, for instance, the -ough combination in cough, rough, through, bough, and thorough is pronounced differently in every one of them.)

Still and all, I'm coming along.  I've learned some useful phrases like "Sushi and water, please" (Sushi to mizu, kudasai) and "Excuse me, where is the train station?" (Sumimasen, eki wa doko desu ka?), as well as less useful ones like "Naomi Yamaguchi is cute" (Yamaguchi Naomi-san wa kawaii desu), which is only critical to know if you have a cute friend who happens to be named Naomi Yamaguchi.

The memorization, however, is often taxing to my 63-year-old brain.  Good for it, I have no doubt -- a recent study found that being bi- or multi-lingual can delay the onset of dementia by four years or more -- but it definitely is a challenge.  I go through my hiragana flash cards at least once a day, and have copious notes for what words mean and for any grammatical oddness I happen to notice.  Just the sheer amount of memorization, though, is kind of daunting.

Maybe what I should do is find a way to change the context in which I have to remember particular words, phrases, or characters.  That seems to be the upshot of a paper I ran into a couple of days ago in Proceedings of the National Academy of Sciences, describing a study by a group from Temple University and the University of Pittsburgh on how to improve retention.

I'm sure all of us have experienced the effects of cramming for a test -- studying like hell the night before, then doing okay on the test but barely remembering any of it a week later.  This practice does two things wrong: not only stuffing all the studying into a single session, but doing it all the same way.

What this study identified was two factors that significantly improved long-term memory.  One was spacing out study sessions -- doing shorter sessions more often definitely helped.  I'm already approaching Duolingo this way, usually doing a lesson or two over my morning coffee, then hitting it again for a few more after dinner.  The other interesting finding was that test subjects' memories improved substantially when the context was changed -- when, for example, you're trying to remember as much as you can of what a specific person is wearing, but instead of being shown the same photograph over and over, you're given photographs of the person wearing the same clothes but in a different setting each time.
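That first factor -- spacing -- is the same principle behind the scheduling in most flash-card apps.  Just as an illustration of the idea (this is my own minimal sketch of a Leitner-style scheduler, not anything from the PNAS study or from Duolingo's actual code), each correct recall pushes a card into a box that's reviewed at a longer interval, and each miss sends it back to the daily pile:

```python
from dataclasses import dataclass

# Review intervals (in days) for each Leitner box; a missed card drops back to box 0.
INTERVALS = [1, 2, 4, 8, 16]

@dataclass
class Card:
    front: str        # e.g., the hiragana character
    back: str         # e.g., its romanized pronunciation
    box: int = 0      # which Leitner box the card currently sits in
    due_in: int = 1   # days until the next review

def review(card: Card, recalled_correctly: bool) -> Card:
    """Promote or demote the card, then reschedule its next review."""
    if recalled_correctly:
        card.box = min(card.box + 1, len(INTERVALS) - 1)   # longer gap before the next look
    else:
        card.box = 0                                       # back to daily review
    card.due_in = INTERVALS[card.box]
    return card

card = Card(front="ぬ", back="nu")
card = review(card, recalled_correctly=True)
print(card.box, card.due_in)   # 1 2 -- next review in two days instead of tomorrow
```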

"We were able to ask how memory is impacted both by what is being learned -- whether that is an exact repetition or instead, contains variations or changes -- as well as when it is learned over repeated study opportunities," said Emily Cowan, lead author of the study.  "In other words... we could examine how having material that more closely resembles our experiences of repetition in the real world -- where some aspects stay the same but others differ -- impacts memory if you are exposed to that information in quick succession versus over longer intervals, from seconds to minutes, or hours to days."

I can say that this is one of the things Duolingo does right.  Words are repeated, but in different combinations and in different ways -- spoken, spelled out using the English transliteration, or in hiragana only.  Rather than always presenting the same word in the same context, it strikes a balance between the repetition we all need when learning a new language and pushing your brain to generalize to slightly different usages or contexts.

So all things considered, Duolingo had it figured out even before the latest research came out.  I'm hoping it pays off, because my son and I would like to take a trip to Japan at some point and be able to get along, even if we don't meet anyone cute named Naomi Yamaguchi.  But I should wind this up, so for now I'll say ja ne, mata ashita (goodbye, see you tomorrow).

****************************************