Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, May 6, 2023

Resurrecting a fossil

A year ago I wrote about linguistic isolates -- single languages, or small clusters of related languages, that have no apparent relation to any other language on Earth.  The problem, of course, is that being spoken by only a small number of people, these are some of the most endangered languages.  There are many of them for which the last native speakers are already elderly, and there's a high risk of their going extinct without ever being thoroughly studied.

And as I pointed out in my post, the sad part of that is that each one of those is a lens into a specific culture and a particular way of thinking.  Once lost, they're gone forever, or only exist in scattered remnants, like the fossils of extinct animals.  What you can reconstruct from these relics is perhaps better than nothing, but still, there's always an elegiac sense of what we've lost, and what we're still losing.

This topic comes up because of an article in Smithsonian sent to me by a friend and frequent contributor of topics to Skeptophilia, about some linguists who are trying to reconstruct the extinct indigenous Timucuan language of northern Florida.  Timucuan was a linguistic isolate, and seems to be unrelated to the languages spoken by neighboring groups (such as the Seminole, Muscogee, and Choctaw).  The Timucua people, who at the time of European contact in 1595 numbered an estimated 200,000 across 35 chiefdoms, each speaking a different dialect, were decimated by war and by diseases like smallpox.  By 1700, there were only about a thousand Timucuans left, and the slave trade eradicated those few survivors.  A genetic study is currently underway to determine whether some populations in Cuba might be descendants of the Timucua, but so far the results are inconclusive.

This would just be another in the long list of complete and irretrievable cultural loss, if it weren't for the efforts of linguists Alejandra Dubcovsky and Aaron Broadwell.  Working with a handful of letters written in Timucuan (using the Latin alphabet), and a rather amazing bilingual document by Spanish missionary Francisco Pareja called Confessions in the Castilian and Timucua Language, With Some Tips to Encourage the Penitent, they have assembled the first Timucuan dictionary and grammar, and reconstructed how a long-gone people spoke.

A page from the Confessions, with Spanish on the left and Timucuan on the right [Image courtesy of the John Carter Brown Library, Brown University]

Which is incredibly cool, but there's also a wryly amusing side to it, because with Dubcovsky's and Broadwell's knowledge of the Timucuan language, they're able to compare what Pareja wanted the translators to say with what they actually did say.  "Our favorite is the description of marriage," Dubcovsky said.  "The Spanish side asks very clearly, 'Have the man and a woman been joined together in front of a priest?'  And the Timucua version of that sentence is, 'Did you and another person consent to be married?'  The Timucua translation not only takes out any mention of gender, but it also removes any mention of a religious officiant.  A priest did not write this, because a priest does not forget to include himself in the story."

So the Confessions document is not only a Rosetta Stone for Timucuan, it gives us a fascinating window into how the Timucuan translators saw the Spanish Catholic culture that was being imposed upon them.

It's tragic that this language and its people were so thoughtlessly (and ruthlessly) eradicated; worse still that such tragedies are all too common.  So it's all the more important that people like Dubcovsky and Broadwell work to resurrect these extinct languages from the scant fossils they left behind.  It can't ever repair the damage that was done, but it at least allows us to glimpse the minds of an extinct culture -- and to honor their memory in whatever way we can.

****************************************



Friday, May 5, 2023

Rough neighborhood

In keeping with the stargazing topics that have been our focus this week, today we're going to start with my favorite naked-eye astronomical object: the Pleiades.

[Image is in the Public Domain courtesy of NASA/JPL]

It's also known as the Seven Sisters; in Greek mythology, the seven brightest stars (about all you can see without a telescope, even if you have good vision) represented the seven daughters of the Titan Atlas and the Oceanid nymph Pleione.  Where I live they're visible in the winter; I love seeing them glittering in the black sky on cold, clear nights.

The Pleiades are mostly hot type-B stars, and the whole group is about 444 light years from Earth, making it one of the closest star clusters.  Stars of this class are so energetic that they have relatively short life spans.  It's estimated that the Pleiades formed about a hundred million years ago from a cloud of gas and dust similar to the Orion Nebula; already the energy output of the individual stars is blowing away the shroud of material from which they were formed, resulting in the halo-like "reflection nebulae" you see surrounding them.

They're also moving away from each other, leaving the "stellar nursery" in which they were born.  In another couple of hundred million years, they will have separated widely enough that future astronomers (assuming there are any around) will have no obvious way to know they started out in the same region of space.  Plus, the biggest and brightest of them will already be approaching the ends of their lives, each destined to explode in the violent cataclysm of a supernova, leaving behind a rapidly rotating stellar remnant called a neutron star, spinning like a lighthouse beacon to mark the spot where a star died.

The reason all this comes up is some recent research into the composition of the stellar nursery where the Sun formed.  Because it, after all, was born the same way; along with a number of siblings, it coalesced in a massive cloud of hydrogen and helium, with a few heavier elements thrown in as well.  When you look up into the night sky, any of the stars you see could be one of the Sun's sibs.  It's impossible, from where science currently stands, to tell which ones.  They've all undoubtedly traveled a long way away from their point of origin in the 4.6 billion years since they formed.

But the research, which appeared in the Monthly Notices of the Royal Astronomical Society, uncovered a bit more about what our star's stellar nursery was like.  These formations do have some significant differences -- some are small and quiet, with only enough material to form a few stars, while others are enormous and violently active (such as the aforementioned Orion Nebula).  In particular, models of stellar formation suggest that the two different environments would leave different signatures in the quantities of heavier elements like aluminum and iron.  By measuring the amounts of these elements in meteorite fragments that are thought to be leftover material from the formation of the Solar System, the researchers concluded that the Sun formed in an intense, high-energy environment like the Orion Nebula, swept by gales of dust and hammered by the shock waves of supernovae.

What a sight that would have been.  (From a safe distance.)

So next time you see the Pleiades or Orion's Belt, think about the fact that our calm and stable home star was born in a rough neighborhood.  Lucky for us, it's grown up and settled down a little.  As beautiful as the Pleiades are, I don't think I'd fancy living there.

****************************************



Thursday, May 4, 2023

Blowing bubbles

After Monday's post, about the bizarre hypergiant star Stephenson 2-18, a reader commented, "If you think that's weird, look up 'Fermi bubbles.'"

So I did.  And... yeah.

Discovered back in 2010, the Fermi bubbles -- so named because they were discovered by NASA's Fermi Gamma-ray Space Telescope -- are a pair of nearly perfectly symmetrical bubbles of high-intensity gamma rays positioned above and below the galactic plane of the Milky Way.  They're huge; each one has a diameter of about 23,000 light years.

False-color image of the Fermi bubbles.  The Milky Way is seen edge-on, running across the middle of the photograph.  [Image is in the Public Domain courtesy of NASA/Goddard Space Flight Center]

Back in 2015, the Fermi bubbles were still completely unexplained, and in fact made #1 in Astronomy magazine's list of "The Fifty Weirdest Objects in the Universe."  That they had something to do with Sagittarius A*, the enormous black hole at the center of the galaxy, seemed like a reasonable guess; but what could create something with such a peculiar figure-eight shape was unknown.

A team led by astrophysicist Rongmon Bordoloi of the Massachusetts Institute of Technology, however, has a model to explain them.  Something around nine million years ago -- not really that far back, in the grand scheme of things -- Sagittarius A* pulled in an enormous cloud of gas and dust.  The origin of that dust cloud is uncertain, but what happened after it got caught is all too clear.  Most of it undoubtedly took the one-way trip past the event horizon, but some of it was spun so fast by the black hole's rotation and the resultant twisting of space-time that it gained enough momentum to escape along Sagittarius A*'s spin axis -- i.e., perpendicular to the galactic plane.

This not only accelerated the gas to an unimaginable two million miles an hour, it heated it -- at its edges to just shy of ten thousand degrees C, and near the point of outflow to almost ten million degrees.  It's this heating that caused it to produce gamma rays, which is how the structure was detected.
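
Just for scale, it's worth converting that outflow speed into the units astronomers usually quote.  Here's a quick back-of-the-envelope check in Python (the two-million-miles-an-hour figure is presumably a round number, so take the output as approximate):

```python
MPH_TO_KM_PER_S = 1.609344 / 3600   # exact mile-to-km factor over seconds per hour
C_KM_PER_S = 299_792.458            # speed of light, km/s

outflow_km_s = 2_000_000 * MPH_TO_KM_PER_S   # ~894 km/s
fraction_of_c = outflow_km_s / C_KM_PER_S    # ~0.3% of light speed

print(f"Outflow: {outflow_km_s:.0f} km/s, or {fraction_of_c:.2%} of the speed of light")
```

Nearly nine hundred kilometers every second -- a pace that would get you from the Earth to the Moon in about seven minutes.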

Not a phenomenon you'd want to be standing in the way of.

"We have traced the outflows of other galaxies, but we have never been able to actually map the motion of the gas," Bordoloi said, somehow resisting adding, and holy shit, this thing is amazing.  "The only reason we could do it here is because we are inside the Milky Way.  This vantage point gives us a front-row seat to map out the kinematic structure of the Milky Way outflow."

And, along the way, to figure out what's going on with the number one Weirdest Object in the Universe.  Having an explanation doesn't make it any less impressive, of course.  Gas at a temperature of ten million degrees being flung about at two million miles per hour by a ginormous black hole isn't exactly a cause for a shoulder-shrug.

Besides, there are forty-nine more weird objects (at least) left to explain.  If you're into science, it means you'll never be bored.

****************************************



Wednesday, May 3, 2023

The mind readers

In Isaac Asimov's deservedly famous short story "All the Troubles of the World," the megacomputer Multivac has so much data on each person in the world (including detailed brain scans) that it can predict ahead of time if someone is going to commit a crime.  This allows authorities to take appropriate measures -- defined, of course, in their own terms -- to prevent it from happening.

We took a step toward Asimov's dystopian vision, in which nothing you think is secret, with a paper this week in Nature Neuroscience about a new invention called a "brain activity decoder."

Developed by a team of researchers at the University of Texas at Austin, the software uses an fMRI machine to measure the neural activity in a person's brain, and is able to convert that neural activity into a continuous stream of text -- i.e., the output is what the person was thinking.

The researchers had volunteers listening to podcasts over headphones while the fMRI watched how their brains responded.  This allowed them to compare the actual text the test subjects were hearing with what the brain activity decoder picked up from them.  After only a short span of training the software, the results were scary good.  One listener heard, "I don't have my driver's license yet," and the decoder generated the output "She has not even started to learn to drive yet."  Another had the input, "I didn’t know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'", which resulted in the output, "Started to scream and cry, and then she just said, 'I told you to leave me alone.'"

Not perfect, but as a proof-of-concept, it's jaw-dropping.

[Image licensed under the Creative Commons © Nevit Dilmen, Brain MRI 131058 rgbca, CC BY-SA 3.0]

The researchers touted its possible use for people who have lost the ability to communicate, in situations like locked-in syndrome.  However, I don't think it takes an overactive imagination to come up with ways such a device could be abused.  What would happen to the concept of privacy, if a machine could read your thoughts?  What about the Fifth Amendment right against self-incrimination?  As in Asimov's story, how could the authorities separate what a person had done from what they were merely contemplating doing?

Or would they?

Jerry Tang, who led the research, emphasizes that the decoder had to be trained on the person whose thoughts were going to be read; if it were trained on me, it couldn't immediately be used to figure out what you were thinking.  My response to that is: yet.  This is already leaps and bounds past previous attempts at thought-reading, which were only able to output single words and short sentences.  Given more time and further refinements, this technique will only get better.

Or scarier, as the case may be.

Tang also pointed out that even with improvements, the software would be defeated by someone putting up resistance (e.g., deliberately thinking other things to block the fMRI from getting the correct output).  He also is aware of the possibility of abuse.  "We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that," he said.  "We want to make sure people only use these types of technologies when they want to and that it helps them."

Well, maybe.  I'm not a conspiracy-theory type, nor someone who thinks that all government is inherently bad.  Here, though, it seems like the potential for Orwellian thought-crime is a short step away.

Keep in mind, too, how generally inaccurate our brain's storage system is.  As we've seen over and over here at Skeptophilia, what we remember is an amalgam of what actually happened, what we were told happened, what we imagine happened, and a good dollop of falsehood.  False memories can be as convincingly real as accurate ones.  If the brain activity decoder were used on an unwilling person to extract his/her thoughts, there is no guarantee that the output would be at all reflective of reality.  In fact, it's almost certain not to be.

But since eyewitness testimony -- in other words, recall -- is considered one of the highest forms of evidence in a court of law, it's no stretch to wonder if a person's thoughts would be given the same undeserved weight.

I'm not sure what the right step is, honestly.  There are some who believe that a potential for misuse shouldn't stop scientific progress; anything, they argue, can be used for harm.  Others feel like the hazards can sometimes outweigh the benefits, and trusting the powers-that-be to do the right thing with technology this powerful is foolish.

I don't have an answer.  But I will say that my mind was forced back to the prescient quote from another seminal science fiction writer, Michael Crichton: "Scientists are preoccupied with accomplishment.  So they are focused on whether they can do something.  They never stop to ask if they should."

****************************************



Tuesday, May 2, 2023

Off the chart

Way back around 1910, Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell independently found a curious pattern when they plotted stars' luminosities against their temperatures.

The graph, now called the Hertzsprung-Russell Diagram in their honor, looks like this:

[Image licensed under the Creative Commons Richard Powell, HRDiagram, CC BY-SA 2.5]

Most stars fall on the broad swath running from the hot, bright stars in the upper left to the cool, dim stars in the lower right; the overall trend for these stars is that the lower the temperature, the lower the luminosity.  Stars like this are called main-sequence stars.  (If you're curious, the letter designations along the top -- O, B, A, F, G, K, and M -- refer to the spectral class the star belongs to.  The sequence grew out of the work of the Harvard Observatory astronomers, most famously Annie Jump Cannon, who devised the OBAFGKM ordering, building on the pioneering spectrographic classifications of Antonia Maury, whose work revolutionized our understanding of stellar evolution.)

There is also a sizable cluster of stars off to the upper right -- relatively low temperatures but very high luminosities.  These are giants and supergiants.  In the other corner are white dwarfs, the exposed cores of dead stars, with very high temperatures but low luminosity, which as they gradually cool slip downward to the left and finally go dark.
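
The physics behind those two off-sequence populations is the Stefan-Boltzmann law: a star's luminosity scales as the square of its radius times the fourth power of its surface temperature, so a cool star can only be highly luminous if it's enormous, and a hot star can only be dim if it's tiny.  A minimal sketch in Python, using rough literature values for radius and temperature (the figures here are approximations for illustration, not precise measurements):

```python
T_SUN_K = 5772.0  # the Sun's effective surface temperature, in kelvins

def luminosity_solar(radius_solar: float, temp_k: float) -> float:
    """Luminosity in solar units: L/L_sun = (R/R_sun)^2 * (T/T_sun)^4."""
    return radius_solar ** 2 * (temp_k / T_SUN_K) ** 4

# (radius in solar radii, surface temperature in K) -- rough values
stars = {
    "Sun":                     (1.0, 5772),
    "Betelgeuse (supergiant)": (760.0, 3_600),
    "Sirius B (white dwarf)":  (0.008, 25_000),
}

for name, (r, t) in stars.items():
    print(f"{name}: L ~ {luminosity_solar(r, t):,.3g} L_sun")
```

Betelgeuse comes out around a hundred thousand Suns despite being cooler than the Sun, and Sirius B at a few hundredths of a Sun despite being far hotter -- exactly the upper-right and lower-left outliers on the diagram.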

So there you have it; just about every star in the universe is either a main-sequence star, in the cluster with the giants and supergiants, or in the curved streak of dwarf stars at the bottom of the diagram.

Emphasis on the words "just about."

One star that challenges what we know about how stars evolve is the bizarre Stephenson 2-18, which is in the small, dim constellation Scutum ("the shield"), between Aquila and Sagittarius.  At an apparent magnitude of +15, it is only visible through a powerful telescope; it wasn't discovered until 1990, by American astronomer Charles Bruce Stephenson, after whom it is named.

Its appearance, a dim red point of light, hides how weird this thing actually is.

When Stephenson first analyzed it, he initially thought what he was coming up with couldn't possibly be correct.  For one thing, it is insanely bright, estimated at a hundred thousand times the Sun's luminosity.  Only its distance (19,000 light years) and some intervening dust clouds make it look dim.  Secondly, it's enormous.  No, really, you have no idea how big it is.  If you put Stephenson 2-18 where the Sun is, its outer edge would be somewhere near the orbit of Saturn.  You, right now, would be inside the star.  Ten billion Suns would fit inside Stephenson 2-18. 

If a photon of light circumnavigated the surface of the Sun, it would take a bit less than fifteen seconds.  To circle Stephenson 2-18 would take nine hours.
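
Those figures are easy to sanity-check.  A back-of-the-envelope calculation in Python, assuming a radius of about 2,150 solar radii (a commonly quoted estimate for Stephenson 2-18; the exact value is uncertain):

```python
import math

C_KM_S = 299_792.458      # speed of light, km/s
R_SUN_KM = 695_700.0      # solar radius, km
R_RATIO = 2_150           # assumed Stephenson 2-18 radius, in solar radii

def lap_time_s(radius_km: float) -> float:
    """Time for light to travel once around a circle of this radius."""
    return 2 * math.pi * radius_km / C_KM_S

sun_lap_s = lap_time_s(R_SUN_KM)                    # ~14.6 seconds
star_lap_h = lap_time_s(R_SUN_KM * R_RATIO) / 3600  # ~8.7 hours
volume_ratio = R_RATIO ** 3                         # ~9.9 billion Suns by volume
radius_au = R_SUN_KM * R_RATIO / 149_597_870.7      # ~10 AU, near Saturn's orbit

print(f"Sun lap: {sun_lap_s:.1f} s, star lap: {star_lap_h:.1f} h, "
      f"volume: {volume_ratio:,} Suns, radius: {radius_au:.1f} AU")
```

The numbers in the text all check out: the lap takes nearly nine hours, the volume ratio is just shy of ten billion, and the star's surface would indeed sit out around Saturn's orbit.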

This puts Stephenson 2-18 almost off the Hertzsprung-Russell Diagram -- it's in the extreme upper right corner.  In fact, it's larger than what stellar evolution says should be possible; current models predict that the largest stars should have radii no more than about 1,500 times the Sun's, and this behemoth's radius is over 2,000 times the Sun's.

Astronomers admit that this could have a simple explanation -- it's possible that the measurements of Stephenson 2-18 are overestimates.  But if not, there's something significant about stellar evolution we're not understanding.

Either way, this is one interesting object.

There's also a question about what Stephenson 2-18 will do next.  Astrophysicists suspect it might be about to blow off its outer layers and turn either into a luminous blue variable or a Wolf-Rayet star (the latter are so weird and violent I wrote about them here a while back).  So it may not be done astonishing us.

Puts me in mind of the quote from Richard Dawkins: "The feeling of awed wonder that science can give us is one of the highest experiences of which the human psyche is capable."

****************************************



Monday, May 1, 2023

The kludge factory

Know what a kludge is?

Coined by writer Jackson Granholm in 1962, a kludge is "an ill-assorted collection of poorly-matching parts, forming a distressing whole."  Usually created when a person is faced with fixing something and lacks (1) the correct parts, (2) the technical expertise to do it right, or (3) both, kludges fall into the "it works well enough for the time being" category.

[Image licensed under the Creative Commons Zoedovemany, Screen Shot 2015-11-19 at 11.54.48 AM, CC BY-SA 4.0]

Evolution is essentially a giant kludge factory.

At its heart, it's the "law of whatever works."  It's why the people who advocate Intelligent Design Creationism always give me a chuckle -- because if you know anything about biology, "intelligently designed" is the last thing a lot of it is.  Here are a few examples:

  • Animals without hind legs -- notably whales and many snakes -- that have vestigial hind leg bones.
  • Primates are some of the only mammals that cannot synthesize their own vitamin C -- yet we still carry the gene for making it.  It just doesn't work, because it's been disabled by mutations.
  • Human sinuses.  Yeah, you allergy sufferers know exactly what I'm saying.
  • The recurrent laryngeal nerve in fish follows a fairly direct path, from the brain past the heart to the gills.  However, when fish evolved into land-dwelling forms and their anatomy changed -- their necks lengthening and their hearts moving lower into the body -- the recurrent laryngeal nerve got snagged on the circulatory system and had to lengthen as its path became more and more circuitous.  Now, in giraffes (for example), rather than going from the brain directly to the larynx, it goes right past its destination, loops under the heart, and then back up the neck to the larynx -- a distance of almost five meters.
  • Our curved lower spines were clearly not "designed" to support a vertically-oriented body.  Have you ever seen a weight-bearing column with an s-bend?  No wonder so many of us develop lower back issues.
  • One of the kludgiest of kludges is the male genitourinary tract.  Not only does the vas deferens loop way upward from the testicles (not quite as far as the giraffe's laryngeal nerve, admittedly), along the way it joins the urethra to form a single tube through the penis, something about which a friend of mine quipped, "There's intelligent design for you.  Routing a sewer pipe through a playground."  It also passes right through the prostate, a structure notorious for getting enlarged in older guys.  C'mon, God, you can do better than that.

The reason all this comes up is that the kludging goes all the way down to the molecular level.  A study from a team at Yale, Harvard, and MIT that appeared last week in the journal Science found that when you compare the human genome to those of our nearest relatives, one of the most significant differences is that our DNA has sections deleted.

That's right; some of why humans are human comes from genes that got knocked out in our ancestors.

The researchers found that there are about ten thousand bits of DNA, a lot of them consisting only of a couple of base pairs, that chimps and bonobos have and we don't.  A lot of these genetic losses were in regions involved in cognition, speech, and the development of the nervous system, all areas in which our differences are the most obvious.

The reason seems to have to do with gene switching.  Deleting a bit of switch that is intended to shut a gene off can leave the gene functioning for longer, with profound consequences.  Often these consequences are bad, of course.  There are some types of cancer (notably retinoblastoma) that are caused by a developmental gene having a faulty set of brakes.

But sometimes these changes in developmental patterns have a positive result, and therefore a selective advantage -- and we may owe our large brains and capacity for speech to kludgy switches.

"Often we think new biological functions must require new pieces of DNA, but this work shows us that deleting genetic code can result in profound consequences for traits [that] make us unique as a species," said Steven Reilly, senior author of the paper.  "The deletion of this genetic information can have an effect that is the equivalent of removing three characters -- n't -- from the word isn't to create the new word is...  [Such deletions] can tweak the meaning of the instructions of how to make a human slightly, helping explain our bigger brains and complex cognition."

So yet another nail in the coffin of Intelligent Design Creationism, if you needed one.  Of course, I doubt it will convince anyone who wasn't already convinced; as I've observed more than once, you can't logic your way out of a belief you didn't logic your way into.

But at least it's good to know the science is unequivocal.  And, as astrophysicist Neil deGrasse Tyson said, "The wonderful thing about science is that it's true whether or not you believe in it."

****************************************



Saturday, April 29, 2023

Pitch perfect

Consider the simple interrogative English sentence, "She gave the package to him today?"

Now, change one at a time which word is stressed:

  • "She gave the package to him today?"
  • "She gave the package to him today?"
  • "She gave the package to him today?"
  • "She gave the package to him today?"
  • "She gave the package to him today?"

English isn't a tonal language -- where patterns of rise and fall of pitch change the meaning of a word -- but stress (usually as marked by pitch and loudness changes) sure can change the connotation of a sentence.  In the above example, the first one communicates incredulity that she was the one who delivered the package (the speaker expected someone else to do it), while the last one clearly indicates that the package should have been handed over some other time than today.

In tonal languages, like Mandarin, Thai, and Vietnamese, pitch shifts within words completely change the word's meaning.  In Mandarin, for example, mā (the vowel spoken with a high level tone) means "mother," while mǎ (the vowel spoken with a dip in tone in the middle, followed by a quick rise) means "horse."  While this may sound complex to people -- like myself -- who don't speak a tonal language, if you learn it as a child it simply becomes another marker of meaning, like the stress shifts I gave in my first example.  My guess is that if you're a native English speaker and heard any of the above sentences spoken aloud, you wouldn't even have to think about what subtext the speaker was trying to communicate.

What's interesting about all this is that because most of us learn spoken language when we're very little, which language(s) we're exposed to alters the wiring of the language-interpretive structures in our brain.  Exposed to distinctive differences early (like the tonal shifts of Mandarin), our brains adjust to handle those differences and interpret them easily.  It works the other way, too; the Japanese liquid consonant /ɾ/, such as the second consonant in the city name Hiroshima, is usually transcribed into English as an "r," but the sound it represents is often described as halfway between an English /r/ and an English /l/.  Technically, it's an apico-alveolar tap -- similar to the middle consonant in the most common American English pronunciation of bitter and butter.  The fascinating part is that monolingual Japanese children lose the sense of a distinction between /r/ and /l/, and when learning English as a second language, not only do they often have a hard time pronouncing them as different phonemes, they have a hard time hearing the difference when listening to native English speakers.

All of this is yet another example of the Sapir-Whorf hypothesis -- that the language(s) you speak alter your neurology, and therefore how you perceive the world -- something I've written about here before.

The reason all this comes up is a study in Current Biology this week showing that the language we speak modifies our musical ability -- and that speakers of tonal languages show an enhanced ability to remember melodies, but a decreased ability to mimic rhythms.  Makes sense, of course; if tone carries meaning in the language you speak, it's understandable your brain pays better attention to tonal shifts.

The rhythm thing, though, is interesting.  I've always had a natural rhythmic sense; my bandmate once quipped that if one of us played a wrong note, it was probably me, but if someone screwed up the rhythm, it was definitely her.  Among other styles, I play a lot of Balkan music, which is known for its oddball asymmetrical rhythms -- such wacky time signatures as 7/8, 11/16, 18/16, and (I kid you not) 25/16.
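
For readers unfamiliar with these meters: they're "additive," built from unequal cells of two and three fast pulses rather than from equal beats.  A little Python sketch of two common groupings (these are conventional patterns for the dances named; individual tunes and regions vary):

```python
# Common additive groupings for two Bulgarian dance meters.
# "X" marks an accented pulse, "." an unaccented one.
meters = {
    "7/8 (rachenitsa)": [2, 2, 3],
    "11/16 (kopanitsa)": [2, 2, 3, 2, 2],
}

for name, cells in meters.items():
    pattern = " ".join("X" + "." * (n - 1) for n in cells)
    print(f"{name}: {'+'.join(map(str, cells))} = {sum(cells)} pulses  |  {pattern}")
```

Counting the accents rather than the individual pulses is what gives these tunes their characteristic limping, asymmetrical feel.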


I picked up Balkan rhythms really quickly.  I have no idea where this ability came from.  I grew up in a relatively non-musical family -- neither of my parents played an instrument, and while we had records that were played occasionally, nobody in my extended family has anywhere near the passion for music that I do.  I have a near-photographic memory for melodies, and an innate sense of rhythm -- whatever its source.

In any case, the study is fascinating, and gives us some interesting clues about the link between language and music: the language we speak remodels our brains and changes how we hear and understand the music we listen to.  The two are deeply intertwined, there's no doubt about that; singing is a universal phenomenon.  And making music of other sorts goes back at least to our Neanderthal cousins, on the order of forty thousand years ago, to judge by the Divje Babe bone flute.

I wonder how this might be connected to which music we react to emotionally.  This is something I've wondered about for ages: why certain music (a good example for me is Stravinsky's Firebird) creates a powerful emotional reaction, while other pieces generate nothing more than a shoulder shrug.

Maybe I need to listen to Firebird and ponder the question further.

****************************************