Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, December 13, 2022

Timey-wimey light

I don't always need to understand things to appreciate them.

In fact, there's a part of me that likes having my mind blown.  I find it reassuring that the universe is way bigger and more complex than I am, and the fact that I actually can parse a bit of it with my little tiny mind is astonishing and cool.  How could it possibly be surprising that there's so much more out there than the fragment of it I can comprehend?

This explains my love for twisty, complicated fiction, in which you're not handed all the answers and everything doesn't get wrapped up with a neat bow at the end.  It's why I thoroughly enjoyed the last season of Doctor Who, the six-part story arc called "Flux."  Apparently it pissed a lot of fans off because it had a quirky, complicated plot that left a bunch of loose ends, but I loved that.  (I'm also kind of in love with Jodie Whittaker's Thirteenth Doctor, but that's another matter.)

I don't feel like I need all the answers.  Not only am I fine with having to piece together what exactly happened to whom, I'm also okay that sometimes I don't know.  You just have to accept that even with all the information right there in front of you, it's still not enough to figure everything out.

Because, after all, that's how the universe itself is.

[Nota bene: Please don't @ me about how much you hated Flux, or how I'm crediting Doctor Who showrunner Chris Chibnall with way too much cleverness by comparing his work to the very nature of the universe.  For one thing, you're not going to change my mind.  For another, I can't be arsed to argue about a matter of taste.  Thanks.]

In any case, back to actual science.  That sense of reality being so weird and complicated that it's beyond my grasp is why I keep coming back to the topic of quantum physics.  It is so bizarrely counterintuitive that a lot of laypeople hear about it, scoff, and say, "Okay, that can't be real."  The problem with the scoffers is that although sometimes we're not even sure what the predictions of quantum mechanics mean, they are superbly accurate.  It's one of the most thoroughly tested scientific models in existence, and it has passed every test.  There are measurements made using the quantum model that have been demonstrated to align with the predictions to the tenth decimal place.

That's a level of accuracy you find almost nowhere else in science.

All this wild stuff comes up because of a pair of papers (both still in peer review) that claim to have demonstrated something damn near incomprehensible -- the researchers say they have successfully split a photon and then triggered half of it to move backwards in time.

One of the biggest mysteries in physics is the question of the "arrow of time," a conundrum about which I wrote in some detail earlier this year.  The gist of the problem -- and I refer you to the post I linked if you want more information -- is that the vast majority of the equations of physics are time-reversible.  They work equally well backwards and forwards.  A simple example is that if you drop a ball with zero initial velocity, it will reach a speed of 9.8 meters per second after one second; if you toss a ball upward with an initial velocity of 9.8 meters per second, after one second it will have decelerated to a velocity of zero.  If you had a film clip of the two trajectories, the first one would look exactly like the second one running backwards, and vice versa; the physics works the same forwards as in reverse.
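The falling-ball example can even be put in code.  Here's a minimal sketch (my own illustration, not from the post I linked) of constant-acceleration kinematics, checking that the dropped ball's "film" matches the tossed ball's film run backwards:

```python
# Time-reversibility of constant-acceleration motion, as in the
# dropped-ball / tossed-ball example.  Speeds in m/s, time in seconds.

G = 9.8  # gravitational acceleration, m/s^2

def drop_speed(t):
    """Speed of a ball dropped from rest, t seconds after release."""
    return G * t

def toss_speed(t):
    """Speed of a ball tossed upward at 9.8 m/s, t seconds after release."""
    return 9.8 - G * t

# After one second, the dropped ball has reached 9.8 m/s...
assert abs(drop_speed(1.0) - 9.8) < 1e-9
# ...and the tossed ball has decelerated to zero.
assert abs(toss_speed(1.0)) < 1e-9

# The clip of the drop matches the clip of the toss played in reverse,
# instant by instant: drop_speed(t) == toss_speed(1 - t) for every t.
for i in range(11):
    t = i / 10
    assert abs(drop_speed(t) - toss_speed(1.0 - t)) < 1e-9
```

The equations have no preferred direction here; only the labels "forwards" and "backwards" on the film clip distinguish the two trajectories.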

The question, then, is why this is so different from our experience.  We remember the past and don't know the future.  The physicists tell us that time is reversible, but it sure as hell seems irreversible to us.  If you see a ball falling, you don't think, "Hey, you know, that could be a ball thrown upward with time running backwards."  (Well, I do sometimes, but most people don't.)  The whole thing bothered Einstein no end.  "The distinction between past, present, and future," he said, "is only an illusion, albeit a stubbornly persistent one."

This skew between our day-to-day experience and what the equations of physics describe is why the recent papers are so fascinating.  What the researchers did was to take a photon, split it, and allow the two halves to travel through a crystal.  During its travels, one half had its polarity reversed.  When the two pieces were recombined, it produced an interference pattern -- a pattern of light and dark stripes -- only possible, the physicists say, if the reversed-polarity photon had actually been traveling backwards in time as it traveled forwards in space.

The scientists write:

In the macroscopic world, time is intrinsically asymmetric, flowing in a specific direction, from past to future.  However, the same is not necessarily true for quantum systems, as some quantum processes produce valid quantum evolutions under time reversal.  Supposing that such processes can be probed in both time directions, we can also consider quantum processes probed in a coherent superposition of forwards and backwards time directions.  This yields a broader class of quantum processes than the ones considered so far in the literature, including those with indefinite causal order.  In this work, we demonstrate for the first time an operation belonging to this new class: the quantum time flip.

This takes wibbly-wobbly-timey-wimey to a whole new level.


Do I really understand what happened here on a technical level?  Hell no.  But whatever it is, it's cool.  It shows us that our intuition about how things work is wildly and fundamentally incomplete.  And I, for one, love that.  It's amazing that not only are there things out there in the universe that are bafflingly weird, we're actually making some inroads into figuring them out.

To quote the eminent physicist Richard Feynman, "I can live with doubt and uncertainty and not knowing.  I think it's much more interesting to live not knowing than to have answers which might be wrong.  I have approximate answers and possible beliefs and different degrees of certainty about different things, but I'm not absolutely sure about anything."

To which I can only say: precisely.  (Thanks to the wonderful Facebook pages Thinking is Power and Mensa Saskatchewan for throwing this quote my way -- if you're on Facebook, you should immediately follow them.  They post amazing stuff like this every day.)

I'm afraid I am, and will always be, a dilettante.  There are only a handful of subjects about which I feel any degree of confidence in my depth of comprehension.  But that's okay.  I make up for my lack of specialization by being eternally inquisitive, and honestly, I think that's more fun anyhow.

Three hundred years ago, we didn't know atoms existed.  It was only in the early twentieth century that we figured out their structure, and that they aren't the little solid unbreakable spheres we thought they were.  (That concept is still locked into the word "atom" -- it comes from a Greek word meaning "can't be cut.")  Since then, we've delved deeper and deeper into the weird world of the very small, and what we're finding boggles the mind.  My intuition is that if you think it's gotten as strange as it can get, you haven't seen nothin' yet.

I, for one, can't wait.

****************************************


Monday, December 12, 2022

The origins of Thule

There's a logical fallacy called appeal to authority, and it's trickier than it sounds at first.

Appeal to authority occurs when you state that a claim is correct solely because it was made by someone who has credentials, prestige, or fame.  Authorities are, of course, only human, and make mistakes just like the rest of us, so the difficulty lies in part with the word "solely."  If someone with "M.S., Ph.D." after their name makes a declaration, those letters alone aren't any kind of argument that what they've said is correct, unless they have some hard evidence to back them up.

There's a subtler piece of this, though, and it comes in two parts.  The first is that because scientific research has become increasingly technical, jargon-dense, and specialized, laypeople sometimes are simply unqualified to evaluate whether a claim within a field is justified.  If Kip Thorne, Lee Smolin, or Steven Weinberg were to tell me about some new discovery in theoretical physics, I would be in well over my head (despite my B.S. in physics) and ridiculously out of line to say, "No, that's not right."  At that point, I don't have much of a choice but to accept what they say for the time -- and hope that if it is incorrect, further research and the peer-review process will demonstrate that.  This isn't so much avoiding appeal to authority as it is accepting that bias as an inevitable outcome of my own incomplete knowledge.

The second problem is that sometimes, people who are experts in one field will make statements in another, cashing in on their fame and name recognition to give unwarranted credence to a claim they are unqualified to make.  A good, if disquieting, example of this is the famous molecular geneticist James Watson.  As the co-discoverer of both the double-helical structure of the DNA molecule and the genetic code, anything he had to say about genetic biochemistry should carry considerable gravitas.  On the other hand, he's moved on to making pronouncements about (for example) race that are nothing short of repellent -- including, "I am inherently gloomy about the prospect of Africa [because] all our social policies are based on the fact that their intelligence is the same as ours, whereas all the testing says not really."  Believing this statement "because James Watson said it, and he's a famous scientist" is appeal to authority at its worst.  In fact, he is wildly unqualified to make any such assessment, and the statement reveals little more than the fact that he's an asshole.  (In fact, in 2019 that statement and others like it, including ones reflecting blatant sexism, resulted in Watson being stripped of all his honorary titles by Cold Spring Harbor Laboratory.)

My point here is that appeal to authority is sometimes difficult to pin down, which is why we have to rely on knowledgeable people policing each other.  Which brings us to philologist Andrew Charles Breeze.

Breeze has been a professor of philology at the University of Navarra for thirty-five years, and is a noted scholar of the classics.  His knowledge of Celtic languages, especially as used in ancient Celtic literature, is superb.  But he's also, unfortunately, known for his adherence to hypotheses based on evidence that is slim at best.  One example is his claim that the beautiful Welsh legend cycle The Mabinogion was written by a woman, Gwenllian ferch Gruffydd, daughter of Gruffydd ap Cynan, Prince of Gwynedd.  This claim has proven controversial to say the least.  He also has championed the idea that King Arthur et al. lived, fought, and died in Strathclyde rather than in southwestern England, a claim that has been roundly scoffed at.  Even Arthur's existence is questionable, given that his earliest mention in extant literature is Nennius's Historia Brittonum, which was written in 830 C.E., four hundred years after Arthur was allegedly King of the Britons.  As far as where he lived -- well, it seems to me that establishing if he lived is the first order of business.  

But even making the rather hefty assumption that the accounts of Nennius are true, we still have a problem with Breeze's claim.  Arthur's enemies the Saxons didn't really make any serious incursions into Strathclyde until the early seventh century, so an Arthur in Strathclyde would be in the position of fighting the Battle of Badon Hill against an enemy who wasn't there at the time. 

Awkward.

Anyhow, my point is that Breeze kind of has a reputation for putting himself out on the edge.  Nothing wrong with that; that's why we have peer review.  But I also have to wonder about people who keep making claims with flimsy evidence.  You'd think they'd become at least a little more cautious.

Why this comes up is that Breeze just made yet another claim, and this one is on a topic about which I'm honestly qualified to comment in more detail.  It has to do with the origin of the word "Thule."  You probably know that Thule is the name given in classical Greek and Roman literature to the "most northern place."  It was written in Greek as Θούλη, and has been identified variously as the Faeroe Islands, the Shetland Islands, northern Scotland, Greenland, Iceland, Norway, Finnish Lapland, an "area north of Scythia," the island of Saaremaa (off the coast of Estonia), and about a dozen other places.  The problem is -- well, one of many problems is -- there's no archaeological or linguistic evidence that the Greeks ever went to any of those places.  In the absence of hard evidence, you could claim that Thule was on Mars and your statement would carry equivalent weight.

Another difficulty is that even in classical times, the first source material mentioning Thule, written by Pytheas of Massalia, was looked at with a dubious eye.  The historian Polybius, writing only a century and a half after Pytheas's time, scathingly commented, "Pytheas... has led many people into error by saying that he traversed the whole of Britain on foot, giving the island a circumference of forty thousand stadia, and telling us also about Thule, those regions in which there was no longer any proper land nor sea nor air, but a sort of mixture of all three of the consistency of a jellyfish in which one can neither walk nor sail, holding everything together, so to speak."

Well, Breeze begs to differ.  In a recent paper, he said that (1) Thule is for sure Iceland, and (2) the Greeks (specifically Pytheas and his pals) got to Iceland first, preceding the Vikings by a thousand years.

[Image is in the Public Domain]

Bold claim, but there are a number of problems with it.

First, he seems to be making this claim based on one thing -- that the Greek word for Thule (Θούλη) is similar to the Greek word for altar (θῠμέλη), and that the whole thing was a transcription error in which the vowel was changed (ού substituted for ῠ) and the middle syllable (μέ) dropped.  Well, this is exactly the kind of thing I specialized in during my graduate studies, and I can say unequivocally that's not how historical linguistics works.  You can't just jigger around syllables in a couple of words and say "now they're the same, q.e.d."

He says his idea is supported by the fact that from the sea, the southern coast of Iceland looks kind of like an altar:

The term Thymele may have arisen from the orographic features of the south of the island, with high cliffs of volcanic rock, similar to that of Greek temple altars.  Probably, when Pytheas and his men sighted Iceland, with abundant fog, and perhaps with columns of smoke and ashes from volcanoes like Hekla, he thought of the altar of a temple.

This is what one of my professors used to call "waving your hands around in the hopes of distracting the audience into thinking you have evidence."  Also, the geologists have found evidence of only one major eruption in Iceland during Pytheas's lifetime -- the Mývatn eruption in around 300 B.C.E. -- and it occurred in the north part of Iceland, over three hundred kilometers from the southern coast of the island.

Oops.

Second, there's where the paper is published -- the Housman Society Journal, which is devoted to the study of the works of British classicist and poet A. E. Housman.  If Breeze's claim were all that and a bag of crisps, why hasn't it been published in a peer-reviewed journal devoted to historical linguistics?

Third, there's another classical reference to Thule that puts Breeze's claim on even thinner ice, which is from Strabo's Geographica, and states that when Pytheas got to Thule, he found it already thickly inhabited.  There is zero evidence that Iceland had any inhabitants prior to the Vikings -- it may be that the Inuit had summer camps in coastal western Iceland, but that is pure speculation without any hard evidential support.  The earliest Norse writings about Iceland describe it as "a barren and empty land, devoid of people."  Despite all this, Strabo writes:

The people [of Thule] live on millet and other herbs, and on fruits and roots; and where there are grain and honey, the people get their beverage, also, from them.  As for the grain, he says, since they have no pure sunshine, they pound it out in large storehouses, after first gathering in the ears thither; for the threshing floors become useless because of this lack of sunshine and because of the rains.

Oops again.

I can say from experience that establishing linguistic evidence for contact between two cultures is difficult, requires rigorous evidence, and can easily be confounded by chance similarities between words.  My own work, which involved trying to figure out the extent to which Old Norse infiltrated regional dialects of Old English and Archaic Gaelic, was no easy task (and was made even more difficult by the fact that two of the languages, Old Norse and Old English, share a relatively recent common root language -- Proto-Germanic -- so if you see similarities, are they due to borrowing or parallel descent?  Sometimes it's mighty hard to tell).

I'm not in academia and I'm in no position to write a formal refutation of Breeze's claim, but I sure as hell hope someone does.  Historical linguistics is not some kind of bastard child of free association and the game of Telephone.  I've no doubt that Breeze's expertise in the realm of ancient Celtic literature is far greater than mine -- but maybe he should stick to that subject.

****************************************


Saturday, December 10, 2022

Christmas cheer

It sometimes comes as a shock to my friends and acquaintances when they find out that even though I'm a staunch unbeliever in anything even resembling organized religion, I love Christmas music.

Well, some Christmas music.  There are modern Christmas songs that make me want to stick any available objects in my ears, even if those objects are fondue forks.  Abominations like "I Saw Mommy Kissing Santa Claus" leave me leery of entering any public spaces with ambient music from November 15 to December 25.  In my opinion, there should be jail time associated with writing lines like, "Little tin horns and little toy drums, rooty-toot-toot and rummy-tum-tum," and whoever wrote "Let It Snow" should be pitched, bare-ass naked, head-first into a snowdrift.

Each year I participate in something called the Little Drummer Boy Challenge, which is a contest to see if you can make it from Thanksgiving to Christmas without once hearing "The Little Drummer Boy."  So far, I'm still in the game this year, although it must be said that I've done this for nine years and have hardly ever survived.  I've never been taken out as ignominiously, though, as I was a few years ago, when I made it all the way to the week before Christmas, and stopped by a hardware store to pick some stuff up.  And while I was waiting to check out, a stock clerk walked by jauntily singing the following:

Come, they LA LA pah-rum-puh-pum-pum
A newborn LA LA LA pah-rum-puh-pum-pum
LA LA LA gifts we bring pah-rum-puh-pum-pum
LA LA before the king pah-rum-puh-pum-pum, rum-puh-pum-pum, rum-puh-pum-pum

Dude didn't even know all the damn lyrics, but I had to play fair and admit I'd been felled by the Boy one more time.  Before I could stop myself, I glared at him and said, "Are you fucking kidding me right now?" in a furious voice, which led to a significant diminishment of the Christmas cheer in the store, but I maintain to this day I had ample justification.  The alarmed stock clerk scurried off, clearly afraid that if he stuck around much longer, the Batshit Crazy Scruffy Blond Customer was going to Deck his Halls but good.

I know this makes me sound like a grumpy curmudgeon.  I can accept that, because I am a grumpy curmudgeon.  But even so, I absolutely love a lot of Christmas music.  I think "O Holy Night" is a stunning piece of music, and "Angels We Have Heard On High" is incredible fun to sing (as long as it's not sung like a dirge, but as the expression of joy consistent with the lyrics).  Speaking of doing things the right way, check out Annie Lennox's stupendous music video of "God Rest Ye Merry, Gentlemen:"


Despite the impression I probably gave at the start of this post, the list of Christmas songs I like is way longer than the list of ones I don't.  I grew up singing wonderful French carols like "Il Est Né, Le Divin Enfant" and "Un Flambeau, Jeanette Isabella," and to this day hearing those songs makes me smile.

And I can include not only seasonal religious music, but religious music in general, in this discussion; one of my favorite genres of music is Renaissance and Baroque religious music, especially the works of William Byrd, Henry Purcell, J. S. Bach, William Cornysh, Giovanni de Palestrina, and Thomas Tallis.  If you want to hear something truly transcendent, listen to this incredible performance of Tallis's Spem in Alium ("Hope in Another"), a forty-part motet here sung by seven hundred people:


I know it might seem like a contradiction for a non-religious person to thoroughly enjoy such explicitly religious music, but in my opinion, beauty is beauty wherever you find it.  I can be moved to tears by Bach's Mass in B Minor without necessarily believing the story it tells.  And it also pleases me that it gives me common ground with my friends who do believe, for whom the lovely "Mary's Boy Child" isn't just a cool calypso tune, but a joyous expression of something near and dear to them.

I guess I'm a bit of a contradiction in terms sometimes, but that's okay.  I still deeply resent any attempt to force belief on others (or lack of belief, for that matter), and my anger runs deep at the damage done, and still being done, by the religious to members of the LGBTQ community.  The likelihood of my ending up back in church is minuscule at best.

Even so, I still love the holiday season.  It's a chance to give gifts and express my appreciation for my friends and family, and to enjoy the pretty decorations and sweet music.  Honestly, I think a lot of us godless heathens feel the same way, which is why I'm glad to see that this year -- so far, at least -- the Religious Right has backed off on the whole idiotic "War On Christmas" nonsense.  After all, it's been, what, fifteen years or so since Bill O'Reilly gave the clarion call that the Atheists Were Comin' For Your Christmas Trees, and if you'll look around you'll notice that everyone's still saying "Merry Christmas" and giving gifts and everything else just like they've always done, so the whole trope has finally fallen a little flat.  It couldn't have gone any other way, honestly.  A great many of us atheistic types are also pretty dedicated to live-and-let-live, and most of us don't care if you have Christmas displays in your front yard so bright they disrupt nearby air traffic, as long as you're not going to pull out your AR-15 when a non-believer says "Happy Holidays" instead of "Merry Christmas."

I do, however, draw the line at piping in "The Little Drummer Boy" over mall loudspeakers.  That's just a bridge too far.  I mean, what kind of stupid song is that, anyhow?  It's about a kid who sees a mom and dad with a quietly sleeping newborn baby, and thinks, "You know what these people need?  A drum solo."

In my opinion, Mary would have been well in her rights to smack him over the head with the frankincense.  Pah-rum-puh-pum-pow, you odious little twerp.

****************************************


Friday, December 9, 2022

It's a bird, it's a plane... no, it's both

One topic I've come back to over and over again here at Skeptophilia is how flawed our sensory and perceptual apparatus is.  Oh, it works well enough; most of the time, we perceive the external world with sufficient clarity not to walk into walls or get run over by oncoming trains.  But our impression that we experience the world as it is -- that our overall ambient sense of everything around us, what the brilliant neurophysiologist David Eagleman calls our umwelt, is a crystal-clear reflection of the real universe -- is simply false.

All it takes is messing about with optical illusions to convince yourself how easy our brains and sensory organs are to fool.  For example, in the following drawing, which is darker: square A or square B?


They're exactly the same.  Don't believe me?  Here's the same drawing, with a pair of gray lines superimposed on it:



Because your brain decided that B was in the shadow and A wasn't, it concluded that A had to be intrinsically darker.  What baffles me still about this illusion is that even once you know how the trick works, it's impossible to see it any other way.

As astronomer Neil deGrasse Tyson put it, "Our brains are rife with ways of getting it wrong.  You know optical illusions?  That's not what they should call them.  They should call them brain failures.  Because that's what they are.  A few cleverly drawn lines, and your brain can't handle it."

Well, we just got another neat hole shot in our confidence that what we're experiencing is irrefutable concrete reality with a study that appeared in the journal Psychological Science this week.  What the researchers did was attempt to confound the senses of sight and hearing by showing test subjects a photograph of one object morphing into another -- say, a bird into an airplane.  During the time they studied the photograph, they were exposed to a selection from a list of sounds, two of which were relevant (birdsong and the noise of a jet engine) and a number of which were irrelevant distractors (like a hammer striking a nail).

They were then told to use a sliding scale to estimate where in the transformation of bird-into-airplane the image was (e.g. seventy percent bird, thirty percent airplane).  What the researchers found was that people were strongly biased by what they were hearing; birdsong biased the test subjects to overestimate the birdiness of the photograph, and the reverse happened with the sound of a jet engine.  The irrelevant noises didn't affect the estimates (and thus, when exposed to the irrelevant noises, the subjects' visual perceptions of the image were more accurate).

"When sounds are related to pertinent visual features, those visual features are prioritized and processed more quickly compared to when sounds are unrelated to the visual features," said Jamal Williams, of the University of California - San Diego, who led the study, in an interview with Science Daily.  "So, if you heard the sound of a birdsong, anything bird-like is given prioritized access to visual perception.  We found that this prioritization is not purely facilitatory and that your perception of the visual object is actually more bird-like than if you had heard the sound of an airplane flying overhead."

I guess it could be worse; at least hearing birdsong didn't make you see a bird that wasn't there.  But it does once again make me wonder how eyewitness testimony is still considered to carry the most weight in a court of law when experiment after experiment has demonstrated not only how incomplete and easily biased our perceptions are, but how flawed our memories are.

Something to keep in mind next time you are tempted to say "I know it happened that way, I saw it with my own eyes."

****************************************


Thursday, December 8, 2022

Death metal bat

My favorite wild animals are bats.

I think the flying fox -- a large diurnal species of fruit bat -- has got to be one of the coolest animals in the world.  Think about how amazing it would be, being a flying fox.  You have great big wings and can fly anywhere you want, you get to eat figs and dates all day, and you're cute as the dickens.  What could be better than that?

Fruit-eating sky puppies, is what they are.

[Image licensed under the Creative Commons Trikansh sharma, Eye contact with flying fox, CC0 1.0]

Unfortunately, bats in general have gotten a bad name, even though they're unequivocally beneficial.  (The insectivorous kinds can eat up to a thousand small flying insects -- including disease-carrying mosquitoes -- in an hour.)   The negative reputation comes from two sources: first, an association with drinking blood (only three out of the thousand species of bats do that; all three live in South America and almost never bite humans); and second, that they carry rabies (which can happen -- but so do raccoons, foxes, skunks, feral cats and dogs, and even deer).

Bats are good guys.  They're also incredibly cool.  I did a piece last year about the wild adaptations for echolocating in nocturnal bats, an ability I still find mind-boggling.  Which is why I was so psyched to run across a paper this week in PLOS-Biology about the fact that their ability to produce such an amazing array of sounds is due to the same feature death metal singers use to get their signature growl. 

In "Bats Expand Their Vocal Range By Recruiting Different Laryngeal Structures for Echolocation and Social Communication," biologists Jonas Håkansson, Cathrine Mikkelsen, Lasse Jakobsen, and Coen Elemans, of the University of Southern Denmark, write:

Echolocating bats produce very diverse vocal signals for echolocation and social communication that span an impressive frequency range of 1 to 120 kHz or 7 octaves.  This tremendous vocal range is unparalleled in mammalian sound production and thought to be produced by specialized laryngeal vocal membranes on top of vocal folds.  However, their function in vocal production remains untested. By filming vocal membranes in excised bat larynges (Myotis daubentonii) in vitro with ultra-high-speed video (up to 250,000 fps) and using deep learning networks to extract their motion, we provide the first direct observations that vocal membranes exhibit flow-induced self-sustained vibrations to produce 10 to 95 kHz echolocation and social communication calls in bats.  The vocal membranes achieve the highest fundamental frequencies (fo’s) of any mammal, but their vocal range is with 3 to 4 octaves comparable to most mammals.  We evaluate the currently outstanding hypotheses for vocal membrane function and propose that most laryngeal adaptations in echolocating bats result from selection for producing high-frequency, rapid echolocation calls to catch fast-moving prey.  Furthermore, we show that bats extend their lower vocal range by recruiting their ventricular folds—as in death metal growls—that vibrate at distinctly lower frequencies of 1 to 5 kHz for producing agonistic social calls.  The different selection pressures for echolocation and social communication facilitated the evolution of separate laryngeal structures that together vastly expanded the vocal range in bats.
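As a quick sanity check of that "7 octaves" figure (my own arithmetic, not the paper's): an octave is a doubling of frequency, so the number of octaves between two frequencies is just the base-2 logarithm of their ratio:

```python
import math

def octaves(f_low, f_high):
    """Number of octaves (frequency doublings) between two frequencies."""
    return math.log2(f_high / f_low)

# Full bat vocal range, 1-120 kHz: about 6.9 octaves -- the paper's "7 octaves."
print(round(octaves(1, 120), 1))

# Vocal-membrane range, 10-95 kHz: about 3.2 octaves -- the "3 to 4 octaves"
# the authors call comparable to most mammals.
print(round(octaves(10, 95), 1))
```

For scale, a piano spans a bit over seven octaves (roughly 27.5 Hz to 4.2 kHz) -- so bats cover a comparable number of doublings, just shifted far above the ceiling of human hearing.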

NPR did a story on the research, and followed it up by talking to some death metal singers, all of whom were pretty fascinated to find out bats can do it, too.  "In a [masochistic] sort of way ... I think that when I can feel that my vocal cords are getting kind of shredded or beat up, that it sounds better," said Chase Mason, lead singer of the band Gatecreeper.  "You know, like, if there's a little taste of blood in the back of my throat, I think that I'm doing a good job...  A lot of people will compare you to sounding like a bear or something like that, like an animal growling or roaring even... I think it's cool.  It's very dark and gothic.  The imagery of a bat is always associated with the darker sort of things, like vampires and stuff.  So it definitely makes sense."

I still favor the Sky Puppy model of bats, but hey, I'm not arguing with a guy who can make noises like Chase Mason can.

In any case, add one more thing to the "cool" column for bats, which was pretty lengthy already.  It's incredible that however much we learn about nature, it always finds ways to surprise us.  That's why, if you have a curious side, you should learn some science -- you'll never be short of new things to wonder at.

****************************************


Wednesday, December 7, 2022

Swearing off

I've been fascinated with words ever since I can remember.  It's no real mystery why I became a writer, and (later) got my master's degree in historical linguistics; I've lived in the magical realm of language ever since I first learned how to use it.

Languages are full of curiosities, which is my impetus for doing my popular daily bit called #AskLinguisticsGuy on TikTok.  And one of the posts I've done that got the most views was a piece on "folk etymology" -- stories invented (with little or no evidence) to explain word origins -- specifically, that the word "fuck" does not come from the acronym for "Fornication Under Consent of the King."

The story goes that in bygone years, when a couple got married, if the king liked the bride's appearance, he could claim the right of "jus primae noctis" (popularly "prima nocta," also called "droit du seigneur"), wherein he got to spend the first night of the marriage with the bride.  (Apparently this did occasionally happen, but it wasn't especially common.)  Afterward -- and now we're in the realm of folk etymology -- the king gave his official permission for the bride and groom to go off and amuse themselves as they wished, at which point he stamped the couple's marriage documents "Fornication Under Consent of the King," meaning it was now legal for the couple to have sex with each other.

This bit, of course, is pure fiction.  The truth is that the word "fuck" probably comes from a reconstructed Proto-Germanic root *fug meaning "to strike."  There are cognates (words in related languages descended from the same root) in just about every Germanic language there is.  The acronym explanation is one hundred percent false, but you'll still see it claimed (which is why I did a TikTok video on it).

The whole subject of taboo words is pretty fascinating, and every language has 'em.  Most cultures have some level of taboo surrounding sex and other private bodily functions, but there are some odd ones.  In Québecois French, for example, the swear word that will get your face slapped by your prudish aunt is tabarnak! (from "tabernacle"), which is the emotional equivalent of the f-bomb, but comes (obviously) from religious practice, not sex.  Interestingly, Québecois French has adopted the English f-word in the phrase j'ai fucké ça, which is considered pretty mild -- an English equivalent would be "I screwed up."  (The latter phrase, of course, derives from the sexual definition of "to screw," so maybe the two languages aren't so different after all.)

[Image licensed under the Creative Commons Juliescribbles, Money being put in swear jar, CC BY-SA 4.0]

Linguists are not above studying such matters.  I found this out when I was in graduate school and was assigned the brilliant 1982 paper by John McCarthy called "Prosodic Structure and Expletive Infixation," which considers the morphological rules governing the placement of the word "fucking" into other words -- why, for example, we say "abso-fucking-lutely" but never "ab-fucking-solutely."  (The rule has to do with stress -- you put "fucking" before the primary stressed syllable, as long as there is a secondary stressed syllable that comes somewhere before it.)  I was (and am) delighted by this paper.  It might be the only academic paper I ever read in grad school from which I simultaneously learned something and had several honest guffaws.
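For the programmatically inclined, the rule as stated is simple enough to sketch in a few lines of code.  This is a toy illustration, not a real phonological engine -- the stress-annotated syllable lists below are supplied by hand, since real syllabification and stress assignment is a much bigger job:

```python
def infix_expletive(syllables, stresses, expletive="fucking"):
    """Insert the expletive before the primary-stressed syllable.

    syllables: list of syllable strings making up the word.
    stresses: parallel list -- 1 = primary stress, 2 = secondary, 0 = none.
    Per the rule described above, the infix is only licensed when some
    secondary-stressed syllable precedes the primary stress.
    """
    primary = stresses.index(1)
    if not any(s == 2 for s in stresses[:primary]):
        return None  # no preceding secondary stress, so infixation is blocked
    return ("".join(syllables[:primary]) + "-" + expletive + "-"
            + "".join(syllables[primary:]))

# "absolutely": secondary stress on "ab", primary on "lute" --
# the infix lands before "lute", never before "so":
print(infix_expletive(["ab", "so", "lute", "ly"], [2, 0, 1, 0]))
# -> abso-fucking-lutely

# "fantastic": secondary on "fan", primary on "tas":
print(infix_expletive(["fan", "tas", "tic"], [2, 1, 0]))
# -> fan-fucking-tastic
```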

The reason this whole sweary subject comes up is a paper by Shiri Lev-Ari and Ryan McKay, published just yesterday in the journal Psychonomic Bulletin & Review, called "The Sound of Swearing: Are There Universal Patterns in Profanity?"  Needless to say, I also thought this paper was just fan-fucking-tastic.  And the answer is yes: across languages, there are some significant patterns.  The authors write:

Why do swear words sound the way they do?  Swear words are often thought to have sounds that render them especially fit for purpose, facilitating the expression of emotion and attitude.  To date, however, there has been no systematic cross-linguistic investigation of phonetic patterns in profanity.  In an initial, pilot study we explored statistical regularities in the sounds of swear words across a range of typologically distant languages.  The best candidate for a cross-linguistic phonemic pattern in profanity was the absence of approximants (sonorous sounds like l, r, w and y).  In Study 1, native speakers of various languages judged foreign words less likely to be swear words if they contained an approximant.  In Study 2 we found that sanitized versions of English swear words – like darn instead of damn – contain significantly more approximants than the original swear words.  Our findings reveal that not all sounds are equally suitable for profanity, and demonstrate that sound symbolism – wherein certain sounds are intrinsically associated with certain meanings – is more pervasive than has previously been appreciated, extending beyond denoting single concepts to serving pragmatic functions.

The whole thing put me in mind of my dad, who (as befits a man who spent 29 years in the Marine Corps) had a rather pungent vocabulary.  Unfortunately, my mom was a tightly-wound prude who wrinkled her nose if someone said "hell" (and who couldn't even bring herself to utter the word "sex;" the Good Lord alone knows how my sister and I were conceived).  Needless to say, this difference in attitude caused some friction between them.  My dad solved the problem of my mother's anti-profanity harangues by making up swear words, often by repurposing other words that sounded like they could be vulgar.  His favorite was "fop."  When my mom would give him a hard time for yelling "fop!" if he smashed his thumb with a hammer, he would patiently explain that it actually meant "a dandified gentleman," and after all, there was nothing wrong with yelling that.  My mom, desperate not to lose the battle, would snarl back something like, "It doesn't mean that the way you say it!", but in the end my dad's insistence that he'd said nothing inappropriate was pretty unassailable.

Interesting that "fop" fits into the Lev-Ari/McKay phonetic pattern like a hand in a glove.
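You can get a crude feel for the pattern with a few lines of code.  Fair warning: this works on spelling rather than on actual phonemic transcriptions (which is what the study used), so it's only a rough orthographic proxy -- "y," for instance, is only an approximant when it's consonantal:

```python
# The sonorous approximant sounds flagged by the study: l, r, w, y.
APPROXIMANTS = set("lrwy")

def approximant_count(word):
    """Crude count of approximant letters -- an orthographic stand-in
    for the real phonemic analysis done in the paper."""
    return sum(1 for ch in word.lower() if ch in APPROXIMANTS)

# The sanitized form "darn" gains an approximant; the swear word "damn"
# has none -- and neither does my dad's "fop":
for word in ["damn", "darn", "fop"]:
    print(word, approximant_count(word))
```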

Anyhow, as regular readers of Skeptophilia already know, I definitely inherited my dad's salty vocabulary.  But -- as one of my former principals pointed out -- all they are is words, and what really matters is the intent behind them.  And like any linguistic phenomenon, it's an interesting point of study, if you can get issues of prudishness well out of the damn way.

****************************************


Tuesday, December 6, 2022

Art, haiku, and Lensa

The injection of AI technology into art has opened up a serious can of worms.

I ran into two examples of this in rapid succession a couple of days ago.  The first came to me by way of a friend who is an artist and writer, and is about the Lensa app -- a wildly popular AI art interface that can take an image of your face, spruce it up a bit (if it needs it -- mine certainly would), and then create digital art of you as a superhero, model, mythological creature, Renaissance painting, or dozens of other reimaginings of you.  Someone I follow on TikTok posted a sequence of Lensa art based on his face -- and I have to say, they were pretty damn cool-looking.

Yes, but.

The hitch is where all the imagery Lensa is using comes from.  There are credible allegations that the owners of the app are basically shrugging their shoulders at the question.  Artist Rueben Medina had the following to say about it:

I hate being a party pooper but please stop using Lensa and posting your AI art images from it.  I understand if you don't care about the blatant theft of your data the app is doing, lots of things do that.  What you should care about is this: 
The Lensa app uses the Stable Diffusion model to create those AI images.  That model is trained on the Laion database.  That database is full of stolen artwork and sensitive images.  Using Lensa hurts illustrators/photographers in two major ways: 
1. This database was built without consent nor compensation.  That means the work is stolen. 
2. The proliferation of cheap AI art is culturally devaluing the work of illustrators which is already at rock bottom. 
Is there an ethical way to create AI art?  Absolutely.  Databases built on images that artists have opted into and are being compensated for is the first step.  Pretty much none of these AI art apps do that because it would make their business model (Lensa wants $40/yr) unprofitable.

This one hits hard for me because my wife is an artist who shows all over the Northeast, and it has become increasingly difficult for her to sell her pieces at a price that fairly compensates her for her time, skill, and talent -- in part because it's so easy to get mass-produced digital art that gives the impression of high quality at a far lower price.  Carol's work is stunningly original -- you seriously should check out her website -- and while she still has very successful shows, the game is a lot harder than it used to be.

Part of the problem is how good the AI has gotten.  And it's not just visual art that is under attack.  Right after I ran into the Lensa sequence on TikTok and saw Rueben Medina's impassioned plea not to use it, I stumbled across a paper in the journal Computers in Human Behavior describing an AI program that can produce haiku, a stylized seventeen-syllable form originating in Japan that often deals with finding beauty in nature, and evokes the emotions of serenity, peace, wistfulness, and nostalgia.

The authors write:

To determine the general characteristics of the beauty experience across object kinds, Brielmann et al. (2021) proposed eleven dimensions that have been considered by prominent philosophers of aesthetics (pleasure, wishing to continue the experience, feeling alive, feeling that the experience is beautiful to everyone, number of felt connections to the experience, longing, feeling free of desire, mind wandering, surprise, wanting to understand the experience more, and feeling that the experience tells a story) and eight dimensions conveyed by psychologists (complexity, arousal or excitement, learning from the experience, wanting to understand, harmony in variety, meaningfulness, exceeding one's expectation, and interest).  In accordance with [this scheme], these dimensions were used to identify factors that delineate the experience of beauty in human-made and AI-generated haiku.

It is both fascinating and disquieting that the software produced haiku so authentic-sounding that a panel of readers couldn't tell them apart from ones written by humans.

"It was interesting that the evaluators found it challenging to distinguish between the haiku penned by humans and those generated by AI," said Yoshiyuki Ueda, who co-authored the paper, in an interview with Science Daily.  "Our results suggest that the ability of AI in the field of haiku creation has taken a leap forward, entering the realm of collaborating with humans to produce more creative works. Realizing [this] will lead people to re-evaluate their appreciation of AI art."

Yes, but.


I am very much of the opinion that the perception of beauty in any art form -- be it visual arts, writing, music, dance, theater, or anything else -- occurs because of the establishment of a link between the producer of the art and the consumer.  (I dealt with this a while back, in a post called "The Creative Relationship," about our unstoppable tendency to read our own experience into what we see and hear.)  But what happens when one side of that relationship is a piece of software?  Does that matter?  As a writer, I find this a troubling prospect, to say the least.  I know we're not nearly there yet; haiku is a simple, highly rule-based form, which novels are clearly not.  (I don't mean haiku is simple to do well, just that the rules governing the form are simple.)  Having an AI write a creditable haiku is bound to be a lot easier than having it write a novel.  But as we've seen so many times before, once we have proof of concept, the rest is just tinkering; the software tends to improve really quickly once it's shown that the capability is there.

As a novelist, I would have a serious concern about being superseded by a story-generating computer that could create novels as well as I can.

The whole thing raises questions not only about the ethics of using human creators' work as a springboard for AI-based mass production, but about what exactly creativity means, and whether it matters who -- or what -- is doing the creating.  I don't have any easy answers; my emotional reaction against the possibility of what my wife and I both do being supplanted by computer-generated content may not mean very much.

But I think all of us -- both creators and consumers -- had better think long and hard about these issues, and soon.

****************************************