Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, December 7, 2022

Swearing off

I've been fascinated with words ever since I can remember.  It's no real mystery why I became a writer, and (later) got my master's degree in historical linguistics; I've lived in the magical realm of language ever since I first learned how to use it.

Languages are full of curiosities, which is my impetus for doing my popular daily bit called #AskLinguisticsGuy on TikTok.  And one of the posts I've done that got the most views was a piece on "folk etymology" -- stories invented (with little or no evidence) to explain word origins -- specifically, that the word "fuck" does not come from the acronym for "Fornication Under Consent of the King."

The story goes that in bygone years, when a couple got married, if the king liked the bride's appearance, he could claim the right of "prima nocta" (also called "droit de seigneur"), wherein he got to spend the first night of the marriage with the bride.  (Apparently this did occasionally happen, but wasn't especially common.)  Afterward -- and now we're in the realm of folk etymology -- the king gave his official permission for the bride and groom to go off and amuse themselves as they wished, at which point he stamped the couple's marriage documents "Fornication Under Consent of the King," meaning it was now legal for the couple to have sex with each other.

This bit, of course, is pure fiction.  The truth is that the word "fuck" probably comes from a reconstructed Proto-Germanic root *fug meaning "to strike."  There are cognates (related words descended from the same ancestral root) in just about every Germanic language there is.  The acronym explanation is one hundred percent false, but you'll still see it claimed (which is why I did a TikTok video on it).

The whole subject of taboo words is pretty fascinating, and every language has 'em.  Most cultures have some levels of taboo surrounding sex and other private bodily functions, but there are some odd ones.  In Québecois French, for example, the swear word that will get your face slapped by your prudish aunt is tabernacle!, which is the emotional equivalent of the f-bomb, but comes (obviously) from religious practice, not sex.  Interestingly, in Québecois French, the English f-word has been adopted in the phrase j'ai fucké ça, which is considered pretty mild -- an English equivalent would be "I screwed up."  (The latter phrase, of course, derives from the sexual definition of "to screw," so maybe they're not so different after all.)

[Image licensed under the Creative Commons Juliescribbles, Money being put in swear jar, CC BY-SA 4.0]

Linguists are not above studying such matters.  I found this out when I was in graduate school and was assigned the brilliant 1982 paper by John McCarthy called "Prosodic Structure and Expletive Infixation," which considers the morphological rules governing the placement of the word "fucking" into other words -- why, for example, we say "abso-fucking-lutely" but never "ab-fucking-solutely."  (The rule has to do with stress -- you put "fucking" before the primary stressed syllable, as long as there is a secondary stressed syllable that comes somewhere before it.)  I was (and am) delighted by this paper.  It might be the only academic paper I ever read in grad school from which I simultaneously learned something and had several honest guffaws.
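If you want to see just how mechanical the rule is, here's a toy sketch of it in Python -- my own simplification, not McCarthy's prosodic formalism -- which assumes you hand it the word already split into syllables with stress levels marked (2 = primary, 1 = secondary, 0 = unstressed):

```python
def infix_expletive(syllables, expletive="fucking"):
    """Insert the expletive immediately before the primary-stressed syllable,
    but only if a secondary-stressed syllable occurs somewhere earlier in the word."""
    stresses = [stress for _, stress in syllables]
    if 2 not in stresses:
        return None                           # no primary stress marked
    primary = stresses.index(2)
    if 1 not in stresses[:primary]:
        return None                           # nothing secondary-stressed before it: infixation sounds wrong
    parts = [s for s, _ in syllables]
    return "".join(parts[:primary]) + "-" + expletive + "-" + "".join(parts[primary:])

# "absolutely": AB (secondary) - so - LUTE (primary) - ly
print(infix_expletive([("ab", 1), ("so", 0), ("lute", 2), ("ly", 0)]))   # abso-fucking-lutely
# "fantastic": FAN (secondary) - TAS (primary) - tic
print(infix_expletive([("fan", 1), ("tas", 2), ("tic", 0)]))             # fan-fucking-tastic
# "obvious": OB (primary) - vi - ous -- nothing before the primary stress, so no infixation
print(infix_expletive([("ob", 2), ("vi", 0), ("ous", 0)]))               # None
```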

The reason this whole sweary subject comes up is a paper by Shiri Lev-Ari and Ryan McKay that came out just yesterday in the journal Psychonomic Bulletin & Review, called "The Sound of Swearing: Are There Universal Patterns in Profanity?"  Needless to say, I also thought this paper was just fan-fucking-tastic.  And the answer is: yes, across languages, there are some significant patterns.  The authors write:

Why do swear words sound the way they do?  Swear words are often thought to have sounds that render them especially fit for purpose, facilitating the expression of emotion and attitude.  To date, however, there has been no systematic cross-linguistic investigation of phonetic patterns in profanity.  In an initial, pilot study we explored statistical regularities in the sounds of swear words across a range of typologically distant languages.  The best candidate for a cross-linguistic phonemic pattern in profanity was the absence of approximants (sonorous sounds like l, r, w and y).  In Study 1, native speakers of various languages judged foreign words less likely to be swear words if they contained an approximant.  In Study 2 we found that sanitized versions of English swear words – like darn instead of damn – contain significantly more approximants than the original swear words.  Our findings reveal that not all sounds are equally suitable for profanity, and demonstrate that sound symbolism – wherein certain sounds are intrinsically associated with certain meanings – is more pervasive than has previously been appreciated, extending beyond denoting single concepts to serving pragmatic functions.
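The approximant pattern is easy enough to play with at home.  Here's a crude little sketch (mine, not the authors') that just counts approximant letters in a word's spelling -- a rough stand-in for the proper phonemic analysis in the paper, but enough to show the damn/darn effect they describe:

```python
# Crude letter-level stand-in for the paper's phonemic analysis (my toy, not the authors' method).
# Real work would use phonetic transcriptions, not spelling.
APPROXIMANTS = set("lrwy")

def approximant_count(word):
    """Count approximant letters (l, r, w, y) in a word's spelling."""
    return sum(1 for ch in word.lower() if ch in APPROXIMANTS)

for swear, sanitized in [("damn", "darn"), ("fucking", "frigging")]:
    print(f"{swear}: {approximant_count(swear)} approximants; {sanitized}: {approximant_count(sanitized)}")
# damn: 0 approximants; darn: 1
# fucking: 0 approximants; frigging: 1
```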

The whole thing put me in mind of my dad, who (as befits a man who spent 29 years in the Marine Corps) had a rather pungent vocabulary.  Unfortunately, my mom was a tightly-wound prude who wrinkled her nose if someone said "hell" (and who couldn't even bring herself to utter the word "sex;" the Good Lord alone knows how my sister and I were conceived).  Needless to say, this difference in attitude caused some friction between them.  My dad solved the problem of my mother's anti-profanity harangues by making up swear words, often by repurposing other words that sounded like they could be vulgar.  His favorite was "fop."  When my mom would give him a hard time for yelling "fop!" if he smashed his thumb with a hammer, he would patiently explain that it actually meant "a dandified gentleman," and after all, there was nothing wrong with yelling that.  My mom, desperate not to lose the battle, would snarl back something like, "It doesn't mean that the way you say it!", but in the end my dad's insistence that he'd said nothing inappropriate was pretty unassailable.

Interesting that "fop" fits into the Lev-Ari/McKay phonetic pattern like a hand in a glove.

Anyhow, as regular readers of Skeptophilia already know, I definitely inherited my dad's salty vocabulary.  But -- as one of my former principals pointed out -- all they are is words, and what really matters is the intent behind them.  And like any linguistic phenomenon, it's an interesting point of study, if you can get issues of prudishness well out of the damn way.

****************************************


Tuesday, December 6, 2022

Art, haiku, and Lensa

The injection of AI technology into art has opened up a serious can of worms.

I ran into two examples of this in rapid succession a couple of days ago.  The first came to me by way of a friend who is an artist and writer, and is about the Lensa app -- a wildly popular AI art interface that can take an image of your face, spruce it up a bit (if it needs it -- mine certainly would), and then create digital art of you as a superhero, model, mythological creature, Renaissance painting, or dozens of other reimaginings of you.  Someone I follow on TikTok posted a sequence of Lensa art based on his face -- and I have to say, they were pretty damn cool-looking.

Yes, but.

The hitch is where all the imagery Lensa is using comes from.  There are credible allegations that the owners of the app are basically shrugging their shoulders at the question.  Artist Rueben Medina had the following to say about it:

I hate being a party pooper but please stop using Lensa and posting your AI art images from it.  I understand if you don't care about the blatant theft of your data the app is doing, lots of things do that.  What you should care about is this: 
The Lensa app uses the Stable Diffusion model to create those AI images.  That model is trained on the Laion database.  That database is full of stolen artwork and sensitive images.  Using Lensa hurts illustrators/photographers in two major ways: 
1. This database was built without consent nor compensation.  That means the work is stolen. 
2. The proliferation of cheap AI art is culturally devaluing the work of illustrators which is already at rock bottom. 
Is there an ethical way to create AI art?  Absolutely.  Databases built on images that artists have opted into and are being compensated for is the first step.  Pretty much none of these AI art apps do that because it would make their business model (Lensa wants $40/yr) unprofitable.

This one hits hard for me because my wife is an artist who shows all over the Northeast, and it has become increasingly difficult for her to sell her pieces at a price that fairly compensates her for her time, skill, and talent -- in part because it's so easy to get mass-produced digital art that gives the impression of high quality at a far lower price.  Carol's work is stunningly original -- you seriously should check out her website -- and while she still has very successful shows, the game is a lot harder than it used to be.

Part of the problem is how good the AI has gotten.  And it's not just visual art that is under attack.  Right after I ran into the Lensa sequence on TikTok and saw Rueben Medina's impassioned plea not to use it, I stumbled across a paper in the journal Computers in Human Behavior describing an AI program that can produce haiku, a stylized seventeen-syllable form originating in Japan that often deals with finding beauty in nature, and evokes the emotions of serenity, peace, wistfulness, and nostalgia.

The authors write:

To determine the general characteristics of the beauty experience across object kinds, Brielmann et al. (2021) proposed eleven dimensions that have been considered by prominent philosophers of aesthetics (pleasure, wishing to continue the experience, feeling alive, feeling that the experience is beautiful to everyone, number of felt connections to the experience, longing, feeling free of desire, mind wandering, surprise, wanting to understand the experience more, and feeling that the experience tells a story) and eight dimensions conveyed by psychologists (complexity, arousal or excitement, learning from the experience, wanting to understand, harmony in variety, meaningfulness, exceeding one's expectation, and interest).  In accordance with [this scheme], these dimensions were used to identify factors that delineate the experience of beauty in human-made and AI-generated haiku.

It is both fascinating and disquieting that the software produced haiku so authentic-sounding that a panel of readers couldn't tell them apart from ones written by humans.

"It was interesting that the evaluators found it challenging to distinguish between the haiku penned by humans and those generated by AI," said Yoshiyuki Ueda, who co-authored the paper, in an interview with Science Daily.  "Our results suggest that the ability of AI in the field of haiku creation has taken a leap forward, entering the realm of collaborating with humans to produce more creative works. Realizing [this] will lead people to re-evaluate their appreciation of AI art."

Yes, but.


I am very much of the opinion that the perception of beauty in any art form -- be it visual arts, writing, music, dance, theater, or anything else -- occurs because of the establishment of a link between the producer of the art and the consumer.  (I dealt with this a while back, in a post called "The Creative Relationship," about our unstoppable tendency to read our own experience into what we see and hear.)  But what happens when one side of that relationship is a piece of software?  Does that matter?  As a writer, I find this a troubling prospect, to say the least.  I know we're not nearly there yet; haiku is a simple, highly rule-based form, which novels are clearly not.  (I don't mean haiku is simple to do well, just that the rules governing the form are simple.)  Having an AI write a creditable haiku is bound to be a lot easier than having it write a novel.  But as we've seen so many times before, once we have proof of concept, the rest is just tinkering; the software tends to improve really quickly once it's shown that the capability is there.

As a novelist, I would have a serious concern about being superseded by a story-generating computer that could create novels as well as I can.

The whole thing raises questions not only about the ethics of using human creators' work as a springboard for AI-based mass production, but about what exactly creativity means, and whether it matters who -- or what -- is doing the creating.  I don't have any easy answers; my emotional reaction against the possibility of what my wife and I both do being supplanted by computer-generated content may not mean very much.

But I think all of us -- both creators and consumers -- better think long and hard about these issues, and soon.

****************************************


Monday, December 5, 2022

New jaws in an old bird

One difficulty in building evolutionary trees of life from fossil evidence is the fact that "simpler" doesn't necessarily mean "older."

It's an understandable enough mistake.  Taken as a whole, from life's first appearance some 3.7 billion years ago until today, there has been an overall increase in complexity.  The problem occurs when you try to apply that overarching trend to individual lineages -- and find that over time, some species have actually become less complex.

A good example is Subphylum Tunicata, less formally known as tunicates or sea squirts.  At a glance, tunicates look a little like sponges (to which they are only very distantly related): simple, sessile filter feeders.  It was only when biologists discovered their larvae that they realized the truth.  Tunicates are much more closely related to vertebrates than they are to simple invertebrates like the sponges and corals they superficially resemble.  The larvae look a bit like tadpoles, but as they develop they sequentially lose structures like the notochord (the flexible rod that supports the dorsal nerve cord; in us, it ends up becoming the discs between our vertebral bones), most of the muscle blocks, and in fact, just about all their internal organs except the ones involved in processing food and reproducing.

As evolutionary biologist Richard Dawkins put it, evolution is "the law of whatever works."  It doesn't always lead to becoming bigger, stronger, faster, and smarter.  If being small, weak, slow, and dumb works well enough to allow a species to have more surviving offspring -- well, they'll do just fine.

This topic comes up because of a paper in Nature about a re-analysis of a bird fossil found in a Belgian quarry two decades ago.  The comprehensive study found that one of the bones had been misidentified as a shoulder bone, but was actually the pterygoid bone -- part of the bony palate.  And that bone showed that the species it came from, a heron-sized toothed bird called Janavis finalidens, had been misplaced on the avian family tree.

And that single rearrangement might restructure the entire genealogy of birds.

Artist's reconstruction of Janavis finalidens [Image courtesy of artist Philip Krzeminski]

There are two big groups of modern birds: neognaths, which have jaws with free plates allowing the bills to move independently of the skull, and paleognaths, whose jaw bones are fused to the skull.  The paleognaths -- including emus, cassowaries, tinamous, and kiwis -- were thought to be "primitive" in the sense of "more like the ancestral species."  (If you know some Greek, you might have figured this out from the names; paleognath means "old jaw" and neognath means "new jaw.")

But the new analysis of Janavis, a species dating to 67 million years ago -- right before the Chicxulub impact ended the Cretaceous Period and the reign of the dinosaurs -- shows that it was a neognath, at a time prior to the split between the two groups.

Meaning the neognaths might actually have the older body plan.

If this is true -- if the paleognaths evolved from the neognaths, not the other way around -- the puzzle is why.  The flexible beaks of neognaths seem to be better tools than the fused jaws of the paleognaths.  This, though, brings us back to our original point, which is that evolution doesn't necessarily drive species toward complexity.  It also highlights the fact that if a structure works well enough not to provide an actual survival or reproductive disadvantage, it won't be actively selected against.  A good example, all too familiar to the males in the audience, is the structure of the male reproductive organs -- with the urethra passing through the prostate gland (leading to unfortunate results for many of us as we age), and the testicles outside the abdominal cavity, right at the perfect height to sustain an impact from a knee, the corner of a table, or the head of a large and enthusiastic dog.  (If this latter example seems oddly specific, I can assure you there's a galumphing galoot of a pit bull currently asleep on my couch who is the reason it came to mind.)

Anyhow, it looks like we might have to rethink the whole "paleognath" and "neognath" thing.  Makes you wonder what else on the family tree of life might need some jiggering.  

****************************************


Saturday, December 3, 2022

The arms of the ancestors

My maternal grandmother was born Flora Meyer-Lévy, in the little town of Chackbay, Louisiana, in 1893.  I never knew her -- she died fourteen years before I was born, at the young age of 53 -- but I have photographs of her that show a striking woman with auburn hair and a serious expression (consistent with my mom's description of her mother as being a no-nonsense type).


Flora's grandfather, Solomon Meyer-Lévy, was an Ashkenazi Jew, born in the village of Dauendorf, in Alsace.  He emigrated to the United States in the 1850s, only to get caught up in the Civil War -- he fought for a time on the Confederate side, and after the war came home and had a go at raising horses.  He never made much of a success of it.  One of his grandchildren told me, somewhat euphemistically, that "he made bad deals while drunk."  Solomon, like his granddaughter, died young, at the age of 44.  His widow -- a French Creole woman named Florida Perilloux -- outlived him by over forty years.  She never remarried, and lived most of that time in poverty, converting their home into an inn just to make ends meet.

When I had my DNA tested a couple of years ago, I was fascinated to find that it detected my Ashkenazi great-great-grandfather's contribution to my genetic makeup.  I am, the test said, about six percent Ashkenazi -- just about spot-on for having one Jewish ancestor four generations back.  I was surprised that my Jewish heritage was so clear; I didn't realize that Ashkenazi DNA is that distinct.
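The arithmetic behind that "spot-on" is simple: on average, each generation back halves what any one ancestor contributes to your genome, so a single great-great-grandparent works out to (1/2)^4 = 6.25 percent -- though the real figure wobbles around that average, since recombination doesn't deal out chromosomes in neat halves.  A quick sketch (mine, just to show the expected values):

```python
# Expected autosomal contribution from a single ancestor n generations back is (1/2)**n.
# (Actual inherited fractions vary around this average because of recombination.)
for n, ancestor in [(2, "grandparent"), (3, "great-grandparent"), (4, "great-great-grandparent")]:
    print(f"one {ancestor}: about {0.5 ** n:.2%} of your DNA on average")
# one grandparent: about 25.00% of your DNA on average
# one great-grandparent: about 12.50% of your DNA on average
# one great-great-grandparent: about 6.25% of your DNA on average
```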

Apparently, the Ashkenazi have retained their genetic signature because of two factors -- being reproductively isolated and having experienced repeated bottlenecks.  The former, of course, is due to the taboo (on both sides) against Jews marrying non-Jews.  (My great-great grandparents are an interesting counterexample; he was a devout Jew, she was a devout Catholic, and neither one ever changed their religion.  They apparently lived together completely amicably despite their religious differences.  All seven of their children were raised Jewish -- and every single one converted to Catholicism to marry.  Evidently such tolerance was not the rule in nineteenth century Louisiana.)

The latter -- a genetic bottleneck -- refers to the situation when a population has its numbers reduced drastically, and the resurgent population all descends from the small group of survivors.  The bottlenecks in the European Jewish population, of course, were due largely to the repeated pogroms (massacres) that at times threatened to eradicate the Jews from Europe entirely.  In fact, this is why the topic comes up today: a paper in Science came out this week describing a genetic investigation of the remains in a Jewish cemetery in Erfurt, Germany.  Many of the dead there were victims of a pogrom in March of 1349, and their teeth -- which contain intact DNA -- confirmed that the Ashkenazi were even then a genetically distinct population, descended from a small group of people who came originally from the Middle East or the Caucasus, and settled in central Europe some time around the year 1000 C.E.

Interestingly, the DNA from Erfurt was strikingly similar to DNA from a twelfth-century Jewish cemetery in Norwich, England, the subject of a paper only four months ago.  The geographical distance, apparently, was not enough to erase the distinct Ashkenazi signature.  "Whether they’re from Israel or New York, the Ashkenazi population today is homogenous genetically," said Hebrew University geneticist Shai Carmi.

Which explains how the DNA test was able to pick up my own ancestry.

It's fascinating to me that, on that one line at least, my family tree can trace its origins to a little group of migrants from the Near East who made their way to what is now eastern France, survived repeated attempts to eradicate them, and eventually produced a branch that went to Louisiana, ultimately leading to me here in upstate New York.  I can only hope I've inherited some of the dogged tenacity these people obviously had.

It's interesting, too, to look at the stern visage of the grandmother I never met, and to know a little more about her heritage.  Even though she, like all the generations before her, now rests in the arms of the ancestors, her genetic legacy lives on in me and her other descendants -- a handful of the "countless stars in the sky" that represent the lineage of Abraham.

****************************************


Friday, December 2, 2022

Switching on humanity

Humans, chimps, and bonobos share a little over 99% of their DNA.

That remaining just-under-one-percent accounts for every physical difference between us and our nearest ape relatives.  It's natural enough to be surprised by this; we look and act pretty different from them most of the time.  (Although if you've read Desmond Morris's classic study The Naked Ape, you'll find there's a lot more overlap between humans and apes behaviorally than you might have realized.)

[Image licensed under the Creative Commons Greg Hume, Bonobo-04, CC BY-SA 3.0]

Part of that sense of differentness is from the cultural context most of us grew up in -- that "human" and "animal" are two separate categories.  In a lot of places that comes from religion, specifically the idea that the Creator fashioned humans separately from the rest of the species on Earth, and that separation persists in our worldviews even for many of us who no longer believe in a supreme deity.  The truth is we're just another branch of Kingdom Animalia, Phylum Chordata, Class Mammalia, Order Primates, albeit a good bit more intelligent and technologically capable than most of the other branches.

It's that last bit that has captured the curiosity of evolutionary geneticists for decades.  The similarities between ourselves and apes are obvious; but where did the differences come from?  How could less than one percent of our DNA be responsible for all the things that do set us apart -- our larger brains, capacity for language, upright posture, and so on?

Just last week, a paper in the journal Cell, written by a team out of Duke University, may have provided us with some answers.

The researchers found that the most striking differences between the genomes of humans and those of chimps and bonobos lay in a set of switches they dubbed Human Ancestor Quickly-Evolved Regions (HAQERs -- pronounced, as you might have guessed, like "hackers").  HAQERs are genetic regulatory switches that control when and how long other genes are active.  The HAQER sequences the team discovered seem to mostly affect two sets of developmental genes -- the ones that influence brain complexity and the ones involved in the development of the gastrointestinal tract.

"We see lots of regulatory elements that are turning on in these tissues," said Craig Lowe, who co-authored the paper, in an interview with Science Daily.  "These are the tissues where humans are refining which genes are expressed and at what level...  Today, our brains are larger than other apes, and our guts are shorter.  People have hypothesized that those two are even linked, because they are two really expensive metabolic tissues to have around.  I think what we're seeing is that there wasn't really one mutation that gave you a large brain and one mutation that really struck the gut, it was probably many of these small changes over time."

What's most interesting of all is that the HAQER sequences provide another example of how evolution is so frequently a trade-off.  Consider, for example, our upright posture; our vertebral column evolved in animals that walked on all fours, and when we switched to being bipedal it gave us the advantage of freeing up our hands and being able to see farther, but it bequeathed a legacy of lower back problems most other mammals never have to worry about.  Here, the HAQERs that seem to be responsible for our larger and more complex brains also correlate to a variety of disorder susceptibilities.  Particular variants of HAQER sequences are associated with a higher risk of hypertension, neuroblastoma, depression, bipolar disorder, and schizophrenia.

It's just the way genetic change works.  Sometimes you can't improve one thing without screwing something else up.  And if, on balance, the change improves survival and reproductive likelihood, it's still selected for despite the disadvantages.

So we seem to finally be making some inroads into the question of why such a tiny slice of our genome creates all the differences between ourselves and our nearest relatives.  It's worth a reminder, though, that we aren't substantially different from the other species we share the planet with.  It reminds me of the famous quote from Chief Seattle: "We did not weave the web of life, we are merely one strand in it.  Whatever we do to the web, we do to ourselves."

****************************************


Thursday, December 1, 2022

The code breakers

I've always been in awe of cryptographers.

I've read a bit about the work British computer scientist and mathematician Alan Turing did during World War II regarding breaking the "unbreakable" Enigma code used by the Germans -- a code that relied on a machine whose settings were changed daily.  And while I can follow a description of how Turing and his colleagues did what they did, I can't in my wildest dreams imagine I could do anything like that myself.

I had the same sense of awe when I read Margalit Fox's fantastic book The Riddle of the Labyrinth, which was about the work of Alice Kober and Michael Ventris in deciphering the Linear B script of Crete -- a writing system for which not only did they not initially know what the symbol-to-sound correspondence was, they didn't know if the symbols represented single sounds, syllables, or entire words -- nor what language the script represented!  (Turned out it was Mycenaean Greek.)

I don't know about you, but I'm nowhere near smart enough to do something like that.

Despite my sense that such endeavors are way outside of my wheelhouse, I've always been fascinated by people who do undertake such tasks.  Which is why I was so interested in a link a friend of mine sent me about the breaking of a code that had stumped cryptographers for centuries -- the one used by King Charles V of Spain back in the sixteenth century.


Charles was a bit paranoid, so his creation of a hitherto unbreakable code is definitely in character.  When the letter was written, in 1547, he was in a weak position -- he'd signed the Treaty of Crépy, tentatively ending hostilities with France, but his ally King Henry VIII of England had just died and was succeeded by his son, the sickly King Edward VI.  Charles felt vulnerable...

... and in fact, when the letter was finally decrypted, it was found that it was about his fears of an assassination plot.

As it turned out, the fears were unfounded, and he went on ruling Spain and the Holy Roman Empire for nearly another decade before abdicating, finally dying of malaria in 1558 at age 58.

His code remained unbroken until recently, however.  But cryptographer Cécile Pierrot (of the French research institute Inria) and historian Camille Desenclos were finally able to decipher it, thanks to a lucky find -- another letter between Charles and his ambassador to France, Jean de St. Mauris, which had a partial key scribbled in the margin.  That hint included the vital information that nine of the symbols were meaningless, only thrown in to make it more difficult to break.  (Which worked.)


Even with the partial solution in hand, it was still a massive task.  As you can see from their solution, most of the consonants can be represented by two different symbols, and double letters are represented by yet another different (single) symbol.  There are single symbols that stand for specific people. 
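To get a feel for why that kind of structure resists codebreaking, here's a toy cipher built along the same general lines -- the key below is entirely invented for illustration, not Charles's actual one: multiple symbols per letter, a dedicated symbol for a doubled letter, a dedicated symbol for a particular person, and null symbols that decode to nothing.

```python
import random

# Toy homophonic cipher in the same spirit as the one described above.
# The key is made up for illustration -- it is NOT Charles V's actual key.
KEY = {
    "a": ["1", "2"], "e": ["3", "4"], "r": ["5", "6"], "t": ["7", "8"],
    "tt": ["9"],            # a doubled letter gets its own single symbol
    "#king#": ["@"],        # a specific person gets a dedicated symbol
}
NULLS = ["%", "&", "$"]     # meaningless symbols, sprinkled in to frustrate codebreakers

def encrypt(tokens):
    out = []
    for tok in tokens:
        out.append(random.choice(KEY[tok]))     # pick one of the symbols for this token
        if random.random() < 0.3:
            out.append(random.choice(NULLS))    # occasionally insert a null
    return "".join(out)

# Decryption table: every cipher symbol maps back to its plaintext; nulls map to nothing.
DECODE = {sym: plain for plain, syms in KEY.items() for sym in syms}
DECODE.update({n: "" for n in NULLS})

def decrypt(ciphertext):
    return "".join(DECODE[sym] for sym in ciphertext)

message = ["#king#", "t", "a", "tt", "e", "r"]   # the "king" symbol, then the letters of "tatter"
c = encrypt(message)
print(c, "->", decrypt(c))   # e.g. '@819%35' -> '#king#tatter'  (output varies run to run)
```

With two symbols for each common letter plus nulls scattered through the text, simple letter-frequency counting gets you nowhere near as far as it would against a one-to-one substitution -- which is presumably part of why the thing sat unread for nearly five centuries.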

But even with those difficulties, Pierrot and Desenclos managed to break the code.

All of this gives hope to linguists and cryptographers working on the remaining (long) list of writing systems that haven't been deciphered yet.  (Wikipedia has a list of undeciphered scripts -- take a look; you'll be amazed at how many there are.)  I'm glad there are people still working on these puzzles.  Even if I don't have the brainpower to contribute to the effort, I'm in awe that there are researchers who are allowing us to read writing systems that before were a closed book.

****************************************


Wednesday, November 30, 2022

A coin out of chaos

Allegedly there is a traditional Chinese curse that goes, "May you live in interesting times."

It's certainly true that the periods in history that are the most engaging to read about are often the ones no one in their right mind would want to experience first-hand.  For myself, I've always had a near-obsession with the western European "Dark Ages" -- between the collapse of Roman rule in Britain at the beginning of the fifth century C.E. and the consolidation of Frankish rule under Charlemagne in the late eighth.  Part of the reason for my fascination is that so little is known for certain about it.  When people are fighting like hell just to stay alive, not too many of them are going to prioritize writing books about the experience, or (honestly) even bothering to learn how to read and write.  Add to that the fact that during the turmoil, a great many of the books that had been written beforehand were destroyed, and it all adds up to a great big question mark.

I riffed on this idea in my recently-completed (not yet published) novel The Scattering Winds, in which a similar crisis in the modern world propels us into a new Dark Age -- and five hundred years later, when the surviving remnants of humanity have reverted to a non-literate agrarian culture, one man discovers what's left of a modern library that has somehow survived all the intervening chaos.

The effects such a discovery would have on a people was fascinating for someone like me -- a linguist and (very) amateur historian -- to explore.  But to find out what happened, you'll need to wait till it's in print!

Anyhow, back to reality.  Only a hundred years earlier than the onset of the canonical European Dark Ages, the Roman Empire went through its own Interesting Times -- the "Crisis of the Third Century."  Things had been moving along rather nicely for the Romans (if not for all of the various people they conquered), but then a series of short-lived and completely incompetent emperors led to a period of about seventy years of utter chaos. 

The spiral began with the emperor Elagabalus, who was only fourteen when he succeeded to the throne.  Historians haven't been kind to the young man, largely because he was (1) a raging egotist, (2) completely uninterested in running the government, and (3) gay.  Elagabalus seemed to look upon his position as being not much more than a golden opportunity to find large numbers of hot-looking young men to have sex with, and it's unsurprising that his reign didn't last long.  Just under four years after he was crowned, he was assassinated by the members of his own Praetorian Guard, and was succeeded by his cousin, Severus Alexander.

Severus Alexander was only thirteen when he was crowned (222 C.E.), but for a while, it seemed like things were going to be okay.  The "interesting times" started in earnest when Rome was invaded (for the umpteenth time) by Germanic tribes from the north and then by the Sassanid Empire from the east.  Severus Alexander did a pretty good job meeting both of these threats, but there were members of the Roman army who didn't like the fact that he used both diplomacy and bribery in his peace efforts -- and in the year 235, they murdered the emperor and put one of their own, a man named Maximinus Thrax, on the throne in his place.

Maximinus Thrax was the first of the "barracks emperors" -- men who had been declared emperor by some faction of the military, despite having neither the skills to rule nor the hereditary claim to gain the support of the people.  The year 238 C.E. became known as "the Year of the Six Emperors," during which six men rose to the throne and most were defeated and killed, one after another, within weeks or months.  The once-mighty Roman army became a fragmented mess, where different legions supported different claimants to the throne, and spent more time fighting each other than fighting the threats on the imperial borders.

Then, in mid-century, the Plague of Cyprian struck.  No one is completely certain what the disease was, but whatever the cause, it was bad.  Here's an account of the epidemic, by one Pontius of Carthage:

Afterwards there broke out a dreadful plague, and excessive destruction of a hateful disease invaded every house in succession of the trembling populace, carrying off day by day with abrupt attack numberless people, every one from his own house.  All were shuddering, fleeing, shunning the contagion, impiously exposing their own friends, as if with the exclusion of the person who was sure to die of the plague, one could exclude death itself also.  There lay about the meanwhile, over the whole city, no longer bodies, but the carcasses of many, and, by the contemplation of a lot which in their turn would be theirs, demanded the pity of the passers-by for themselves.  No one regarded anything besides his cruel gains.  No one trembled at the remembrance of a similar event.  No one did to another what he himself wished to experience.

There are no good estimates of the death toll, but what is certain is that for the Roman Empire, it made a very bad situation a great deal worse.  Fortunately, starting around 270, there was a series of barracks emperors who at least were able to garner enough popular support (and to stay alive long enough) to fight back invasions from the Goths and the Vandals, and the whole miserable period finally ended when Diocletian was crowned in 284.  While he was no one's idea of a nice guy -- Diocletian is known as the leader of the last and bloodiest persecution of the Christians the Roman Empire perpetrated -- there's no doubt that his reform of the government and military was a brilliant success.  He also did something virtually unknown amongst crowned leaders; he ruled for twenty-one years and then voluntarily abdicated, preferring to spend his final years gardening.

The reason all this comes up is that the chaos in Europe in the third century leaves historians having to piece together what happened from fragmentary records, and new discoveries can sometimes generate some surprises.  Like the gold coin discovered in 1713 in Transylvania, long considered a fake, that was just demonstrated to be genuine by microscopic analysis of the material it was embedded in.  And the image and inscription on that coin turned out to be of an emperor no one even knew about -- a man named Sponsian.

Sponsian seems to have been another of the barracks emperors, and ruled at least part of the Roman province of Dacia (now part of modern Romania) some time between 260 and perhaps 270.  Given that he's only known from a single coin, we don't know much about him -- the likelihood is that he met the same end as most of the other mid-third century claimants to the Roman throne.

All of which makes me wonder why any of these people wanted to be emperor during this period.  Did they really think, "Okay, the last fourteen guys have all been brutally murdered by howling mobs, but everyone is gonna love me!"?  Myself, I think I'd pre-empt Diocletian and take up gardening from the get-go.

Be that as it may, this new analysis of an old discovery is pretty cool -- and points out that even the Dark Ages may have left behind enough traces that we can piece together what happened.  Even if we never find an intact library, like in my novel, we can still know something about an era that until now has been largely a mystery.

****************************************