Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, May 27, 2024

The enduring mystery of the Huns

In 376 C.E., an enormous group of Germanic-speaking Goths, primarily from the Tervingi and Greuthungi tribes, showed up along the Danube River, which had long stood as an uneasy boundary between the Germanic peoples and the Roman Empire.

Most people are aware that the Roman Empire -- especially the western half of it -- would, for all intents and purposes, collapse completely less than a hundred years after that.  What's less well-known is that up to this point, it was doing pretty well; no one, in 375 C.E., would have looked around them and thought, "Wow, these people are doomed."  British historian Peter Heather analyzed all the usual factors cited in Rome's fourth-century troubles, including an uncontrolled and rebellious army, a restive peasantry, food shortages from a drop in agricultural production, and conflicts with Persia on the eastern border.  None appears to be sufficient to account for what was about to happen.  Rome had stood for almost a thousand years -- to put that in perspective, four times longer than the United States has been a nation -- and had survived much worse, including the chaotic "Year of the Five Emperors" (193 C.E.), which was touched off by the murder of the paranoid and megalomaniacal emperor Commodus, made famous in the movie Gladiator.

The Roman Empire had dealt with border conflicts for pretty much its entire history.  Given its expansionist agenda, it was directly responsible for a good many of them.  But this time, things would be different.  No one at the time seems to have seen it coming, but the end result would write finis to the Pax Romana.

The difference was a group of people called the Huns.

Reconstruction of a Hunnic warrior [Image licensed under the Creative Commons: Attila the Hun on horseback by George S. Stuart, photographed by Peter d'Aprix & Dee Finning, owned by the Museum of Ventura County, CC BY-SA 3.0]

The Huns are a historical enigma.  For a group so widely known -- every schoolkid has heard of Attila the Hun -- their origins are pretty much a complete mystery.  (For what it's worth, they did not give their name to the nation of Hungary; the name "Hungary" comes from the Oghur-Turkic word onogur, meaning "the ten tribes of the Oghurs."  And the Magyars, the Finno-Ugric ethnic group that makes up the majority of the ancestry in modern Hungary, didn't even come into the region until the ninth century C.E.)

As far as the Huns go, we don't even know much about what language they spoke, because they left no written records.  There are a handful of words recorded in documents from the fourth and fifth centuries, and some personal names, but the evidence is so thin that linguists haven't even been able to determine what language family Hunnic belonged to -- there are arguments for Turkic, Iranian, Yeniseian, Mongolic, Uralic, and Indo-European, or for its being a linguistic isolate -- but the fact is, we simply don't know.

So basically, the Huns swept into eastern Europe from we-don't-know-where.  Certainly they at least passed through the central Asian steppe, but whether that's where they originated is a matter of pure conjecture.  There's even a contention they might have come from as far away as what is now northern China, and that they're allied to the Xiongnu culture, but the evidence for that is slim at best.

Roman chronicler Ammianus Marcellinus, who witnessed many of the events of the late fourth century that were to lead to the downfall of the Roman Empire, was grudgingly impressed by what he saw of the Huns:

The people called Huns exceed every measure of savagery.  They are aflame with an inhuman desire for plundering others' property...  They enter battle drawn up in wedge-shaped masses.  And as they are lightly-equipped, for a swift motion, and unexpected in action, they purposely divide suddenly into scattered bands and attack, rushing about in disorder here and there, dealing terrific slaughter...  They fight from a distance with missiles having sharp bone, instead of their usual points, joined to the shafts with wonderful skill; then they gallop over the intervening spaces and fight hand-to-hand with swords.

Ammianus, though, didn't know any better than anyone else where the Huns had originated; his best guess was that they'd lived on "the shores of the ice-bound ocean," but never provided any reason why he thought that.

When they did explode onto the scene, though -- wherever they'd come from -- the effects were catastrophic.  The Goths, Alans, and Sarmatians of what are now the Balkan countries of eastern Europe were shoved farther and farther west, and all of a sudden, the Roman Empire had a serious problem on its hands.  The emperor at the time, Valens, was ill-equipped to deal with a hundred thousand refugees, mostly from Germanic-speaking tribes who had long been considered little more than barbarians.  (To be fair, it's hard to imagine how anyone would be well-equipped to deal with this.)  His decision to treat the Goths as enemies, rather than joining forces with them against the greater threat of the Huns, led to the Battle of Adrianople in 378.

Valens lost both the battle and his life.

While there was some attempt by Valens's successor Theodosius I to come to terms with the Goths (or even turn them into allies), the stage was set.  The domino effect of Huns shoving the Goths and the Goths shoving the Romans continued, chipping away at the Western Roman Empire, ultimately leading to the Gothic leader Alaric sacking Rome itself in 410.  The Huns made their way into Gaul, and later even into Italy, under Attila.  Their advance into Gaul was finally halted at the Battle of the Catalaunian Plains, fought in 451 near what is now the town of Châlons, France, where a combined force of Romans and Goths defeated the Huns and forced them back.

Perhaps the most curious thing about the Huns was that after that battle, they began to fall apart themselves with a speed that was just this side of bizarre.  Attila died in 453 -- from what appears to have been an esophageal hemorrhage -- and none of his many sons proved capable as a leader.  They fractured into various factions which rapidly succumbed to internecine squabbling, and their power waned as fast as it had waxed seventy years earlier.  What happened to them after that is just as much of a mystery as everything else about them; most historians believe that what was left of the Huns were absorbed into other ethnic groups in what are now Serbia, Bulgaria, and Romania, and they more or less ceased to exist as an independent culture.

So we're left with a puzzle.  One of the most familiar, instantly recognizable civilizations in history is of unknown origin and met an unknown fate, arising from obscurity and fading back into it just as quickly.  But what's certain is that after they surged through Europe, the western half of the Roman Empire never recovered.  The last Emperor of the Western Roman Empire, Romulus Augustulus, abdicated in 476.  The western half of Europe fragmented into petty kingdoms ruled by various Germanic chieftains, and the power center shifted to Constantinople, where it would remain until Charlemagne came to the throne three hundred years later.

Historical mysteries never fail to fascinate, and this is a baffling one -- a mysterious people who swept into Europe, smashed an empire that had stood for a thousand years, and then vanished, all within the span of a single century.  Perhaps one day historians will figure out who the Huns were, but for now, all we have is scanty records, the awed and fearful accounts of the people who witnessed them, and a plethora of questions.

****************************************



Saturday, May 25, 2024

The cotton-candy planet

There's a general pattern you see in astrophysics, which arises from the fact that gravity is both (1) always attractive, never repulsive, and (2) extremely weak.

It's hard to overstate the "extremely weak" bit.  Electromagnetism, for instance, is about 36 orders of magnitude stronger; that is, the electromagnetic repulsion between two protons is roughly 1,000,000,000,000,000,000,000,000,000,000,000,000 times more powerful than their gravitational attraction.  This may seem odd and counterintuitive, since the gravitational pull on your body seems pretty damn strong (especially when you're tired).  But think about it this way: if you use a refrigerator magnet to pick up a paper clip, that little magnet is able to overcome the force of the entire Earth pulling on the clip in the opposite direction.
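If you're curious where that absurd number comes from, here's a minimal back-of-the-envelope sketch in Python; the textbook constants and the two-proton comparison are just my own illustration of the standard calculation, not anything from a particular study.  Since both forces fall off as the square of the distance, the separation cancels out of the ratio.

# Compare the electrostatic repulsion and gravitational attraction
# between two protons.  Both forces scale as 1/r^2, so the distance
# cancels when we take their ratio.

COULOMB_CONSTANT = 8.988e9          # N m^2 / C^2
GRAVITATIONAL_CONSTANT = 6.674e-11  # N m^2 / kg^2
PROTON_CHARGE = 1.602e-19           # C
PROTON_MASS = 1.673e-27             # kg

electric = COULOMB_CONSTANT * PROTON_CHARGE**2
gravitational = GRAVITATIONAL_CONSTANT * PROTON_MASS**2

print(f"EM / gravity force ratio: {electric / gravitational:.2e}")
# prints roughly 1.23e+36 -- about 36 orders of magnitude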

The practical result of these two features of gravity is that at small scales and low masses, the effects of gravity are essentially zero.  If I'm picking up a book, I don't have to adjust for the negligible gravitational attraction between myself and the book, only the attraction between the book and the enormous mass of the Earth.  On the largest scales, too, the effects of gravity more or less even out; this is called the flatness problem, and is something I dealt with in more detail in a recent post.  (Plus, on these cosmic scales, the force of expansion of spacetime itself -- something that's been nicknamed dark energy -- takes over.)

It's at mid-range scales that gravity becomes seriously important -- objects the size of planets, stars, and galaxies.  And there, the other feature of gravity kicks in; that it always attracts and never repels.  (Whatever Lost in Space may have had to say about anti-gravity, there's never been evidence of any such thing.)  So for objects between the size of planets and galaxies, gravity always wins unless there is some other force opposing it.

This, in fact, is how stars work; the pull of gravity from their mass causes the matter to collapse inward, heating them up until the fusion of hydrogen starts in the core.  This generates heat and radiation pressure, a balancing force keeping the star in equilibrium.  Once the fuel runs out, though, and that outward force diminishes, gravitational collapse resumes -- and the result is a white dwarf, a neutron star, or a black hole, depending on how big the star is.

All of this is just a long-winded way of saying that if you've got a mass big enough to form something on the order of a planet or star, it tends to fall inward and compress until some other force stops it.  That's why the insides of planets and stars are denser than the outsides.

Well, that's how we thought it worked.

The latest wrench in the mechanism came from the discovery of a planet called WASP-193b orbiting a Sun-like star about 1,200 light years away.  At first glance, WASP-193b looks like a gas giant; its diameter is fifty percent larger than Jupiter's.  So far, nothing that odd; exoplanet studies have found lots of gas giants out there.

But... the mass of WASP-193b is only one-seventh that of Jupiter, giving it the overall density of cotton candy.

So I guess in a sense it is a gas giant, but not as we know it, Jim.  At an average density of 0.059 grams per cubic centimeter, WASP-193b would float on water if you could find an ocean big enough.  Plus, there's the problem of what is keeping it from collapsing.  A mass one-seventh that of Jupiter is still an impressive amount of matter; its own gravity should pull it together, shrinking the volume and raising the density to something like that of the planets in our own Solar System.  So there must be something, some force that's pushing all this gas outward, keeping it... fluffy.  For want of a better word.
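As a quick sanity check on that figure, here's a minimal back-of-the-envelope sketch in Python using only the rounded numbers quoted above (mass about one-seventh of Jupiter's, radius about fifty percent larger); it's an estimate from rounded inputs, not the actual calculation from the paper.

# Rough density estimate for WASP-193b from the rounded figures above.
# Jupiter's bulk density is about 1.33 g/cm^3; bulk density scales as
# mass divided by the cube of the radius.

JUPITER_DENSITY = 1.33  # g/cm^3

mass_ratio = 1 / 7      # approximate mass relative to Jupiter
radius_ratio = 1.5      # approximate radius relative to Jupiter

density = JUPITER_DENSITY * mass_ratio / radius_ratio**3
print(f"Estimated density: {density:.3f} g/cm^3")
# prints about 0.056 g/cm^3 -- right in the neighborhood of the
# reported 0.059 g/cm^3, and far below water's 1.0 g/cm^3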

But what that force might be is still unknown.

"The planet is so light that it's difficult to think of an analogous, solid-state material," said Julien de Wit of MIT, who co-authored the study, in an interview with ScienceDaily.

[Image licensed under the Creative Commons NOIRLab/NSF/AURA/J. da Silva/Spaceengine/M. Zamani, Artist impression of ultra fluffy gas giant planet orbiting a red dwarf star, CC BY 4.0]

"WASP-193b is the second least dense planet discovered to date, after Kepler-51d, which is much smaller," said Khalid Barkaoui, of the Université de Liège's EXOTIC Laboratory and first author of the paper, which was published in Nature Astronomy last week.  "Its extremely low density makes it a real anomaly among the more than five thousand exoplanets discovered to date.  This extremely-low-density cannot be reproduced by standard models of irradiated gas giants, even under the unrealistic assumption of a coreless structure."

In short, the astrophysicists still don't know what's going on.  Twelve hundred light years from here is what amounts to a planet-sized blob of cotton candy orbiting a Sun-like star.  I'm sure that like the disappearing star from my post two days ago, the theorists will be all over this trying to explain how it could possibly happen, but thus far all we have is a puzzle -- a massive cloud of matter that is somehow managing to defy gravity.

As Shakespeare famously observed, there apparently are more things in heaven and earth than are dreamt of in our philosophy.

****************************************



 

Friday, May 24, 2024

Raw deal

A friend of mine, a veterinarian in Scotland, has proven to be a wonderful source of topics for Skeptophilia, mostly on health-related issues.  Her skeptical, evidence-based approach has given her a keen eye for nonsense -- and man, in this field, there's a lot of nonsense to choose from.

Her latest contribution was so over-the-top at first I thought it was a parody.  Sadly, it's not.  So, dear readers, allow me to introduce you to:

The Raw Meat Carnivore Diet.

Once I ruled this out as an example of Poe's Law, my next guess was that it was the creation of someone like Andrew Tate to prove to us once and for all that he's the alpha-est alpha that ever alpha-ed, but again, this seems not to be the case.  Apparently, this is being seriously suggested as a healthy way to eat.  And it's exactly what it sounds like; on this diet, you're to eat only raw meat from ruminants (beef, bison, lamb, elk, etc.), salt, and water.

[Image licensed under the Creative Commons Jellaluna, Raw beef steak, 2011, CC BY 2.0]

At the risk of stating what is (I devoutly hope) the obvious, this is a really really REALLY bad idea.  Cooking your food remains the easiest and best way to sterilize it, killing pathogens like E. coli, Salmonella, Shigella, Campylobacter, and Staphylococcus aureus, as well as other special offers like various parasitic worms I'd prefer not to even think about.  The writer of the article, one Liam McAuliffe, assures us that the acidity in our stomach is perfectly capable of killing all of the above pathogens -- which leads to the question of why, then, anyone ever becomes ill from them.

Then there's a passage about an experiment back in the 1930s showing that cats fed a raw meat diet were generally healthier, which may be true, but ignores the fact that cats are damn close to obligate carnivores, and we're not.  To convince yourself that cats and humans have evolved to thrive on different diets, all you have to do is look at the teeth.  Cats have what are called carnassial teeth: narrow, with sharp shearing edges, designed to cut meat up into chunks.  Our molars are flat, with cusps, typical of -- you guessed it -- an omnivore.  Citing the cat experiment as a reason we should all eat raw meat is a little like observing that cows thrive when allowed to graze in verdant fields, and deciding that henceforth humans should eat nothing but grass.

This brings up something else that Mr. McAuliffe conveniently neglects to mention; to have our digestive systems function properly, humans (and other omnivores) need to have a good bit of plant-derived cellulose in our diets -- what dietitians call "roughage" or "fiber."  Without it, our intestines clog up like a bad drain.  Eliminating all the vegetables from your diet is a good way to end up with terminal constipation.

What a way to go.  Or not go, as the case may be.

Then, there's a bit about how cooking meat reduces the amount of nutrients it contains -- specifically the B vitamins thiamine, riboflavin, and niacin.  Once again, this may well be true; but even if it is, the next question is, how many of us are deficient enough in these nutrients that the loss from cooking is actually a problem?  Let me put it this way: how many people do you know who have had beriberi (thiamine deficiency) or pellagra (niacin deficiency)?  (Riboflavin deficiency is rare enough that few people have even heard its medical name, ariboflavinosis.)  The fact is, if you're eating a normal diet, you are almost certainly getting more of these vitamins than you need, and the small amount of loss from cooking your T-bone steak is far offset by the benefit of not dying from an E. coli infection.

Not to beat the point unto death, but McAuliffe's contention -- that we are, in his words, "hypercarnivorous apex predators" -- is nonsense.  Our closest relatives, chimps and bonobos, are thoroughgoing omnivores, who will certainly eat meat when they can get it but also love fruit, and will chow down on starch-rich roots and stems without any apparent hesitation.  What's optimal for human health, as has been demonstrated experimentally over and over, is a varied diet including meat (or an equivalent protein source), vegetables, and fruits -- just like our jungle-dwelling cousins.

So.  Yeah.  Go easy on the moose tartare.  I'm of the opinion that a steak with a glass of fine red wine is a nice treat, but let's avoid eating it raw, okay?

****************************************



Thursday, May 23, 2024

Vanishing act

In Madeleine L'Engle's seminal young-adult fantasy novel A Wind in the Door, there's something that is making the stars go out.

Not just stop shining, but disappear entirely.  Here's the scene where the protagonist, Meg Murry, first witnesses it happening:
The warm rose and lavender of sunset faded, dimmed, was extinguished.  The sky was drenched with green at the horizon, muting upwards into a deep, purply blue through which stars began to appear in totally unfamiliar constellations.

Meg asked, "Where are we?"

"Never mind where.  Watch."

She stood beside him, looking at the brilliance of the stars.  Then came a sound, a violent, silent, electrical report, which made her press her hands in pain against her ears.  Across the sky, where the stars were clustered as thickly as in the Milky Way, a crack shivered, slivered, became a line of nothingness.

Within that crack, every star that had been there only a moment ago winked out of existence.
A central point in the story is that according to the laws of physics, this isn't supposed to happen.  Stars don't just vanish.  When they end their lives, they do so in an obvious and violent fashion -- even small-mass stars like the Sun swell into red giants and eventually blow off their outer atmospheres, creating a planetary nebula and leaving behind a white dwarf.

The Cat's Eye Nebula [Image is in the Public Domain courtesy of NASA/JPL and the ESO]

Larger stars end their lives even more dramatically, as supernovas which lead to the formation of a neutron star or a black hole depending on how much matter is left over once the star blows up.

Well, that's what we thought always happened.

A study out of the University of Copenhagen has found that like in A Wind in the Door, sometimes stars simply... vanish.  A team of astrophysicists has found that instead of the usual progression of Main Sequence > Giant or Supergiant > BOOM! > White Dwarf, Neutron Star, or Black Hole, there are stars that undergo what the astrophysicists are (accurately if uncreatively) calling "complete collapse."  In a complete collapse, the gravitational pull is so high that even considering the power of a supernova, there's just not enough energy available for the outer atmosphere to achieve escape velocity.  So instead of exploding, it just kind of goes...

... pfft.

Unlike what Meg Murry witnessed, though, the matter that formed those stars is still there somewhere; the Law of Conservation of Matter and Energy is strictly enforced in all jurisdictions.  The star that was the focus of the study, VFTS 243, is part of a binary system -- and its companion star continued in its original orbit around their mutual center of mass without so much as a flutter, so the mass of its now-invisible partner is still there.  But the expected cataclysmic blast that usually precedes black hole formation never happened.

"We believe that the core of a star can collapse under its own weight, as happens to massive stars in the final phase of their lives," said Alejandro Vigna-Gómez, who co-authored the study.  "But instead of the contraction culminating into a bright supernova explosion that would outshine its own galaxy, expected for stars more than eight times as massive as the Sun, the collapse continues until the star becomes a black hole.  Were one to stand gazing up at a visible star going through a total collapse, it might, just at the right time, be like watching a star suddenly extinguish and disappear from the heavens.  The collapse is so complete that no explosion occurs, nothing escapes and one wouldn't see any bright supernova in the night sky.  Astronomers have actually observed the sudden disappearance of brightly shining stars in recent times.  We cannot be sure of a connection, but the results we have obtained from analyzing VFTS 243 has brought us much closer to a credible explanation."

You can see why I was immediately reminded of the scene in L'Engle's book.  And while I'm sure the answer isn't evil beings called Echthroi who are trying to extinguish all the light in the universe, the actual phenomenon is still a little on the unsettling side.

Once again showing that we are very far from understanding everything there is out there.  This sort of vanishing act has been high on the list of Things That Aren't Supposed To Happen.  It'll be interesting to see what the theorists propose when they've had a shot at analyzing the situation, and if they can come up with some sort of factor that determines whether a massive star detonates -- or simply disappears.

****************************************



Wednesday, May 22, 2024

Hallucinations

If yesterday's post -- about creating pseudo-interactive online avatars for dead people -- didn't make you question where our use of artificial intelligence is heading, today we have a study out of Purdue University, which found that when ChatGPT was used to answer programming and coding questions, roughly half of its answers contained incorrect information -- and the people evaluating those answers failed to notice the errors 39% of the time.

The problem of an AI system basically just making shit up is called a "hallucination," and it's proven to be extremely difficult to eradicate.  This is at least partly because the answers are still generated using real data, so they can sound plausible; it's the software version of a student who only paid attention half the time and then has to take a test, and answers the questions by taking whatever vocabulary words he happens to remember and gluing them together with bullshit.  Google's Bard chatbot, for example, claimed that the James Webb Space Telescope had captured the first photograph of a planet outside the Solar System (a believable lie, but it didn't).  Meta's AI Galactica was asked to draft a paper on the software for creating avatars, and cited a fictitious paper by a real author who works in the field.  Data scientist Teresa Kubacka was testing ChatGPT and decided to throw in a reference to a fictional device -- the "cycloidal inverted electromagnon" -- just to see what the AI would do with it, and it came up with a description of the thing so detailed (with dozens of citations) that Kubacka found herself compelled to check and see if she'd by accident used the name of something obscure but real.

It gets worse than that.  A study of AI-powered mushroom-identification software found that it only got the answer right about fifty percent of the time -- and, frighteningly, provided cooking instructions when presented with a photograph of a deadly Amanita mushroom.  Fall for that little "hallucination" and three days later at your autopsy they'll have to pour your liver out of your abdomen.  Maybe the AI was trained on Terry Pratchett's line that "All mushrooms are edible.  Some are only edible once."

[Image licensed under the Creative Commons Marketcomlabo, Image-chatgpt, CC BY-SA 4.0]

Apparently, in inventing AI, we've accidentally imbued it with the very human capacity for lying.

I have to admit that when the first AI tools became widely available, it was very tempting to play with them -- especially the photo-modification software of the "see what you'd look like as a Tolkien Elf" type.  Better sense prevailed, so alas, I'll never find out how handsome Gordofindel is.  (A pity, because human Gordon could definitely use an upgrade.)  Here, of course, the problem isn't veracity; the problem is that the model is trained on artwork and photography that is (not to put too fine a point on it) stolen.  There have been AI-generated works of "art" that contained the still-legible signature of the artist whose pieces were used to train the software -- and of course, neither that artist nor the millions of others whose images were scraped from the internet received a penny's worth of compensation for their time, effort, and skill.

It doesn't end there.  Recently actress Scarlett Johansson announced that she had to bring in legal counsel to get Sam Altman, CEO of OpenAI, to discontinue the use of a synthesized voice so similar to hers that it fooled her family and friends.  Here's her statement:


Fortunately for Ms. Johansson, she's got the resources to take on Altman, but most creatives simply don't.  If we even find out that our work has been lifted, we really don't have any recourse to fight the AI techbros' claims that it's "fair use."

The problem is, the system is set up so that it's already damn near impossible for writers, artists, and musicians to make a living.  I've got over twenty books in print through two different publishers, plus a handful that are self-published, and I have never made more than five hundred dollars a year from them.  My wife, Carol Bloomgarden, is an astonishingly talented visual artist who shows all over the northeastern United States, and in any given show it's a good day when she sells enough to pay for her booth fees, lodging, travel expenses, and food.

So throw a bunch of AI-insta-generated pretty-looking crap into the mix, and what happens -- especially when the "artist" can sell it for one-tenth of the price and still turn a profit? 

I'll end with a plea I've made before; until lawmakers can put the brakes on AI to protect safety, security, and intellectual property rights, we all need to stop using it.  Period.  This is not out of any fundamental anti-tech Luddite-ism; it's simply from the absolute certainty that the techbros are not going to police themselves, not when there's a profit to be made, and the only leverage we have is our own use of the technology.  So stop posting and sharing AI-generated photographs.  I don't care how "beautiful" or "precious" they are.  (And if you don't know the source of an image with enough certainty to cite an actual artist or photographer's name or Creative Commons handle, don't share it.  It's that simple.)

As a friend of mine put it, "As usual, it's not the technology that's the problem, it's the users."  Which is true enough; there are myriad potentially wonderful uses for AI, especially once its developers figure out how to debug it.  But at the moment, it's being promoted by people who have zero regard for the rights of human creatives, and are willing to steal their writing, art, music, and even their voices without batting an eyelash.  They are shrugging their shoulders at their systems "hallucinating" incorrect information, including information that could potentially harm or kill you.

So just... stop.  Ultimately, we are in control here, but only if we choose to exert the power we have.

Otherwise, the tech companies will continue to stomp on the accelerator, authenticity, fairness, and truth be damned.

****************************************



Tuesday, May 21, 2024

Memento mori

In this week's episode of the current season of Doctor Who, entitled "Boom," the body of a soldier killed in battle is converted into a rather creepy-looking cylinder that has the capacity for producing a moving, speaking hologram of the dead man, which has enough of his memory and personality imprinted on it that his friends and family can interact with it as if he were still alive.


I suspect I'm not alone in having found this scene rather disturbing, especially when his daughter has a chat with the hologram and seems completely unperturbed that her dad had just been brutally killed.  

Lest you think this is just another wild trope dreamed up by Steven Moffat and Russell T. Davies, there are already (at least) two companies that do exactly this -- Silicon Intelligence and Super Brain.  Both use generative AI models that scour your photos, videos, and written communications to produce a convincing online version of you, which can then interact with your family and friends in (presumably) a very similar fashion to how you did when you were alive.

I'm not the only one having an "okay, just hold on a minute" reaction to this.  Ethicists Katarzyna Nowaczyk-Basińska and Tomasz Hollanek, both of Cambridge University, considered the implications of "griefbots" in a paper published last week in the journal Philosophy & Technology, and were interviewed this week in Science News; they raise some serious objections to the practice.

The stance of the researchers is that at the very least there should be some kind of safeguard to protect the young from accessing this technology (since, just as in Doctor Who, there's the concern that children wouldn't be able to recognize that they weren't talking to their actual loved one, with serious psychological repercussions), and that it be clear to all users that they're communicating with an AI.  But they bring up a problem I hadn't even thought of; what's to stop companies from monetizing griefbots by including canny advertisements for paying sponsors?  "Our concern," said Nowaczyk-Basińska, "is that griefbots might become a new space for a very sneaky product placement, encroaching upon the dignity of the deceased and disrespecting their memory."

Ah, capitalism.  There isn't anything so sacred that it can't be hijacked to make money.

But as far as griefbots in general go, my sense is that the entire thing crosses some kind of ethical line.  I'm not entirely sure why, other than the "it just ain't right" arguments that devolve pretty quickly into the naturalistic fallacy.  Especially given my atheism, and my hunch that after I die there'll be nothing left of my consciousness, why would I care if my wife made an interactive computer model of me to talk to?  If it gives her solace, what's the harm?

I think one consideration is that by doing so, we're not really cheating death.  To put it bluntly, it's deriving comfort from a lie.  The virtual-reality model inside the computer isn't me, any more than a photograph or a video clip is.  But suppose we really go off the deep end, here, and consider what it would be like if someone could actually emulate the human brain in a machine -- and not just a random brain, but yours?

There's at least a theoretical possibility that you could have a computerized personality that would be completely authentic, with your thoughts, memories, sense of humor, and emotions.  (The current ones are a long way from that -- but even so, they're still scarily convincing.)  Notwithstanding my opinions on the topic of religion and the existence of the soul, there's a part of me that simply rebels at this idea.  Such a creation might look and act like me, but it wouldn't be me.  It might be a convincing facsimile, but that's about it.

But what about the Turing test?  Devised by Alan Turing, the idea of the Turing test for artificial intelligence is that because we don't have direct access to what any other sentient being is experiencing -- each of us is locked inside his/her own skull -- the only way to evaluate whether something is intelligent is the way it acts.  The sensory experience of the brain is a black box.  So if scientists made a Virtual Gordon, who acted on the computer screen in a completely authentic Gordonesque manner, would it not only be intelligent and alive, but... me?

In that way, some form of you might achieve immortality, as long as there was a computer there to host you.

This is moving into some seriously sketchy territory for most of us.  It's not that I'm eager to die; I tend to agree with my dad, who when he was asked what he wanted written on his gravestone, responded, "He's not here yet."  But as hard as it is to lose someone you love, this strikes me as a cheat, a way to deny reality, closing your eyes to part of what it means to be human.

So when I die, let me go.  Give me a Viking funeral -- put me on my canoe, set it on fire, and launch it out into the ocean.  Then my friends and family need to throw a huge party in my honor, with lots of music and dancing and good red wine and drunken debauchery.  And I think I want my epitaph to be the one I created for one of my fictional characters, also a science nerd and a staunch atheist: "Onward into the next great mystery."

For me, that will be enough.

****************************************



Monday, May 20, 2024

Rules for miracles

In today's News of the Surreal, we have: the Vatican is tightening the rules on what it's willing to call divine supernatural phenomena.

It's tricky business, isn't it?  In science, there's a well-established protocol for evaluating the strength of a claim, involving stuff like evidence and logic (and, if possible, a statistical analysis of the data).  But how do you do that in religion, where the only real rule is God does whatever the hell he wants?  Most of the claims of miracles are, by definition, one-offs; after all, if the same sort of thing kept happening over and over, it wouldn't be a miracle.  It's not like when Moses saw the Burning Bush, he was able to say, "Okay, let's compare this to other times we've had booming voices speak out of a flaming shrubbery, and see if this is a real phenomenon or if maybe I shouldn't have eaten those suspicious-looking mushrooms at dinner." 

So now, according to the new rules, bishops are being given the unenviable task of deciding whether a given apparition or miraculous healing or whatnot is real.  The first hurdle, apparently, is to determine whether it is an outright lie concocted to make money -- and the problem is that these sorts of claims are ridiculously lucrative, so such scams abound.  The apparition of the Virgin Mary in the little village of Medjugorje, Bosnia and Herzegovina, wherein six young villagers were supposedly blessed for their faith and told such surprising revelations as "don't have an abortion" and "same-sex marriage is naughty in God's sight," led to it becoming the third most popular pilgrimage site in Europe (after Fátima in Portugal and Lourdes in France).  Over a million people visit the shrine every year, bringing in huge amounts of revenue; in 2019, sixty thousand young Catholics from all over the world descended on the village, accompanied by fourteen archbishops and bishops and over seven hundred priests -- despite the Vatican making the rather equivocal statement that such pilgrimages were okay "as long as there is no assumption the [apparitions of Mary] are confirmed to have a supernatural origin."

One of the many gift shops in Medjugorje [Image licensed under the Creative Commons Sean MacEntee, Virgin Mary Statues (5778409684), CC BY 2.0]

Don't try to tell me that religion isn't big business.

Once the bishops determine that any given claim isn't simply fraudulent, they issue a nihil obstat ("there is no obstacle") decree, which is the religious version of "Whatever floats your boat, dude."  Nihil obstat effectively says, "Okay, fine, we can't stop you from worshiping this thing, but we're not saying it's real, either."  In the new guidelines, bishops are warned against going from there to stating outright that the phenomenon is divine in origin; issued prematurely, the Vatican says, jumping from nihil obstat to "this is a message from God" can lead to "damage to the unity of the Church" and could "cause scandals and undermine the credibility of the Church."

Well, yeah, that's the problem, isn't it?  There is no good evidence-based litmus test for differentiating between a "real" supernatural event (whatever that means) and a mere delusion; if there was, the event wouldn't be supernatural, it would simply be natural.  So we're still down to the sketchy grounds of having a bishop say, "I prayed to God and God said it was so," which then hinges on whether the bishop himself is telling the truth.

Because I can't think of any times bishops have been involved in hinky stuff, can you?

So the new rules don't really solve anything, just kick the can down the road to give the impression that there are now hard-and-fast rules for determining the veracity of something that by definition doesn't obey the laws of nature.  The BBC article where I learned about this story (linked above) ends with what has to be my favorite line I've read in a news source in months, to wit: "And so the Vatican, an institution peppered with mysticism, and which still communicates via smoke signals when electing a new pope, will be hoping its new rules can regulate claims of the supernatural."

Heh.  Yeah.  The Catholic Church, of course, is kind of in an awkward position, because they do more or less accept science most of the time, as long as the science doesn't fly in the face of the status quo.  The Big Bang Model was actually the brainchild of an astronomer who was also an ordained priest (Monseigneur Georges Lemaître) and the Vatican stated outright that the Big Bang was completely compatible with Catholic theology in 1951.  They officially pardoned Galileo in 1992 (better late than never), and have at least refused to condemn biological evolution.  But the fact remains that -- as the writer for the BBC News stated -- the entire institution is rooted in mysticism, which is a deeply unscientific approach to understanding the world.  I suppose I'd prefer this sort of waffling to (say) the views of the fundamentalists, who pretty well reject science in toto, but it still strikes me that trying to play it both ways is not gonna turn out to be a winning strategy.  Once you accept any kind of evidence-based criteria for establishing the truth, you're solidly in science's wheelhouse, and -- despite the "non-overlapping magisteria" stance of people like evolutionary biologist Stephen Jay Gould -- the result for religious claims has almost always been a solid thumbs-down.

In any case, there you have it.  New rules for miracles.  I guess it's a step up from the bumper sticker I saw a while back that said, "The Bible said it, I believe it, and that settles it," but given the other options, I'm still going with the laws of scientific induction any day of the week.

****************************************