Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, September 6, 2025

The lure of the unknown

Carl Sagan once said, "Somewhere, something incredible is waiting to be known."

I think that's one of the main things that attracted me to science as a child: its capacity to astonish.  I still remember reading kids' books on various scientific topics and being astounded to find out things like:

  • dinosaurs, far from being the "failed experiment" they're often characterized as, "ruled the Earth" (as it were) for about five hundred times longer than humans have even existed.  (I only much later found out that dinosaurs still exist; we call 'em birds.)
  • when supergiant stars end their lives, they detonate in a colossal explosion called a supernova that gives off in a few seconds as much energy as the Sun will emit in its entire lifetime.  What's left behind is a neutron star or, if the star was massive enough, a black hole -- an object whose gravitational pull is so powerful even light can't escape.
  • bats can hear in a frequency range far above ours, and are so sensitive to their own vocalizations that they can hear the echoes of their own voices and distinguish them from the cacophony their friends and relatives are making.
  • when an object moves, its vertical and horizontal velocities are completely independent of each other.  If you shoot a gun horizontally over a level surface, and simultaneously drop a bullet from the gun's muzzle height, the fired bullet and the dropped bullet will hit the ground at the same time (ignoring air resistance -- see the sketch below).
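
If that seems hard to believe, here's a minimal sketch of the reasoning in Python (the muzzle height and speed are made-up values, and air resistance is ignored):

```python
# Fall time depends only on height and gravity -- not on horizontal speed.
# From h = (1/2) g t^2, the time to fall a height h is t = sqrt(2h / g).

g = 9.81       # gravitational acceleration (m/s^2)
height = 1.5   # muzzle height above level ground (m) -- an invented value

fall_time = (2 * height / g) ** 0.5

for vx in (0.0, 400.0):              # dropped bullet vs. fired bullet (m/s)
    downrange = vx * fall_time       # horizontal distance covered while falling
    print(f"horizontal speed {vx:5.0f} m/s -> lands at t = {fall_time:.3f} s, "
          f"{downrange:6.1f} m downrange")
```

Both bullets land at the same instant; the only difference is how far downrange they've traveled.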

And that's all stuff we've known for years, because (not to put too fine a point on it) I'm so old that when I was a kid, the Dead Sea was just sick.  In the intervening fifty years since I found out all of the above (and lots of other similar tidbits), scientists have discovered tons of new, and equally amazing, information about our universe and how it works.  We've even found out that some of what we thought we understood was wrong, or at least incomplete; a good example is photoperiodism, the ability of flowering plants to keep track of day length and thus flower at the right time of year.  It was initially thought that they had a system that worked a bit like a chemical teeter-totter.  A protein called phytochrome has a "dark form" and a "light form" -- the dark form changes to the light form during the day, and the reverse happens at night, so the relative amounts of the two might allow plants to keep track of day length.  But it turns out that all it takes is a flash of red light in the middle of the night to completely upend the plant's biological clock -- so whatever is going on is more complex than we'd understood.
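
Here's a toy version of that teeter-totter logic, showing why a midnight flash of red light scrambles the measurement (a sketch only -- the decay rate is invented, and real phytochrome kinetics, which are far messier, are part of why the simple model failed):

```python
import math

# Toy "teeter-totter" model of phytochrome.  By dusk, daylight has pushed
# essentially the whole pool into the light form (Pfr); in the dark, Pfr
# reverts exponentially to the dark form (Pr).  The Pfr fraction left at
# dawn could then, in principle, encode how long the night was.

NIGHT_HOURS = 10.0
DECAY_RATE = 0.3   # per-hour dark reversion of Pfr -> Pr (made-up value)

def pfr_at_dawn(flash_hour=None):
    """Pfr fraction at dawn; a red flash resets the pool to all-Pfr."""
    dark_hours = NIGHT_HOURS if flash_hour is None else NIGHT_HOURS - flash_hour
    return math.exp(-DECAY_RATE * dark_hours)

print(f"undisturbed 10-hour night:     Pfr at dawn = {pfr_at_dawn():.3f}")
print(f"red flash 5 hours into night:  Pfr at dawn = {pfr_at_dawn(5.0):.3f}")
# The flash makes a 10-hour night read like a 5-hour one -- so a plant
# relying on this mechanism alone would badly misjudge the season.
```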

This sudden sense of "wow, we don't know as much as we thought!", far from being upsetting, is positively thrilling to scientists.  Scientists are some of the only people in the world who love saying, "I don't understand."  Mostly because they always follow it up with "... yet."  Take, for example, the discovery announced this week by the National Radio Astronomy Observatory of a huge cloud of gas and dust in our own Milky Way Galaxy that prior to this we hadn't even known was there.

It's been named the Midpoint Cloud, and it's about two hundred light years across.  It sits along one of the dust lanes spiraling in toward Sagittarius A*, the supermassive black hole at the galaxy's center, and seems to act like a giant funnel drawing material inward toward the accretion disk.

"One of the big discoveries of the paper was the giant molecular cloud," said Natalie Butterfield, lead author of the paper on the phenomenon, which appeared this week in The Astrophysical Journal.  "No one had any idea this cloud existed until we looked at this location in the sky and found the dense gas.  Through measurements of the size, mass, and density, we confirmed this was a giant molecular cloud.  These dust lanes are like hidden rivers of gas and dust that are carrying material into the center of our galaxy.  The Midpoint Cloud is a place where material from the galaxy's disk is transitioning into the more extreme environment of the galactic center and provides a unique opportunity to study the initial gas conditions before accumulating in the center of our galaxy."

[Image credit: NSF/AUI/NSF NRAO/P.Vosteen]

Among the amazing features of this discovery is that it contains a maser -- an intense, focused microwave source, in this case thought to be caused by compression and turbulence in the ammonia-rich gas of the cloud.  Additionally, there are several sites that seem to be undergoing collapse; we might be witnessing the birth of new stars.

What's astonishing to me is that this cloud is (1) humongous, (2) in our own galaxy, and (3) glowing like crazy in the microwave region of the spectrum, yet no one had any idea it was there until now.  How much more are we overlooking because we haven't tuned into the right frequency or turned our telescopes to the right coordinates?

The universe is a big place.  And, I suspect, it's absolutely full of surprises.  Hell, there are enough surprises lying in wait right here on the Earth; to give just one example, I've heard it said that we know more about the near side of the Moon than we do about the deep oceans.

How could anyone not find science fascinating?

This is also why I've never understood how people can turn science's progress into a criticism -- I used to hear it from students phrased as, "Why do we have to learn all this stuff when it could all be proven wrong tomorrow?"  Far from being a downside, science's capacity to update and self-correct is its most powerful strength.  How is it somehow better to cling to your previous understanding in the face of evidence to the contrary?

That, I don't think I'll ever come close to comprehending.

I'll end with another quote from a scientific luminary -- the brilliant physicist Richard Feynman -- that I think sums it all up succinctly: "I'd much rather have questions that cannot be answered than answers that cannot be questioned."

****************************************


Tuesday, July 22, 2025

Weathering the storm

Something that really grinds my gears is how quick people can be to trumpet their own ignorance, seemingly with pride.

I recall being in a school board budget meeting some years ago, and the science department line items were being discussed.  One of the proposed equipment purchases that came up was an electronic weather station for the Earth Science classroom.  And a local attending the meeting said, loud enough for all to hear, "Why the hell do they need a weather station?  If I want to know what the weather is, I stick my head out the window!  Hurr hurr hurr hurr durr!"

Several of his friends joined in the laughter, while I -- and the rest of the science faculty in attendance -- sat there quietly attempting to bring our blood pressures back down to non-lethal levels.

What astonishes me about this idiotic comment is twofold: (1) my aforementioned bafflement about why he was so quick to demonstrate to everyone at the meeting that he was ignorant; and (2) what it said about his own level of curiosity.  When I don't know something, my first thought is not to ridicule but to ask questions.  If I thought an electronic weather station might be an odd or frivolous purchase, I would have asked what exactly the thing did, and how it was better than "sticking my head out the window."  The Earth Science teacher -- who was in attendance that evening -- could then have explained it to me.

And afterward, miracle of miracles, I might have learned something.

All sciences are to some extent prone to this "I'm ignorant and I'm proud of it" attitude by laypeople, but meteorology may be the worst.  How many times have you heard people say things like, "A fifty percent chance of rain?  How many jobs can you think of where you could get as good results by flipping a coin, and still get paid?"  It took me a fifteen-second Google search to find the weather.gov page explaining that the "probability of precipitation" percentages mean something a great deal more specific than the forecasters throwing their hands in the air and saying, "Might happen, might not."  A fifty-percent chance of rain means that in the forecast area, any given point has a fifty percent chance of receiving at least 0.01" of rain; from this it's obvious that if there's a fifty percent chance over a large geographical area, the likelihood of someone receiving rain in the region is much greater than fifty percent.  (These middling percentages are far more common in the northern hemisphere's summer, when much of the rain falls in the form of sporadic local thunderstorms that are extremely hard to predict precisely.  If you live in the US Midwest or anywhere in the eastern half of North America, you can probably remember times when you got rain and your friends five miles away didn't, or vice versa.)
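
To put numbers on that last point, here's a quick sketch (hypothetical values; it treats the forecast area as a handful of storm cells behaving independently, which real, correlated weather certainly doesn't):

```python
# If any given point has a 50% chance of rain, the odds that rain falls
# SOMEWHERE in the region are much higher.  This toy model treats the
# forecast area as n storm cells that behave independently -- a big
# simplification, but it makes the arithmetic plain.

p_point = 0.5   # probability of measurable rain (>= 0.01") at any one point

for n_cells in (1, 2, 5, 10):
    p_somewhere = 1 - (1 - p_point) ** n_cells   # P(at least one cell rains)
    print(f"{n_cells:2d} independent cells -> "
          f"P(rain somewhere in the region) = {p_somewhere:.1%}")
```

With ten quasi-independent cells, "fifty percent chance of rain" means someone in the region almost certainly gets wet -- even though any single backyard really is close to a coin flip.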

[Image licensed under the Creative Commons Walter Baxter, The Milestone weather forecasting stone - geograph.org.uk - 1708774, CC BY-SA 2.0]

The problem is, meteorology is complex.  Computer models of the atmosphere rely on estimates of conditions (barometric pressure, temperature, humidity, wind speed both vertical and horizontal, and particulate content, to name a few) along with mathematical equations describing how those quantities vary over time and influence each other.  The results are never completely accurate, and extending them forward in time -- long-range forecasting -- is still nearly impossible except in the broadest-brush sense.  Add to that the fact that there are weather phenomena which are still largely unexplained; one of the weirdest is the Catatumbo lightning, which occurs near where the Catatumbo River flows into Lake Maracaibo in Venezuela.  That one small region gets significant lightning 140 to 160 days a year, nine hours per day, with flashes coming sixteen to forty times per minute.  The area sees the highest density of lightning in the world, at 250 strikes per square kilometer per year -- and no one knows why.

[Image licensed under the Creative Commons Fernando Flores, Catatumbo Lightning (141677107), CC BY-SA 3.0]
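
Setting aside the outright mysteries, there's also a hard mathematical ceiling on forecasting: sensitivity to initial conditions.  Here's a minimal sketch using the Lorenz system -- the classic three-variable caricature of atmospheric convection, not a real forecast model -- showing two runs that start almost identically and end up bearing no resemblance to each other:

```python
# The Lorenz system, famous for demonstrating why tiny errors in initial
# conditions wreck long-range forecasts.  (Illustrative only -- real
# forecast models integrate millions of variables.)

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Advance the Lorenz equations one step with simple Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)            # "the atmosphere"
b = (1.0 + 1e-9, 1.0, 1.0)     # same, measured with a part-per-billion error

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}: |x_a - x_b| = {abs(a[0] - b[0]):.2e}")
```

The initially negligible difference grows exponentially until the two "forecasts" diverge completely -- which is why more computing power buys a few extra days of useful forecast, not months.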

Despite the inaccuracies and the gaps in our understanding, we are far ahead of the idiotic "they're just flipping a coin" caricature the non-science types would have you believe.  The deadliest North American hurricane on record, the 1900 Galveston storm that took an estimated eight thousand lives, was as devastating as it was precisely because back then, forecasting was so rudimentary that almost no one knew it was coming.  Today we usually have days, sometimes weeks, of warning before major weather events -- and yet, if the prediction is off by a few hours or the landfall by ten miles, people still complain that "the meteorologists are just making guesses."

What's grimly ironic is that we might get our chance to find out what it's like to go back to a United States where we actually don't have accurate weather forecasting, because Trump and his cronies have cut the National Weather Service and the National Oceanic and Atmospheric Administration to the bone.  The motivation was, I suspect, largely because of the Right's pro-fossil-fuels, anti-climate-change bias, but the result will be to hobble our ability to make precise forecasts and get people out of harm's way.  You think the central Texas floods in the first week of July were bad?

Keep in mind that the Atlantic hurricane season has just started, as has the western wildfire season.  The already understaffed NWS and NOAA offices are now running on skeleton crews, just at the point when skilled forecasters are needed the most.  My intuition is you ain't seen nothin' yet.

Oh, and don't ask FEMA to help you after the disaster hits.  That's been cut, too.  Following the Texas floods, thousands of calls from survivors to FEMA were never returned, because Homeland Security Secretary Kristi Noem was too busy cosplaying at Alligator Auschwitz to bother doing anything about the situation.  (She responded to criticism by stating that FEMA "responded to every caller swiftly and efficiently," following the Trump approach that all you have to do is lie egregiously and it automatically becomes true.)

Ignorance is nothing to be embarrassed about, but it's also nothing to be proud of.  And when people's ignorance impels them to elect ignorant ideologues as leaders, the whole thing becomes downright dangerous.  Learn some science yourself, sure; the whole fifteen-year run of Skeptophilia could probably be summed up in that sentence.

But more than that -- demand that our leaders base their decisions on facts, logic, science, and evidence, not ideology, bias, and who happens to have dumped the most money into the election campaign.  We're standing on a precipice right now, and we can't afford to be silent.

Otherwise I'm very much afraid we'll find out all too quickly which way the wind is blowing.

****************************************


Wednesday, April 16, 2025

Reinventing Lysenko

Trofim Lysenko was a Soviet agrobiologist during the Stalin years whose interest in trying to improve crop yields led him into some seriously sketchy pseudoscience.  He believed in a warped version of Lamarckism -- that plants exposed to certain environmental conditions during their lives would alter their form and physiology to adjust to those conditions, and (furthermore) would pass those alterations down to subsequent generations.

He not only threw away everything Mendel and Darwin had uncovered, he disbelieved in DNA as the hereditary material.  Lysenko wrote:
An immortal hereditary substance, independent of the qualitative features attending the development of the living body, directing the mortal body, but not produced by the latter -- that is Weismann’s frankly idealist, essentially mystical conception, which he disguised as “Neo-Darwinism.”  Weismann’s conception has been fully accepted and, we might say, carried further by Mendelism-Morganism.
So basically, since there were no genes there to constrain the possibilities, humans could mold organisms in whatever way they chose.  "It is possible, with man’s intervention," Lysenko wrote, "to force any form of animal or plant to change more quickly and in a direction desirable to man.  There opens before man a broad field of activity of the greatest value to him."

Trofim Lysenko (1898-1976) [Image is in the Public Domain]

The Soviet agricultural industry was ordered to use Lysenko's theories (if I can dignify them by that name) to inform their practices.  Deeper plowing of fields, for example, was said by Lysenko to induce plants' roots to delve deeper for minerals, creating deeper-rooted plants in following years and increased crop yields.  Farmers dutifully began to plow fields to a depth of five feet, requiring enormous expenditure of time and labor.

Crop yields didn't change.  But that didn't matter; Lysenko's ideas were beloved by Stalin, as they seemed to give a scientific basis to the concept of striving by the sturdy peasant stock, thus improving their own lot.  Evidence and data took a back seat to ideology.  Lysenko was given award after award and rose to the post of Director of the Institute of Genetics in the USSR's Academy of Sciences.  Scientists who followed Lysenko's lead in making up data out of whole cloth to support the state-approved model of heredity got advancements, grants, and gifts from Stalin himself.  Scientists who pointed out that Lysenko's experiments were flawed and his data doctored or fabricated outright were purged -- by some estimates three thousand of them were fired, exiled, jailed, or executed for choosing "bourgeois science" (i.e. actual evidence-based research) over Lysenko.  His stranglehold on Soviet biological research and agricultural practice didn't cease until his retirement in 1965, by which time an entire generation of Soviet scientists had been hindered from making any progress at all.

He is directly responsible for policies that led to widespread famines during which millions starved.

Lately, George Santayana's famous comment about being doomed to repeat history we haven't learned from has been graphically illustrated over and over -- most recently by Donald Trump and the fascist, anti-science ideologues he hired to run the place while he's out golfing.
So just like in Stalin's day, we are moving toward a state-endorsed scientific party line, which non-scientists (and scientists in the pay of corporate interests or the politicians themselves) are enforcing using such sticks as censorship, funding cuts, and layoffs.  They're even calling the firings "purges;" how they don't cringe at using a word associated with the horrors of people like Stalin and Mao Tse-tung is beyond me.

Or maybe, given how proud people like Stephen Miller, Pete Hegseth, Kristi Noem, and Marco Rubio seem to be of their own cruelty, they have no problem with their viciousness being out on display for all to see.

Lysenko died nearly fifty years ago, but his propaganda-based, anti-science spirit lives on.  My hope is that because of the greater transparency and freedom of information afforded by the internet, this sort of behavior will at least not be shrouded in secrecy the way Stalin's and Lysenko's actions were.  But even if people know what's happening, they have to speak up, and demand action from the spineless members of Congress who are standing idly by while one man and his neo-fascist cronies destroy decades of vital scientific research.

It's only been three months, and the damage is already horrific.  And keep in mind Trump is, astonishingly, only one-sixteenth of the way through his term.

You do the math.

We are following the same devastating path that annihilated the USSR's position in the scientific community for a generation.  Like the Stalin regime, our nation is at the mercy of the whims of one catastrophically vain, immoral, and stupid man, who has elevated a cadre of anti-science zealots to control our science policy based not on what is right or true, but on what lines up with party propaganda.  And I fear that over the next three years the claws of partisan politics will sink so deeply into scientific research that it will, as it did in the USSR, take decades to repair the destruction.

****************************************


Tuesday, February 4, 2025

The riddle of the sun stones

When you think about it, it's unsurprising that our ancestors invented "the gods" as an explanation for anything they didn't understand.

They were constantly bombarded by stuff that was outside of the science of their time.  Diseases caused by the unseen action of either genes or microorganisms.  Weather patterns, driven by forces that even in the twenty-first century we are only beginning to understand deeply, and which controlled the all-important supply of food and water.  Earthquakes and volcanoes, whose root cause only began to come clear sixty years ago.

Back then, everything must have seemed as mysterious as it was precarious.  For most of our history, we've been at the mercy of forces we didn't understand and couldn't control, never more than one bad harvest or failed rainy season or sudden plague away from dying en masse.

No wonder they attributed it all to gods and sub-gods -- and devils and demons and witches and evil spirits.

As much as we raise an eyebrow at the superstition and seeming credulity of the ancients, it's important to recognize that they were no less intelligent, on average, than we are.  They were trying to make sense of their world with the information they had at the time, just like we do.  That we have a greater knowledge base to draw upon -- and most importantly, the scientific method as a protocol -- is why we've been more successful.  But honestly, it's no wonder that they landed on supernatural, unscientific explanations; the natural and scientific ones were out of their reach.

The reason this comes up is a recent discovery that lies at the intersection of archaeology and geology, which (as regular readers of Skeptophilia know) are two enduring fascinations for me.  Researchers excavating sites at Vasagård and Rispebjerg, on the island of Bornholm, Denmark, have uncovered hundreds of flat stone disks with intricate patterns of engraving, dating from something on the order of five thousand years ago.  Because many of the disks have designs of circles with branching radial rays extending outward, they've been nicknamed "sun stones."  Why, in around 2900 B.C.E., people were suddenly motivated to create, and then bury, hundreds of these stones has been a mystery.

Until now.

[Image credit: John Lee, Nationalmuseet, Copenhagen, Denmark]

Data from Greenland ice cores has shown a sudden spike in sulfates and in dust and ash from right around the time the sun stones were buried -- both hallmarks of a massive volcanic eruption.  The location of the volcano has yet to be determined, but what is clear is that it would have had an enormous effect on the climate.  "It was a major eruption of a great magnitude, comparable to the well-documented eruption of Alaska’s Okmok volcano in 43 B.C.E. that cooled the climate by about seven degrees Celsius," said study lead author Rune Iversen, of the Saxo Institute at the University of Copenhagen.  "The climate event must have been devastating for them."

The idea that the volcanic eruption in 2900 B.C.E. altered the climate worldwide got a substantial boost with the analysis of tree rings from wood in Europe and North America.  Right around the time of the sulfate spike in the Greenland ice cores, there's a series of narrow tree rings -- indicative of short growing seasons and cool temperatures.  Wherever this eruption took place, it wrought havoc with the weather, with all the consequences that has for human survival.

While the connection between the eruption and the sun stones is an inference, it's certainly a plausible one.  How else would you expect a pre-technological culture to respond to a sudden, seemingly inexplicable dimming of the sun, cooler summers and bitter winters with resultant probable crop failures, and even the onset of wildly fiery sunrises and sunsets?  It bears keeping in mind that our own usual fallback of "there must be a scientific explanation even if I don't know what it is" is a relatively recent development.

So while burying engraved rocks might seem like a strange response to a climatic change, it is understandable that the ancients looked to a supernatural solution for what must have been a mystifying natural disaster.  And we're perhaps not so very much further along, ourselves, given the way a substantial fraction of people in the United States are responding to climate change even though the models have been predicting this for decades, and the evidence is right in front of our faces.  We still have plenty of areas we don't understand, and are saddled with unavoidable cognitive biases even if we do our best to fight them.  As the eminent science historian James Burke put it, in his brilliant and provocative essay "Worlds Without End":

Science produces a cosmogony as a general structure to explain the major questions of existence.  So do the Edda and Gilgamesh epics, and the belief in Creation and the garden of Eden.  Myths provide structures which give cause-and-effect reasons for the existence of phenomena.  So does science.  Rituals use secret languages known only to the initiates who have passed ritual tests and who follow the strictest rules of procedure which are essential if the magic is to work.  Science operates in the same way.  Myths confer stability and certainty because they explain why things happen or fail to happen, as does science.  The aim of the myth is to explain existence, to provide a means of control over nature, and to give to us all comfort and a sense of place in the apparent chaos of the universe.  This is precisely the aim of science.

Science, therefore for all the reasons above, is not what it appears to be.  It is not objectively impartial, since every observation it makes of nature is impregnated with theory.  Nature is so complex, and sometimes so seemingly random, that it can only be approached with a systematic tool that presupposes certain facts about it.  Without such a pattern it would be impossible to find an answer to questions even as simple as "What am I looking at?"
****************************************

Thursday, December 12, 2024

The crossroads

I haven't exactly kept it a secret how completely, utterly fed up I am with media lately.

This goes from the miasmic depths of YouTube, Facebook, and TikTok right on up the food chain to the supposedly responsible mainstream media.  I still place a lot of the blame for Donald Trump's victory at the feet of the New York Times and their ilk; for months they ignored every babbling, incoherent statement Trump uttered, as well as the fascistic pronouncements he made during his more lucid moments, while putting on the front page headlines like "Will Kamala's Choice In Shoes Alienate Her From Voters?"

The idea of responsible journalism has, largely, been lost.  Instead we're drowning in a sea of slant and misinformation, generated by a deadly mix of rightward-tilted corporate control and a clickbait mentality that doesn't give a flying rat's ass whether the content is true or accurate as long as you keep reading or watching it.

While the political stuff is far more damaging, as a science nerd I find it's the misrepresentation of science that torques me the most.  And I saw a good example of this just yesterday, with a fascinating study out of the Max Planck Institute that appeared last week in the journal Astronomy and Astrophysics.

First, the actual research.

Using data from the x-ray telescope eROSITA, researchers mapped the "Local Hot Bubble" -- an irregularly-shaped cavity of hot, tenuous, x-ray-emitting gas in which the Solar System sits, thought to have been carved out by a series of supernovae that went off an estimated fourteen million years ago.  The bubble is expanding asymmetrically, faster perpendicular to the plane of the galaxy than parallel to it, for the simple reason that there is less matter in that direction, and therefore less resistance.

One curious observation is that there is a more-or-less cylindrical streamer of hotter gas heading off in one direction from the bubble, pointing in the general direction of the constellation Centaurus.  The nearest object in that direction is another hot region called the Gum Nebula, a supernova remnant, but it's unclear if that's a coincidence.

The Gum Nebula [Image licensed under the Creative Commons Meli Thev, Finkbeiner H-alpha Gum Nebula, CC BY-SA 4.0]

The researchers called this streamer an "interstellar tunnel" and speculated that there could be a network of these "tunnels" crisscrossing the galaxy, connecting warmer regions (such as the nebulae left from supernovae) and allowing for exchange of materials.  How the streamers maintain their cohesion, rather than simply dispersing into the colder space surrounding them, is unknown.  The idea has been around since 1974, but has had little observational support, so the new research is an intriguing vindication of a fifty-year-old hypothesis.

Okay, ready to hear the headlines I've seen about this story?

  • "Scientists Find Network of Interstellar Highways in Our Own Galaxy"
  • "A Tunnel Links Us to Other Star Systems -- But Who's Maintaining It?"
  • "Mysterious Alien Tunnel Found In Our Region of Space"
  • "An Outer Space Superhighway"
  • "Scientists Baffled -- We're At The Galactic Crossroads and No One Knows Why"

*brief pause to punch a wall*

Okay, I can place maybe one percent of the blame on the scientists for calling it a "tunnel;" a tunnel, I guess, implies a tunneler.  But look, it's called quantum tunneling, and the aliens-and-spaceships crowd managed to avoid having multiple orgasms about that.  

On the other hand, given the mountains of bullshit out there about quantum resonant energy frequencies of healing, maybe I shouldn't celebrate too quickly.

But the main problem here is the media sensationalizing the fuck out of absolutely everything.  I have no doubt that in this specific case, the whole lot of 'em knew there was nothing in the research that implied a "who" that was "maintaining" these tunnels; the scientists explicitly said there was some unexplained physics here, which was interesting but hardly earthshattering.

But "streamers of gas from a local warm region in our galaxy" isn't going to get most people to click the link, so gotta make it sound wild and weird and woo-woo.

Look, I know this story by itself isn't really a major problem, but it's a symptom of something far worse, and far deeper.  There has got to be a way to impel media to do better.  Media trust is at an all-time low; a study last month estimated it at a little over thirty percent.  And what happens in that situation is that people (1) click on stuff that sounds strange, shocking, or exciting, and (2) for more serious news, gravitate toward sources that reinforce what they already believed.  The result is that the actual facts matter less than presenting people with attractive nonsense, and media consumers never find out if what they believe is simply wrong.

But saying "just don't read the news, because they're all lying" isn't the solution, either.  The likelihood of voting for Trump was strongly correlated with having low exposure to accurate information about current events, something that was exacerbated by his constant message of "everyone is lying to you except for me."

We are at a crossroads, just not the kind the headline-writer was talking about.

Honestly, I don't know that there is an answer, not in the current situation, where we no longer have a Fairness Doctrine to force broadcasters to be even-handed.  And the proliferation of wildly sensationalized online media sources has made the problem a million times worse.

At this point, I'm almost hoping the people who reported on the astronomy story are right, and we are in the middle of an alien superhighway.  And they'll slow down their spaceship long enough to pick me up and get me the hell off this planet.

****************************************

Tuesday, November 19, 2024

Paradoxes and pointlessness

In his 1967 short story "Thus We Frustrate Charlemagne," writer R. A. Lafferty took one of the first looks at something that has since become a standard trope in science fiction: going back into the past and doing something that changes history.

In his hilarious take on things, some time-machine-wielding scientists pick an event in history that seems to have been a critical juncture (they chose the near-miss assassination attempt on Charlemagne in 778 C.E. that immediately preceded the Battle of Roncevaux), then send an "avatar" back in time to change what happened.  The avatar kills the guy who saved Charlemagne's life, Charlemagne himself is killed, and his consolidation of power into what would become the Holy Roman Empire never happens.

Big deal, right?  Major repercussions down throughout European history?  Well, what happens is that when the change occurs, it also changes the memories of the scientists -- how they were educated, what they knew of history.  The avatar comes back, and everything is different, but the scientists are completely unaware of what's happened -- because their history now includes the change the avatar made.

So they decide that Charlemagne's assassination must have had no effect on anything, and they pick a different historical event to change.  The avatar goes back to try again -- with the same results.

Each time the avatar returns, things have become more and more different from where they started -- and still, none of the characters inside the story can tell.  They can never, in C. S. Lewis's words, "know what might have happened;" no matter what they do, those alternate timelines remain forever outside their ability to see.

In the end, the scientists give up.  Nothing, they conclude, has any effect on the course of events, so trying to change history is a complete waste of time.

One has to wonder if Harvard astrophysicist Avi Loeb has read Lafferty's story, because Loeb just authored an article in The Debrief entitled, "The Wormhole Dilemma: Could Advanced Civilizations Use Time Travel to Rewrite History?"  Which, incidentally, is a fine example of Betteridge's Law -- "any headline phrased as a question can be answered with the word 'no.'"

Before we get into what the article says, I have to say that I'm getting a little fed up with Loeb himself.  He's something of a frequent flier on Skeptophilia and other science-based skepticism websites (such as the one run by the excellent Jason Colavito), most recently for his strident claim that meteoric debris found in the Pacific Ocean was from the wreckage of an alien spacecraft.  (tl;dr: It wasn't.)  

I know we skeptical types can be a little hard to budge sometimes, and a criticism levied against us with at least some measure of fairness is that we're so steeped in doubting that we wouldn't believe evidence if we had it.  But even so, Loeb swings so far in the opposite direction that it's become difficult to take anything he says seriously.  In the article in The Debrief, he talks about how wormholes have been shown to be mathematically consistent with what we know about physics (correct), and that Kip Thorne and his colleagues demonstrated that they could theoretically be kept open long enough to allow passage of something from one point in spacetime to another (also correct).

This would require, however, the use of something with negative mass-energy to stabilize the wormhole so it doesn't snap shut immediately.  Which is a bit of a sticking point, because there's never been any proof that such a something actually exists.

Oh, but that's no problem, Loeb says; dark energy is gravitationally repulsive, so an advanced civilization could "excavate dark energy from the cosmic reservoir and mold it into a wormhole."  He admits that we don't know if this is possible because we still have no idea what dark energy actually is, but then goes into a long bit about how we (or well-intentioned aliens) could use such a wormhole to "fix history," starting with getting rid of Adolf Hitler and preventing the Holocaust.

A laudable goal, no doubt, but let's just hang on a moment.

[Image is in the Public Domain courtesy of artist Martin Johnson]

The idea of altering history potentially creating intractable paradoxes is a staple of science fiction, ever since Lafferty (and Ray Bradbury in his brilliant and devastating short story "A Sound of Thunder") brought it into the public awareness.  Besides my own novel Lock & Key, in which such a paradox wipes out all of humanity except for one dubiously lucky man who somehow escapes being erased and ends up having to fix the problem, this sort of thing seemed to happen every other week on Star Trek: The Next Generation, where one comes away with the sense that the space-time continuum is as flimsy as a wet Kleenex.  It may be that there is some sort of built-in protection in the universe for preventing paradoxes -- such as the famous example of going back in time and killing your own grandfather -- but even that point is pure speculation, because physicists haven't shown that time travel into the past is possible, much less practical.

So Loeb's article is, honestly, a little pointless.  He looks at an idea that countless fiction writers -- including myself -- have been exploring ad nauseam since at least the 1950s, and adds nothing to the conversation from a scientific perspective other than saying, "Hey, maybe superpowerful aliens could do it!"  As such, what he's done is really nothing more than mental masturbation.

I know I'm coming away sounding like a killjoy, here.  It's not that this stuff isn't fun to think about; I get that part of it.  But yet another article from Loeb talking about how (1) highly-advanced alien civilizations we know nothing about might (2) use technology that requires an unknown form of exotic matter we also know nothing about to (3) accomplish something physicists aren't even sure is possible, isn't doing anything but giving new meaning to the phrase "Okay, that's a bit far-fetched."

The whole thing put me in mind of physicist Sabine Hossenfelder's recent, rather dismal, video "Science is in Trouble, and It Worries Me."  Her contention is that science's contribution to progress in our understanding of the universe, and to improving the wellbeing of humanity, has slowed way down -- that (in her words) "most of what gets published is bullshit."  Not that what gets published is false; that's not what she means.  Just that it's pointless.  The emphasis on science being on the cutting edge, on pushing the limits of what we know, on being "disruptive" (in a good sense), has all but vanished.  Instead, the money-making model -- writing papers so you get citations so you get grants so you can write more papers, and so on and so on -- has blunted the edge of what academia accomplishes, or even can accomplish.

And I can't help but throw this fluff piece by Loeb into that same mix.  As a struggling writer who has yet to exceed a three-figure income from my writing in a given year, I have to wonder how much The Debrief paid Loeb for his article.  I shouldn't be envious of another writer, I guess; and honestly, I wouldn't be if what Loeb had written had scientific merit, or even substance.

But as is, the whole thing pisses me off.  It adds to the public perception of scientists as speculative hand-wavers, gives the credulous the impression that something is possible when it probably isn't, teaches the reader nothing most of us haven't already known for years, and puts another entirely undeserved feather in Avi Loeb's cap.

My general sense is that he was doing less harm when he was looking for an alien hiding behind every tree.

****************************************


Saturday, October 5, 2024

The treadmill

I've mentioned before how my difficulties with math short-circuited my goal of becoming a researcher in physics, but the truth is, there's more to the story than that.

Even after I realized that I didn't have the mathematical ability -- nor, honestly, enough interest and focus to overcome my challenges -- I still had every intention of pursuing a career in science.  I spent some time in the graduate school of oceanography at the University of Washington, and from there switched to biology, but I found neither to be a good fit.  It wasn't a lack of interest in the disciplines; biology, in fact, is still a deep and abiding fascination to this day, and I ultimately spent over three decades teaching the subject to high schoolers.  What bothered me was the publish-or-perish atmosphere that permeated all of research science.  I still recall my shock when one of our professors said, "Scientists spend 25% of their time doing the research they're interested in, and 75% of their time trying to beat everyone else in the field to grant money so they don't starve to death."

It's hard to pinpoint an exact moment that brought me to the realization that the career I'd always dreamed of wasn't for me -- but this was certainly one of the times I said, "Okay, now, just hang on a moment."

I'm not alone in having issues with this.  The brilliant theoretical physicist Sabine Hossenfelder did a video on her YouTube channel called "My Dream Died, and Now I'm Here" that's a blistering indictment of the entire edifice of research science.  Hossenfelder has the following to say about how science is currently done:

It was a rude awakening to realize that this institute [where she had her first job in physics research] wasn't about knowledge discovery, it was about money-making.  And the more I saw of academia, the more I realized it wasn't just this particular institute and this particular professor.  It was generally the case.  The moment you put people into big institutions, the goal shifts from knowledge discovery to money-making.  Here's how this works:

If a researcher gets a scholarship or research grant, the institution gets part of that money.  It's called the "overhead."  Technically, that's meant to pay for offices and equipment and administration.  But academic institutions pay part of their staff from this overhead, so they need to keep that overhead coming.  Small scholarships don't make much money, but big research grants can be tens of millions of dollars.  And the overhead can be anything between fifteen and fifty percent.  This is why research institutions exert loads of pressure on researchers to bring in grant money.  And partly, they do this by keeping the researchers on temporary contracts so that they need grants to get paid themselves...  And the overhead isn't even the real problem.  The real problem is that the easiest way to grow in academia is to pay other people to produce papers on which you, as the grant holder, can put your name.  That's how academia works.  Grants pay students and postdocs to produce research papers for the grant holder.  And those papers are what the supervisor then uses to apply for more grants.  The result is a paper-production machine in which students and postdocs are burnt through to bring in money for the institution...

I began to understand what you need to do to get a grant or to get hired.  You have to work on topics that are mainstream enough but not too mainstream.  You want them to be a little bit edgy, but not too edgy.  It needs to be something that fits into the existing machinery.  And since most grants are three years, or five years at most, it also needs to be something that can be wrapped up quickly...

The more I saw of the foundations of physics, the more I became convinced that the research there wasn't based upon sound scientific principles...  [Most researchers today] are only interested in writing more papers...  To get grants.  To get postdocs.  To write more papers.  To get more grants.  And round and round it goes.
The topic comes up today because of two separate studies that came out in the last two weeks that illustrate a hard truth that the scientific establishment as a whole has yet to acknowledge; there's a real human cost to putting talented, creative, bright people on the kind of treadmill Hossenfelder describes.

[Image licensed under the Creative Commons Doenertier82, Phodopus sungorus - Hamsterkraftwerk, CC BY-SA 3.0]

The first study, from a group in Sweden, found that simply pursuing a Ph.D. takes a tremendous toll on mental health, and instead of there being a "light at the end of the tunnel," the toll worsens as the end of the work approaches.  By the fifth year of doctoral study, the likelihood of a student using mental-health medications rises by forty percent.  It's no surprise why; once the Ph.D. is achieved, there's the looming stress of finding a postdoc position, and then after that the savage competition for the few stable, tenure-track research positions out there in academia.  "You need to generate data as quickly as possible, and the feeling of competition for funding and jobs can be very strong, even early in your PhD.," said Rituja Bisen, a fifth-year Ph.D. student in neurobiology at the University of Würzburg.  "Afterward, many of us have to move long distances, even out of the country, to find a worthwhile position.  And even then, there's no guarantee.  It doesn’t matter how good a lab is; if it’s coming out of a toxic work culture, it isn’t worth it in the long run."

The other study, out of Poland (but involving worldwide data), is perhaps even more damning; over fifty percent of researchers leave science entirely in under ten years after publishing their first academic paper.

You spend huge amounts of money on graduate school, work your ass off to get a Ph.D., and then land a position as a researcher, and after all that -- you find that (1) the stress isn't worth it, (2) you're barely making enough money to get by, and (3) the competition for grants is only going to get worse over time.  It's not surprising that people decide to leave research for other career options.

But how heartbreaking is it that we're doing this to the best and brightest minds on the planet?

And the problem is even more drastic for women and minorities; for them, the number still left publishing after ten years is more like thirty percent of the ones who started.

How far would we have advanced in our understanding of how the universe works if the system itself weren't strangling the scientists?

Back when modern science got its start, in the seventeenth and eighteenth centuries, it was the province of the rich; only people who were already independently wealthy had the wherewithal to (1) get a college education, and afterward (2) spend their time messing about in laboratories.  There are exceptions -- Michael Faraday comes to mind -- but by and large, scientific inquiry was confined to the gentry.

Now, we have the appearance of a more open, egalitarian model, but at its basis, the whole enterprise still depends on institutions competing for money, and the people actually doing the research (i.e. the scientists) being worked to the bone to keep the whole superstructure running.

It's a horrible problem, and one I don't see changing until our attitudes shift -- until we start prioritizing the advancement of knowledge over academia-for-profit.  Or, perhaps, until our governments recognize how absolutely critical science is, and fund that over the current goals of fostering corporate capitalism to benefit the extremely wealthy and developing newer and better ways to kill those we perceive as our enemies.

I've heard a lot of talk about how prescient Star Trek was -- we now have something very like their communicators and supercomputers, and aren't far away from tricorders.  But we won't actually get there until we develop one other thing, and I'm not talking about warp drives or holodecks.

I'm talking about valuing science, and scientists, as being the pinnacle of what we as a species can achieve, and creating a system to provide the resources to support them instead of doing everything humanly possible to drive them away.

****************************************


Saturday, September 7, 2024

Hype detector

There's a problem with online science directed at laypeople.

I was discussing this with a friend yesterday.  Although he and I both have a decent science background, we're both very much generalists by nature.  We're interested in many different topics, we're each kinda sorta vaguely good at maybe a dozen of them, but we're actual experts in none.  It's not that I think this is an inherently bad thing; having a broad knowledge base is part of why I was a good high school teacher.  I did a decent job teaching biology, but could still field the occasional pop fly into deep right about, say, the ancient history of Norway.

The issue centers around the fact that curious people like myself are attracted to what we don't know, so when we see something unusual and attention-grabbing, we want to click on it.  Couple this tendency with a second issue -- that when sites like YouTube, TikTok, and Instagram are monetized, it's based on the number of clicks (or the minutes watched) -- and you have what amounts to an attractive nuisance.

There are some sites that do their level best to present science as accurately and fairly as possible; two excellent examples are Neil deGrasse Tyson's YouTube channel StarTalk and astrophysicist Becky Smethurst's outstanding channel Dr. Becky.  (If you're interested in astronomy, you should subscribe to both of these immediately.)  But intermingled with those are hundreds of others that mix a smidgen of science with a heaping handful of sensationalized hype, designed to get you to say "WTF?" and click the link -- because that's how they get revenue.


I'm not going to give you any links -- they don't deserve it -- but a quick perusal of my "Recommended For You" YouTube videos this morning included the following:

  • One claiming that Betelgeuse is ABOUT TO GO SUPERNOVA (capitalization theirs), with a caption of "Life on Earth Will Be Wiped Out?" and a photo of physicist Michio Kaku looking worried.
  • "If You See the Sky Turn This Color, Run!" -- turns out it's the "green sky = tornado" thing, and when you strip away all the excess verbiage it boils down to the rather well-known fact that tornadoes are scary and you should avoid being in the middle of one.
  • "99% of Humans Die -- Could It Happen Again?"  This one is about the Toba Eruption, the effects of which are far from settled in scientific circles, and the answer to the question is "I guess so, but it's not likely any time soon."
  • "Why an Impossible Paradox Inside Black Holes Appears to Break Physics!"  This is about the "information paradox," which is certainly curious, but it (1) obviously isn't impossible because it exists, (2) isn't about the inside of black holes because by definition we don't know what happens in there, and (3) hasn't "broken physics" (although it did demonstrate that our knowledge of black holes is incomplete, which is hardly surprising).
  • "Yellowstone Volcano Simulation!" -- heavy on the catastrophizing and AI-generated footage of people being vaporized, light on the science.  As I've pointed out here at Skeptophilia, there is no sign that the Yellowstone Supervolcano is anywhere near an eruption.

And so on and so forth.

The trouble is, science videos and webpages exist on a spectrum, with wonderful sites like Veritasium on one end and outright lunacy like the subject of yesterday's post (about people who have allegedly jumped through time and space and ended up back in the Carboniferous Period) on the other.  It's usually pretty obvious when you find one that's straight-up science; the total wackos are also generally easy to spot.

It's the ones in the middle that are troublesome.  They mix in just enough science to give them the façade of reliability, but stir it into a ton of flashy, sensationalized speculation.  Since minutes watched = dollars earned, these videos generally draw out the message; they're often way longer than the topic warrants, and are characterized by endless repetition.  (I watched one twenty-minute video on Cretaceous dinosaurs that must have said eight times, "a fearsome predator unlike anything we currently have on Earth"!)

It's hard to know what to do about this.  Even people who are intellectually curious and want to learn actual science like to be entertained; and there's nothing wrong with framing scientific content in a way that's engaging to the audience, something that the three outstanding sites I mentioned certainly do.  But the monetized social media model feeds into the practice of using science as clickbait, and therefore encourages content creators to exaggerate (or outright fabricate) the story to make it seem more exciting or edgy or dangerous than it actually is, with the result that people come away less well-informed than they went in.

Which is frustrating, but isn't going to change any time soon.  And I guess this sort of sensationalized garbage is nothing new; all that's changed is the delivery mode.  Growing up, every time I went through a grocery store checkout line I was assaulted by The Weekly World News, which featured headlines about Bat Boy and the Lost Continent of Atlantis and Elvis Is Still Alive And Was Spotted In Tokyo, and I came away mostly unscathed.

So the important thing is teaching people how to tease apart the good science from the hype.  But honestly, it always has been.

****************************************


Tuesday, September 3, 2024

The problem with research

If there's one phrase that torques the absolute hell out of me -- and just about every actual scientist out there -- it's, "Well, I did my research."

Oh, you did, did you?  What lab did you do your research in?  Or was it field work?  Let's see your data!  Which peer-reviewed journal published your research?  How many times has it been cited in other scientific journals?

Part of the problem, of course, is that, like a lot of words in the English language ("theory" and "proof" are two examples that come to mind), the word "research" is used one way by actual researchers and a different way by most other people.  We were taught the alternate definition of "research" in grade school, when we were assigned "research papers," which meant "go out and look up stuff other people have found out on the topic, and summarize that in your own words."  There's a value to doing this; it's a good starting place for understanding a subject, and is honestly where we all began with scholarship.

The problem is -- and it exists even at the grade-school level of inquiry -- this kind of "research" is only as good as the sources you choose.  When I was a teacher, one of the hardest things to get students to understand was that all sources are not created equal.  A paper in Science, or even the layperson's version of it in Scientific American or Discover, is head-and-shoulders above the meanderings of Some Random Guy in his blog.  (And yes, I'm well aware that this pronouncement is being made by Some Random Guy in his blog.)

That doesn't mean those less-reputable sources are necessarily wrong, of course.  It's more that they can't be relied upon.  While papers in Science (and other comparable journals) are occasionally retracted for errors or inaccuracies, there is a vetting process that makes their likelihood of being correct vastly higher.  After all, any oddball with a computer can create a website, and post whatever they want on it, be it brilliant posts about cutting-edge science or the looniest of wingnuttery.

The confusion between the two definitions of the word research has the effect of increasing people's confidence in the kind we were all doing in middle school, and giving that low-level snooping about an undeserved gloss of reputability.  This was the upshot of a paper in Nature (peer-reviewed science, that), by Kevin Aslett of the University of Central Florida et al., entitled, "Online Searches to Evaluate Misinformation Can Increase Its Perceived Veracity."  Their results are kind of terrifying, if not unexpected given the "post-truth society" we've somehow slid into.  The authors write:

Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it... across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them...  We find that the search effect is concentrated among individuals for whom search engines return lower-quality information.  Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. 

The tendency appears to be that when someone is "doing their research" on a controversial subject, what they do is an online search, pursued until they find two or three hits on sources that corroborate what they already believed, and that strengthens their conviction that they were right in the first place.  The study found that very little attention was usually given to the quality of those sources, or where those sources got the information themselves.  If it makes the "researcher" nod sagely and say, "Yeah, that's what I thought," it doesn't matter if the information came from NASA -- or from QAnon.

The problem is, a lot of those bogus sources can look convincing. 

Other times, of course, all you have to be able to do is add two-digit numbers to realize that they're full of shit.

People see data in some online source, and rarely consider (1) who collected the data and why, (2) how it was analyzed, (3) what information wasn't included in the analysis, and (4) whether it was verified, and if so how and by whom.  I first ran into the old joke about "73.4% of all statistics are made up on the spot" years ago, and it's still funny, even if our laughs are rather wry these days.  Sites like Natural News, Food Babe, Before It's News, Breitbart.com, Mercola.com, InfoWars, One America News, and even a few with scholarly-sounding names -- The Society for Scientific Exploration, Evolution News, and The American College of Pediatricians, to name three -- are clearinghouses for fringe-y and discredited ideas, often backed up by data that's either cherry-picked and misrepresented, or from sources even further down the ladder of sketchy credibility.

Given how much bullshit is out there, a lot of it well-hidden behind facts, figures, and fancy writing, it can be a challenge for laypeople (and I very much count myself amongst their numbers) to discern truth from fiction.  It's also an uphill struggle to fight against the very natural human tendency of confirmation bias; we all would love it if our cherished notions of how the world works were one hundred percent correct.  But if we want to make smart decisions, we all need to stop saying "I did my research" when all that "research" involved was a twenty-minute Google search to find the website of some random crank who confirmed what we already believed.

Remember, as the brilliant journalist Kathryn Schulz points out, that one of the most mind-expanding and liberating things we can say is, "I don't know.  Maybe I'm wrong."  And to start from that open-minded perspective and find out what the facts really are -- from the actual researchers.

****************************************