Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, April 7, 2020

All in the family

Over my nearly thirty years of teaching AP Biology, one of the topics that changed the most was taxonomy.

This might come as a surprise, given the changes in fields such as genetics, but honestly the two are closely related.  When I started my career, classification of species was done primarily by morphology (shape and structure) -- by identifying which characteristics of an organism were plesiomorphies (structures inherited from, and therefore shared with, the ancestral species) and which were apomorphies (innovations unique to a single branch of the family tree).

One of many difficulties with this approach is that useful innovations can evolve more than once, and therefore aren't necessarily indicative of common ancestry.  This process, called convergent or parallel evolution, can generate some amazingly similar results, one of the most striking being the flying squirrel (a rodent) and the sugar glider (a marsupial), which look nearly identical at a quick glance (even a longer one, honestly).  To be fair, the fact that the two are not very closely related would be evident on any kind of moderately careful analysis, where giveaways like tooth structure and the presence of a pouch in female sugar gliders would be enough to show they weren't on the same branch of the mammalian family tree.

Southern flying squirrel (top) [Image is in the Public Domain] and sugar glider (bottom) [Image licensed under the Creative Commons Joseph C Boone, Sugar Glider JCB, CC BY-SA 4.0]

But sometimes it's more difficult than that, and more than once taxonomists have created arrangements of the descent of groups of species only to find out that further study shows the original placement to be wrong.  As one of many examples, take the two groups of large-eyed nocturnal primates from southern and southeastern Asia, the lorises and tarsiers.  Based on habits and range, it's understandable that they were lumped together as "prosimians" on the same branch of the primate tree, but recent study has found the lorises are closely related to lemurs, and tarsiers are closer to monkeys and apes -- despite the superficial similarity.

Slow loris [Image licensed under the Creative Commons David Haring / Duke Lemur Center, Sublingua of a slow loris 001, CC BY-SA 3.0]

 
Tarsier [Image licensed under the Creative Commons yeowatzup, Tarsier Sanctuary, Corella, Bohol (2052878890), CC BY 2.0]

These revisions, and the sometimes surprising revelations they provide, have largely come from a change in how taxonomy is done.  Nearly all classification is now based upon genetics, not structure (although certainly structure plays a role in who we might initially hypothesize is related to whom).  But when it comes down to a fight between morphology and genetics, genetics always wins.  And this has forced us to change how we look at biological family trees -- especially when genetic evidence is obtained where it was previously absent.

This all comes up because of the recovery of ancient protein sequences from a fossil of a primate much closer to us than the tarsiers and lorises -- a species from our own genus called Homo antecessor.  The species name suggests it was one of our direct ancestors, which is a little alarming because there's good evidence it was cannibalistic -- bones of the species found in Spain showed clear evidence of butchering for meat.

Now, however, the recovery of proteins from the tooth enamel of an H. antecessor fossil -- at 800,000 years of age, the oldest molecular evidence ever recovered from a hominin fossil -- has shown that it probably wasn't our ancestor after all, but a "sister clade," one that left no descendants.  (Bigfoot and the Yeti notwithstanding.)  The study was the subject of a paper in Nature last week, authored by a team led by Frido Welker of the University of Copenhagen, and required yet another reconfiguration of our own family tree.  The authors write:
The phylogenetic relationships between hominins of the Early Pleistocene epoch in Eurasia, such as Homo antecessor, and hominins that appear later in the fossil record during the Middle Pleistocene epoch, such as Homo sapiens, are highly debated.  For the oldest remains, the molecular study of these relationships is hindered by the degradation of ancient DNA.  However, recent research has demonstrated that the analysis of ancient proteins can address this challenge.  Here we present the dental enamel proteomes of H. antecessor from Atapuerca (Spain) and Homo erectus from Dmanisi (Georgia), two key fossil assemblages that have a central role in models of Pleistocene hominin morphology, dispersal and divergence.  We provide evidence that H. antecessor is a close sister lineage to subsequent Middle and Late Pleistocene hominins, including modern humans, Neanderthals and Denisovans.  This placement implies that the modern-like face of H. antecessor—that is, similar to that of modern humans—may have a considerably deep ancestry in the genus Homo, and that the cranial morphology of Neanderthals represents a derived form.
I find that last bit the most interesting, because it turns on its head our usual sense of being the Pinnacles of Evolution, clearly the most highly evolved (whatever the hell that actually means) species on the planet, definitely more advanced in all respects than those brute Neanderthals.  What this study suggests is that the projecting midface of the Neanderthals is actually the apomorphy -- the more recently evolved, "derived" characteristic -- and our flatter, more modern-looking faces are a plesiomorphy, inherited from our older ancestors.

This kind of stuff is why I'm endlessly interested in evolutionary biology -- as we find more data and develop new techniques, we refine our models, and in some cases have to overturn previously accepted conventional wisdom.  But that's what science is about, isn't it?  Basing your model on the best evidence you've got, and revising it if you get new and conflicting evidence.

 Just as well in this case.  One less cannibal in the family tree.  Not that there aren't probably others, but my genealogy already contains some sketchy enough characters.  No need to add more.

********************************

This week's Skeptophilia book recommendation of the week is brand new -- only published three weeks ago.  Neil Shubin, who became famous for his wonderful book on human evolution Your Inner Fish, has a fantastic new book out -- Some Assembly Required: Decoding Four Billion Years of Life, from Ancient Fossils to DNA.

Shubin's lucid prose makes for fascinating reading, as he takes you down the four-billion-year path from the first simple cells to the biodiversity of the modern Earth, wrapping in not only what we've discovered from the fossil record but the most recent innovations in DNA analysis that demonstrate our common ancestry with every other life form on the planet.  It's a wonderful survey of our current state of knowledge of evolutionary science, and will engage both scientist and layperson alike.  Get Shubin's latest -- and fasten your seatbelts for a wild ride through time.




Monday, April 6, 2020

The planet detectors

Are you looking for something to occupy you while you're stuck home?  For the month of April, I'm putting my online course An Introduction to Critical Thinking on sale for $12.99 (it's ordinarily $49.99).  It includes an hour and a half of video lectures, some (fun) problem sets and readings, and you'll come away with a better ability to detect such things as hoaxes, pseudoscience, ripoffs, and fake news.  Use the coupon code STUCKATHOME4APR to get your discount!

**********************************

Long-time readers of Skeptophilia know I'm kind of obsessed with the idea of extraterrestrial life.  I guess it's natural enough; I'm a biologist who's also an amateur astronomer, and grew up on Lost in Space and Star Trek and The Invaders, and later The X Files and Star Wars.  (Although I'm aware this is kind of a chicken-and-egg situation, so what the ultimate origin of my obsession is, I'm not certain.)

Until fairly recently, there was no particularly good way to estimate the likelihood of life on other worlds.  Decades ago astronomer Frank Drake came up with the famous Drake Equation, which rests on the statistical principle that the probability of several independent events all occurring is the product of their individual probabilities.  Here's the Drake Equation:

[Image is in the Public Domain courtesy of NASA/JPL]

The problem, of course, is that the more uncertainty there is in the individual probabilities, the more uncertainty there is in the product.  And there was no known way to get even a close value -- or, worse, to know whether the value you had reflected reality or was just a wild guess.
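To see how fast those uncertainties compound, here's a minimal sketch.  The factor values below are pure placeholders chosen for illustration, not measured quantities -- the point is just that seven uncertain factors multiplied together can span a dozen orders of magnitude:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """The Drake Equation: multiply the independent factors together.
    N = R* x fp x ne x fl x fi x fc x L"""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Pessimistic vs. optimistic guesses for the poorly-known factors
# (star formation rate, planet fractions, life/intelligence/technology
# probabilities, civilization lifetime in years):
low  = drake(1.0, 0.2, 0.1, 0.001, 0.001, 0.01, 1000)
high = drake(3.0, 1.0, 1.0, 0.5,   0.1,   0.5,  1_000_000)

print(f"N ranges from {low:.1e} to {high:.1e} communicating civilizations")
```

Running this gives a range from about 2 x 10^-7 to 75,000 -- which is exactly why pinning down even one or two of the factors matters so much.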

What's cool for us alien enthusiasts, though, is that our research and techniques have improved to the point where we do have decent estimates for the values of some of these.  Even better, every time one of them is revised, it's revised upward.  Today I'd like to look at two of them -- f(p) and n(e) -- respectively, the fraction of stars that have planetary systems, and the fraction of those systems that have at least one planet in the habitable zone.

Given that we started out with a sample size of one (1) solar system, no one knew whether the coalescence of stellar debris into planets was likely, or simply a lucky fluke.  Same for planets in the habitable zone; here we have only a single planet that is habitable for organisms like ourselves.  Again, is that some kind of happy accident, or would most planetary systems have at least one potentially habitable planet?

Once we started to find exoplanets, though, they seemed to be everywhere we looked.  The earliest ones were massive (probably Jupiter-like) planets, often in fast, close orbit, so they'd be pretty hostile places from our perspective.  (Although, as I dealt with in a recent post, what we're finding out about the resilience of life may mean we'll have to revise our definition of what constitutes the "habitable zone.")

So the estimates for f(p) and n(e) crept upward, but still, it was hard to get reliable numbers.  But just last week, two studies have suggested that f(p) -- the fraction of stars with planetary systems -- may be very close to 100%.

In the first, we hear about a recently-developed technique to improve our ability to detect exoplanets even at great distances.  Before this, most exoplanets were discovered using one of two methods -- looking for stellar wobble as a planet and its star circle their mutual center of gravity (which only works for nearby stars with massive planets capable of generating a detectable wobble), and luminosity dips as a planet transits (passes in front of) its host star (which only works if the orbital plane is lined up in such a way that the planet passes in front of the star as seen from Earth).  As you might imagine, those restrictions mean that we might well be missing most of the exoplanets out there.

Now, a new orbiting telescope developed by NASA -- called WFIRST (Wide Field Infrared Survey Telescope) -- has the capability to detect microlensing.  Microlensing occurs because of the warping of the fabric of space-time by massive objects.  As a planet orbits its host star, that warp moves, creating a ripple -- and the light from any stars behind the planet gets deflected.  An analogy is when you're looking down to the bottom of a clear pond and a ripple on the surface passes you; the image of the pebbles on the bottom appears to waver.  That wavering of light from distant stars is what WFIRST is designed to detect.
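The brightening a microlensing survey hunts for can be sketched with the standard point-lens magnification formula (these numbers are generic illustrations, not WFIRST specifics):

```python
import math

def magnification(u):
    """Point-lens gravitational microlensing magnification (the standard
    Paczynski formula), where u is the separation between lens and
    background source in units of the lens's Einstein radius."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

# As the lens passes in front of a background star, u shrinks and the
# star appears to brighten -- the transient blip a survey looks for.
for u in (2.0, 1.0, 0.5, 0.1):
    print(f"u = {u}: magnification = {magnification(u):.2f}")
```

At u = 1 (source right at the Einstein radius) the background star brightens by the classic factor of about 1.34; at u = 0.1 it's magnified roughly tenfold, which is why even small, dark lensing objects produce detectable signals.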

The nice thing is that WFIRST isn't dependent on visible wobbles or planets with precisely-aligned orbital planes; it can see pretty much any planet out there with sufficient mass.  And it can detect them from much farther away than previous telescopes -- the Kepler Space Telescope could detect planets up to around a thousand light years away, while WFIRST extends that reach by a factor of ten.  It's also capable of scanning a great many more stars; the estimate is that the first sweep will look at two hundred million stars, which is a thousand times the number Kepler studied.

So chances are, we're going to see an exponential jump in the number of exoplanets we know of, and a corresponding uptick in the estimate for f(p).

The second study is much closer to home -- about as close as you can get without being in our own Solar System.  Proxima Centauri is the nearest star to us other than the Sun, at 4.244 light years away.  In 2016 we were all blown away by the announcement that not only did Proxima Centauri have a planet, it was (1) Earth-sized, and (2) in the habitable zone.  (Anyone want to board the Jupiter 2?)

Now, astronomers have discovered a second planet around Proxima, orbiting at about 1.5 times the Earth-Sun distance, with a mass about twelve times Earth's.  This means it's probably something like Neptune, and very cold -- Proxima is a dim star, so its habitable zone is a lot closer in than the Sun's -- the estimate is that the planet's average temperature is -200 C.

Even though it probably doesn't host life, it's exciting from the standpoint that Proxima's planetary system is looking more and more like ours.  As astronomer Phil Plait put it, over at his fantastic blog Bad Astronomy, "I hope this new planet candidate turns out to be real.  Having one planet orbiting the star is already pretty amazing, but having two?  In my mind that makes it a solar system.  And if two, why not more?  How about moons orbiting the planets, or asteroids and comets around the star, too?"

The impression I'm getting is that f(p) (the fraction of stars with planetary systems) and n(e) (the fraction of stars with at least one planet in the habitable zone) are both extremely high.  This bodes well for our search for life -- and as the techniques improve, my sense is that we'll find planets like ours pretty much everywhere we look.  So life is looking more and more likely to be plentiful out there.  Now, intelligent life that is sufficiently technological to communicate across interstellar space... that's another matter entirely.

But in my opinion, any time we can revise some part of the Drake Equation upward, it's a good thing.

********************************





Saturday, April 4, 2020

Unicorn survival

One of the arguments you'll hear from cryptid enthusiasts is that the various critters they claim are real are survivals.  Nessie, Mokele-Mbémbé, and the Bunyip are modern-day brachiosaurs or plesiosaurs.  Bigfoot, the Fouke Monster, the Almas, the Florida Skunk Ape, and the Yowie are hominids, possibly australopithecines.  The Beasts of Bodmin Moor and Exmoor, and the Mngwa of Tanzania, are related to prehistoric cats.  Mothman is supposed to be... okay, I don't know what the fuck Mothman is supposed to be.  Maybe descended from the rare saber-toothed butterfly, I dunno.

[Nota bene: if you're curious about any of these and want more information, check out the excellent cryptid list on Wikipedia, which has these and many others, along with lots of highly amusing illustrations thereof.]

The possibility of prehistoric survival is not without precedent.  The most famous is the coelacanth, one of the bizarre lobe-finned fish found in fossil form in sediments from before the Cretaceous Extinction, 66-odd million years ago.  They were allied to the lineage that led to amphibians (although that split took place a lot longer ago, so they weren't direct ancestors), and had lobe-like proto-limbs that give the group its name.  They were thought to be long extinct -- until a fisherman off the coast of Madagascar caught one in 1938.

Even that iconic mammal of prehistory, the woolly mammoth, survived a lot longer than most people thought.  The last remnant populations were thought to have been in northern North America and Siberia on the order of 25,000 years ago -- until fossils were found on Wrangel Island, off the coast of Siberia, dating to around 3,800 years ago, making them contemporaneous with the building of the Great Pyramids of Egypt.

So it's always risky to date a bunch of fossils and conclude that the most recent one marks the end of the species.  Not only is fossilization uncommon (something I've touched upon before), but there can be small remnant populations left in out-of-the-way places, and our inferences about when species became extinct can be off.

Sometimes by a lot.  Take, for example, Elasmotherium, which was the subject of a paper in The American Journal of Applied Sciences that a friend and loyal reader of Skeptophilia sent me last week.  Elasmotherium has sometimes been nicknamed "the Siberian Unicorn," which is a little misleading, because the only similarities between it and the typical graceful, fleet-footed concept of the unicorn are that it had one horn and four legs.  Here's an artist's rendition of Elasmotherium:

[Image is in the Public Domain, courtesy of artist Heinrich Harder]

If your thought is that it looks more like a rhinoceros than a one-horned horse, you're correct; the elasmotheres are cousins to the modern African rhinos.  What's interesting about them is that they were around during the Pleistocene, reaching their peak during the repeated glaciations, and were thought to have died out as the climate warmed, on the order of 350,000 years ago -- but this study found fossils from Kozhamzhar in Kazakhstan that dated to around 26,000 years ago.

"Most likely, the south of Western Siberia was a refugium, where this rhino persevered the longest in comparison with the rest of its range," said Andrey Shpanski, a paleontologist at Tomsk State University, who co-authored the paper.  "There is another possibility that it could migrate and dwell for a while in the more southern areas."

So it's a good bet that the elasmotheres -- like the woolly mammoth -- persisted a lot longer than paleontologists realized.

This is the main reason why, despite my general skepticism, I'm hesitant to discount reports of cryptids out of hand.  That most of them are either hoaxes or else misidentification of perfectly ordinary modern animals seems pretty likely, but "most" doesn't mean "all."  I'm very much in agreement on this count with what physicist Michio Kaku said about UFOs: "Perhaps 98% of sightings can be dismissed as fabrications or as perfectly natural phenomena.  But that still leaves 2% that are unaccounted for, and to me, that 2% is well worth investigating."

So I'm all for continuing to consider claims of cryptids, as long as we evaluate them based upon the touchstone for all scientific research: hard evidence.  It's entirely possible some animals thought previously to be extinct have survived in remote areas, and have given rise to what we now call cryptozoology.  If that's the case, though, it should be accessible to the tools of science -- and, truthfully, just be zoology, no "crypto" about it.

Except for Mothman.  That mofo is scary.  I'd just as soon that one stays in the realm of legend, thank you very much.

*******************************

In the midst of a pandemic, it's easy to fall into one of two errors -- to lose focus on the other problems we're facing, and to decide it's all hopeless and give up.  Both are dangerous mistakes.  We have a great many issues to deal with besides stemming the spread and impact of COVID-19, but humanity will weather this and the other hurdles we have ahead.  This is no time for pessimism, much less nihilism.

That's one of the main gists in Yuval Noah Harari's recent book 21 Lessons for the 21st Century.  He takes a good hard look at some of our biggest concerns -- terrorism, climate change, privacy, homelessness/poverty, even the development of artificial intelligence and how that might impact our lives -- and while he's not such a Pollyanna that he proposes instant solutions for any of them, he looks at how each might be managed, both in terms of combatting the problem itself and changing our own posture toward it.

It's a fascinating book, and worth reading to brace us up against the naysayers who would have you believe it's all hopeless.  While I don't think anyone would call Harari's book a panacea, at least it's the start of a discussion we should be having at all levels, not only in our personal lives, but in the highest offices of government.





Friday, April 3, 2020

The risk of knowing

One of the hallmarks of the human condition is curiosity.  We spend a lot of our early years learning by exploring, by trial-and-error, so it makes sense that curiosity should be built into our brains.

Still, it comes at a cost.  "Curiosity killed the cat" isn't a cliché for nothing.  The number of deaths in horror movies alone from someone saying, "I hear a noise in that abandoned house, I think I'll go investigate" is staggering.  People will take amazing risks out of nothing but sheer inquisitiveness -- so the gain in knowledge must be worth the cost.

[Image is in the Public Domain]

The funny thing is that we'll pay the cost even when what we gain isn't worth anything.  This was demonstrated by a clever experiment described in a paper by Johnny King Lau and Kou Murayama of the University of Reading (U.K.), Hiroko Ozono of Kagoshima University, and Asuka Komiya of Hiroshima University, which came out two days ago.  Entitled "Shared Striatal Activity in Decisions to Satisfy Curiosity and Hunger at the Risk of Electric Shocks," the paper describes a set of experiments showing that humans will risk a painful shock to find out entirely useless information (in this case, how a card trick was performed).  The cleverest part of the experiments, though, is that the researchers told test subjects ahead of time how much of a chance there was of being shocked -- so the subjects could decide, "How much is this information worth?"

What they found was that even when told there was a higher than 50% chance of being shocked, most subjects were still curious enough to take the risk.  The authors write:
Curiosity is often portrayed as a desirable feature of human faculty.  However, curiosity may come at a cost that sometimes puts people in harmful situations.  Here, using a set of behavioural and neuroimaging experiments with stimuli that strongly trigger curiosity (for example, magic tricks), we examine the psychological and neural mechanisms underlying the motivational effect of curiosity.  We consistently demonstrate that across different samples, people are indeed willing to gamble, subjecting themselves to electric shocks to satisfy their curiosity for trivial knowledge that carries no apparent instrumental value.
The researchers added another neat twist -- they used neuroimaging techniques to see what was going on in the curiosity-driven brain, and they found a fascinating overlap with another major driver of human behavior:
[T]his influence of curiosity shares common neural mechanisms with that of hunger for food.  In particular, we show that acceptance (compared to rejection) of curiosity-driven or incentive-driven gambles is accompanied by enhanced activity in the ventral striatum when curiosity or hunger was elicited, which extends into the dorsal striatum when participants made a decision.
So curiosity, then, is -- in nearly a literal sense -- a hunger.  The satisfaction we feel at taking a big bite of our favorite food when we're really hungry causes the same reaction in the brain as having a curiosity satisfied.  And like hunger, we're willing to take significant risks to satisfy our curiosity.  Even if -- to reiterate -- the person in question knows ahead of time that the information they're curious about is technically useless.

I can definitely relate to this.  In me, it mostly takes the form of wasting inordinate amounts of time going down a rabbit hole online because some weird question came my way.  The result is that my brain is completely cluttered up with worthless trivia.  For example, I can tell you the scientific name of the bird you're looking at or why microbursts are common in the American Midwest or the etymology of the word "juggernaut," but went to the grocery store yesterday to buy three things and came back with only two of them.  (And didn't realize I'd forgotten 1/3 of the grocery order until I walked into the kitchen and started putting away what I'd bought.)

Our curiosity is definitely a double-edged sword.  I'm honestly fine with it, because often, knowing something is all the reward I need.  As physicist Richard Feynman put it, "The chief prize (of science) is the pleasure of finding things out."

So I suspect I'd have been one of the folks taking a high risk of getting shocked to see how the card trick was performed.  Don't forget that the corollary to the quote we started with -- "Curiosity killed the cat" -- is "...but satisfaction brought him back."

*******************************






Thursday, April 2, 2020

A window on the deep past

When I was a kid, I always enjoyed going on walks with my dad.  My dad wasn't very well educated -- barely finished high school -- but was incredibly wise and had an amazing amount of solid, practical common sense.  His attitude was that God gave us reasoning ability and we had damn well better use it -- that most of the questions you run into can be solved if you just get your opinions and ego out of the way and look at them logically.

The result was that despite never having had a physics class in his life, he was brilliant at figuring things out about how the world works.  Like the mind-blowing (well, to a ten-year-old kid, at least) idea he told me about after we saw a guy pounding in a fence post with a sledgehammer.

The guy was down the street from us -- maybe a hundred meters away or so -- and I noticed something weird.  The reverberating bang of the head of the sledge hitting the top of the post was out of sync with what we were seeing.  We'd see the sledge hit the post, then a moment later, bang.

I asked my dad about that.  He thought for a moment, and said, "Well, it's because it takes time for the sound to arrive.  The sound is slower than light is, so you see the hammer hit before you hear it."  He told me about how his father had taught him to tell how close a thunderstorm is by counting the seconds between the lightning flash and the thunderclap, and that the time got shorter the closer the storm was.  He pointed at the guy pounding in the fence post, and said, "So the closer we get to him, the shorter the delay should be between seeing the hammer hit and hearing it."

Which, of course, turned out to be true.
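My grandfather's counting rule is simple arithmetic: sound covers about 343 meters per second in air, while the light arrives essentially instantly.  A quick sketch (the example delays are made up for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def distance_from_delay(seconds):
    """Estimate the distance to a lightning strike (or a hammer blow)
    from the gap between seeing it and hearing it."""
    return SPEED_OF_SOUND * seconds

# Every ~3 seconds of delay is roughly a kilometer of distance:
print(distance_from_delay(3.0))   # 1029.0 m -- about a kilometer
print(distance_from_delay(0.3))   # 102.9 m -- a hammer down the street
```

That second number is why the fence-post delay was noticeable at all: at a hundred meters the sound lags the sight by about a third of a second, right at the edge of what you can perceive.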

But then, a crazy thought occurred to me.  "So... we're always hearing things in the past?"

"I suppose so," he said.  "Even if you're really close to something, it still takes some time for the sound to get to you."

Then, an even crazier thought.  "The light takes some time, too, right?  A shorter amount of time, but still some time.  So we're seeing things in the past, too?"

He shrugged.  "I guess so.  Light is always faster than sound."  Then he grinned.  "I guess that's why some people appear bright until you hear them talk."

It was some years later that I recognized the implications of this -- that the farther away something is, the further back into the past we're looking.  The Sun is far enough away that the light from it takes eight minutes and twenty seconds to get here, so you are always seeing the Sun not as it is now, but as it was, eight minutes and twenty seconds ago.  The closest star to us other than the Sun is Proxima Centauri, which is 4.3 light years away -- so here, you're looking at a star as it was 4.3 years ago.  There is, in fact, no way to know what it looks like now -- the Special Theory of Relativity showed that the speed of light is the fastest speed at which information can travel.  Any of the stars you see in the night sky might be exploding right now (not that it's likely, mind you), and not only would we have no way to know, the farther away they are, the longer it would take us to find out about it.
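The arithmetic behind those lookback times is straightforward -- divide the distance by the speed of light.  A quick sketch:

```python
LIGHT_SPEED_KM_S = 299_792.458  # speed of light in km/s

def light_delay_seconds(distance_km):
    """How long light takes to cross a given distance -- which is also
    how far into the past you're looking."""
    return distance_km / LIGHT_SPEED_KM_S

# The Sun, about 149.6 million km away:
sun_delay = light_delay_seconds(149_600_000)
print(f"Sunlight is {sun_delay / 60:.1f} minutes old")  # 8.3 minutes

# For anything measured in light years the answer is built into the
# unit: Proxima Centauri at 4.244 light years is seen 4.244 years ago.
print("Proxima's light is 4.244 years old")
```

The light-year case is almost tautological -- a light year is defined as the distance light covers in a year, so distance in light years and lookback time in years are the same number.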

This goes up to some unimaginably huge distances.  Consider quasars, which are peculiar beasts to say the least.  When first discovered in the 1950s, they were such anomalies that they were nicknamed quasi-stellar radio sources mainly because no one knew what the hell they were.  Astrophysicist Hong-Yee Chiu contracted that clumsy appellation to quasar in 1964, and it stuck.

The funny thing about them was on first glance, they just looked like ordinary stars -- points of light.  Not even spectacular ones -- the brightest quasar has a magnitude just under +13, meaning it's not even visible in small telescopes.  But when the astronomers looked at the light coming from them, they found something extraordinary.

The light was wildly red-shifted.  You probably know that red-shift occurs because of the Doppler effect -- just as the sound of a siren from an ambulance moving away from you sounds lower in pitch because the sound waves are stretched out by the ambulance's movement, the light from something moving away from you gets stretched -- and the analog to pitch in sound is frequency in light.  The faster an object is moving away from you, the more its light drops in frequency (moves toward the red end of the spectrum).  And, because of Hubble's law and the expansion of space, the faster an object in deep space is moving away from you, the farther away it is.
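As a rough illustration of turning a red-shift into a recession speed, here's the special-relativistic Doppler formula.  (This is a first approximation -- cosmological red-shifts are properly handled with expanding-space models, which is the story the rest of the post tells.)

```python
def beta_from_z(z):
    """Recession speed as a fraction of the speed of light, from the
    red-shift z, using the special-relativistic Doppler formula:
    1 + z = sqrt((1 + beta) / (1 - beta)), solved for beta."""
    r = (1 + z) ** 2
    return (r - 1) / (r + 1)

# 3C 273, the brightest quasar, has a red-shift of about z = 0.158:
print(f"3C 273 recedes at about {beta_from_z(0.158):.2f} c")  # 0.15 c
```

Fifteen percent of the speed of light for a point of light that at first glance looks like an unremarkable star -- that's the anomaly that started the whole puzzle.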

So that meant two things: (1) if Hubble's law was being applied correctly, quasars were ridiculously far away (the nearest ones estimated at about a billion light years); and (2) if they really were that far away, they were far and away the most luminous objects in the universe (an average quasar, if placed thirty light years away, would be as bright as the Sun).
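That "as bright as the Sun from thirty light years" figure is easy to sanity-check with the inverse-square law: apparent brightness falls off as the square of distance, so matching the Sun's apparent brightness from thirty light years requires a luminosity (30 ly / 1 AU)² times the Sun's.  A quick back-of-the-envelope, using standard values for the two distance units:

```python
# Inverse-square law sanity check: how many solar luminosities does an
# object need to appear as bright from 30 light years as the Sun does
# from 1 AU?

AU_KM = 1.496e8      # one astronomical unit, km
LY_KM = 9.4607e12    # one light year, km

distance_ratio = (30 * LY_KM) / AU_KM
required_solar_luminosities = distance_ratio ** 2
print(f"{required_solar_luminosities:.1e} Suns")  # ~3.6e12
```

Typical quasar luminosities run around 10^12 to 10^13 Suns, so the comparison holds up.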

But what on earth (or outside of it, actually) could generate that much energy?  And why weren't there any nearby ones?  Whatever process resulted in a quasar evidently stopped happening a billion or more years ago -- otherwise we'd see ones closer to us (and therefore, ones that had occurred more recently; remember, farther away in space, further back in time).

Speculation ran wild, mostly because the luminosities and distances were so enormous that it seemed like there must be some other explanation.  Quasars, some said, were red-shifted not because the light was being stretched by the expansion of space, but because it was escaping a deep gravity well -- so maybe they weren't far away, they were simply off-the-scale massive.  Maybe they were the output end of a stellar wormhole.  Maybe they were some kind of chain reaction of millions of supernovas all at once.

See?  I told you they didn't look that interesting.  [Image licensed under the Creative Commons ESO, Quasar (24.5 mag ;z~4) in MS 1008 Field, CC BY 4.0]

Further observations confirmed the crazy velocities, and found that they were consistent with the expansion of space -- quasars are, in fact, billions of light years away, receding from us at what in Spaceballs would definitely qualify as ludicrous speed, and therefore had a luminosity that was unlike anything else.  But what could be producing such an energy output?

The answer, it seems, is that what we're seeing is the light emitted as gas and dust makes its last suicidal plunge toward a supermassive black hole in a galaxy's core -- as the material speeds up, friction heats it until it glows on a scale that boggles the mind, all the way from radio waves up through visible light and X-rays.  Stretch that light out further as space expands, and what arrives here is shifted toward the long-wavelength end of the spectrum -- but still at a staggering intensity.

All of this comes up because of a series of six papers last week in The Astronomical Journal about the discovery of three quasars that are the most energetic ever observed (and therefore the most energetic objects in the known universe).  The most luminous of the three is called SDSS J1042+1646, which brings up the issue of how astrophysicists name the objects they study.  I'm sorry, but "SDSS J1042+1646" just does not capture the gravitas and magnitude of this thing.  There should be a new naming convention that will give the interested layperson an idea of the scale we're talking about here.  I propose renaming it "Abso-fucking-lutely Enormous Glowing Thing, No, Really, You Don't Even Understand How Big It Is."  Although that's a little cumbersome, I maintain that it's better than SDSS J1042+1646.

But I digress.

Anyhow, the energy output of this thing is 5x10^30 gigawatts.  That's five million trillion trillion gigawatts.  By comparison, your average nuclear reactor puts out about one gigawatt.  Even all the stars in the Milky Way put together emit only about a hundredth of the energy this one quasar does.
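For the skeptical, the unit conversions are a few lines of arithmetic (the solar luminosity used below is the standard ~3.8x10^26 watts; the reactor figure is the round number from the text):

```python
# Putting 5e30 gigawatts into more familiar units.

QUASAR_GW = 5e30     # output quoted for SDSS J1042+1646
REACTOR_GW = 1.0     # a typical nuclear reactor
SUN_GW = 3.8e17      # solar luminosity: 3.8e26 W = 3.8e17 GW

reactors = QUASAR_GW / REACTOR_GW
suns = QUASAR_GW / SUN_GW
print(f"{reactors:.0e} nuclear reactors")  # 5e+30
print(f"{suns:.1e} Suns")                  # ~1.3e13
```

Ten trillion Suns' worth of output, give or take -- from a single object.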

See?  I told you.  Abso-fucking-lutely enormous.

These quasars have also given astrophysicists some insight into why we don't see any close by.  They blow radiation -- and debris -- out of their cores at such high rates that eventually they run out of gas.  The loss of matter slows star formation, and over time a quasar settles down into an ordinary, stable galaxy.

So billions of years ago, the Milky Way was probably a quasar, and to a civilization on a planet a billion light years away, that's what it would look like now.  If you wanted your mind blown further.

The universe is a big place, and we are by comparison really tiny.  Some people don't like that, but for me, it re-emphasizes the fact that our little toils and troubles down here are minor and transitory.  The glory of what's out there will always outshine anything we do -- which is, I think, a good thing.

*******************************

In the midst of a pandemic, it's easy to fall into one of two errors -- to lose focus on the other problems we're facing, and to decide it's all hopeless and give up.  Both are dangerous mistakes.  We have a great many issues to deal with besides stemming the spread and impact of COVID-19, but humanity will weather this and the other hurdles we have ahead.  This is no time for pessimism, much less nihilism.

That's one of the main gists in Yuval Noah Harari's recent book 21 Lessons for the 21st Century.  He takes a good hard look at some of our biggest concerns -- terrorism, climate change, privacy, homelessness/poverty, even the development of artificial intelligence and how that might impact our lives -- and while he's not such a Pollyanna that he proposes instant solutions for any of them, he looks at how each might be managed, both in terms of combatting the problem itself and changing our own posture toward it.

It's a fascinating book, and worth reading to brace us up against the naysayers who would have you believe it's all hopeless.  While I don't think anyone would call Harari's book a panacea, at least it's the start of a discussion we should be having at all levels, not only in our personal lives, but in the highest offices of government.





Wednesday, April 1, 2020

Hands down

One of the most frustrating arguments -- if I can dignify them by that name -- from creationists is that there are "no transitional fossils."

If evolution happened, they say, you should be able to find fossils of species that are halfway between the earlier form and the (different-looking) later form.  That's actually true; you should find such fossils, and we have.  Thousands of them.  But when informed of this, they usually retort with one of two idiotic responses: (1) that evolution predicts there should be "halfway" forms between any two species you pick -- which is what gave rise to Ray "BananaMan" Comfort's stupid "crocoduck" and "doggit" (dog/rabbit blend) Photoshop images, findable with a quick Google search if you're in the mood for a facepalm; or (2) that any transitional form just makes the situation worse -- that if you're trying to find an intermediate between species A and species C, and you find it (species B), you've now gone from one missing transitional form to two: one between A and B, and another between B and C.

This always reminds me of the Atalanta paradox of the Greek philosopher Zeno of Elea.  The gist is that motion is impossible, because if the famous runner Atalanta runs a race, she must first reach the point halfway between her starting point and the finish line, then the point halfway between there and the finish line, then halfway again, and so on; and because there are an infinite number of those intermediate points, she'll never reach the end of the race.  Each little bit she runs just leaves an unending number of smaller distances to cross, so she's stuck.

Fortunately for Atalanta, she spent more time training as a runner than reading philosophy, so she doesn't know about this, and goes ahead and crosses the finish line anyway.
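The reason she finishes, of course, is that Zeno's infinitely many steps sum to a finite distance -- it's a geometric series, 1/2 + 1/4 + 1/8 + ... = 1.  A few lines of Python make the convergence obvious:

```python
# Zeno's halves as a geometric series: the partial sums approach the
# full distance (normalized here to 1) as the number of steps grows.

def distance_covered(n_halvings: int) -> float:
    """Fraction of the course covered after n of Zeno's halving steps."""
    return sum(0.5 ** k for k in range(1, n_halvings + 1))

for n in (1, 5, 10, 30):
    print(n, distance_covered(n))
# After 30 halvings Atalanta is within a billionth of the finish line --
# and the infinitely many remaining "steps" take a vanishing amount of time.
```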

But back to evolution.  The problem with the creationists' "transitional fossil" objection is that just about every time paleontologists find a new fossil bed, they discover more transitional fossils, and often find species with exactly the characteristics that had been predicted by evolutionary biologists before the discovery.  And that's the hallmark of a robust scientific model; it makes predictions that line up with the actual facts.  Transitional fossils are an argument for evolution, not against it.

We got another illustration of the power of the evolutionary model with a paper last week in Nature, authored by Richard Cloutier, Roxanne Noël, Isabelle Béchard, and Vincent Roy (of the Université du Québec à Rimouski), and Alice M. Clement, Michael S. Y. Lee, and John A. Long (of Flinders University).  One of the most striking homologies between vertebrates is their limbs -- all vertebrates that have limbs have essentially the same bone structure, with one upper arm bone, two lower arm bones, and a mess of carpals, metacarpals, and phalanges.  Doesn't matter if you're looking at a bat, a whale, a dog, a human, or a frog, we've all got the same limb bones -- and in fact, most of them have not only the same bones, but the same number in the same positions.  (I've never heard a creationist come up with a good explanation for why, if whales and humans don't have a common ancestor, whales' flippers encase a set of fourteen articulated finger bones -- just like we have.)

In any case, it's been predicted for a long time that there was a transitional form between fish and amphibians that would show an intermediate between a fish's fin and an amphibian's leg, but that fossil proved to be elusive.

Until now.

Readers, meet Elpistostege.  As far as why it's remarkable, allow me to quote the authors:
The evolution of fishes to tetrapods (four-limbed vertebrates) was one of the most important transformations in vertebrate evolution.  Hypotheses of tetrapod origins rely heavily on the anatomy of a few tetrapod-like fish fossils from the Middle and Late Devonian period (393–359 million years ago). These taxa—known as elpistostegalians—include Panderichthys, Elpistostege, and Tiktaalik, none of which has yet revealed the complete skeletal anatomy of the pectoral fin.  Here we report a 1.57-metre-long articulated specimen of Elpistostege watsoni from the Upper Devonian period of Canada, which represents—to our knowledge—the most complete elpistostegalian yet found.  High-energy computed tomography reveals that the skeleton of the pectoral fin has four proximodistal rows of radials (two of which include branched carpals) as well as two distal rows that are organized as digits and putative digits.  Despite this skeletal pattern (which represents the most tetrapod-like arrangement of bones found in a pectoral fin to date), the fin retains lepidotrichia (fin rays) distal to the radials.  We suggest that the vertebrate hand arose primarily from a skeletal pattern buried within the fairly typical aquatic pectoral fin of elpistostegalians.  Elpistostege is potentially the sister taxon of all other tetrapods, and its appendages further blur the line between fish and land vertebrates.
Well, that seems like a slam-dunk to me.  An amphibian-like limb bone arrangement -- with fish-like fin rays at the end of it.

No transitional forms, my ass.

[Image licensed under the Creative Commons Placoderm2, Elpistostege watsoni, CC BY-SA 4.0]

Study lead author Richard Cloutier said basically the same thing, but more politely, in an interview with Science Daily: "The origin of digits relates to developing the capability for the fish to support its weight in shallow water or for short trips out on land.  The increased number of small bones in the fin allows more planes of flexibility to spread out its weight through the fin...  The other features the study revealed concerning the structure of the upper arm bone or humerus, which also shows features present that are shared with early amphibians.  Elpistostege is not necessarily our ancestor, but it is closest we can get to a true 'transitional fossil', an intermediate between fishes and tetrapods."

So there you have it.  Evolution delivers again.  I'm not expecting this will convince the creationists -- probably nothing would -- but at least it's one more fantastic piece of evidence for anyone who's on the fence.  Now y'all'll have to excuse me, because I'm off to the kitchen to get another cup of coffee, and it's going to take me an infinite amount of time to get there, so I better get started.






Tuesday, March 31, 2020

Fungus fracas

I suppose it's kind of a forlorn hope that popular media starts doing a better job of reporting on stories about science research.

My most recent example of attempting to find out what was really going on started with an article from Popular Mechanics sent to me by a friend, called "You Should Know About This Chernobyl Fungus That Eats Radiation."  The kernel of the story -- that there is a species of fungus that has evolved extreme radiation tolerance, and apparently now uses high-energy ionizing radiation to power its metabolism -- is really cool, and immediately put me in mind of the wonderful line from Ian Malcolm in Jurassic Park -- "Life finds a way."

There were a few things about the article, though, that made me give it my dubious look:


The first was that the author repeatedly says the fungus is taking radiation and "converting it into energy."  This is a grade-school mistake -- like saying "we turn our food into energy" or "plants convert sunlight into energy."  Nope, sorry, the First Law of Thermodynamics is strictly enforced, even at nuclear disaster sites; no production of energy allowed.  What the fungus is apparently doing is harnessing the energy the radiation already had, and storing it as chemical energy for later use.  The striking thing is that it's able to do this without its tissue (and genetic material) suffering irreparable damage.  Most organisms, upon exposure to ionizing radiation, either end up with permanently mutated DNA or are killed outright.

Apparently the fungus is able to pull off this trick by having huge amounts of melanin, a dark pigment that is capable of absorbing radiation.  In the melanin in our skin, the solar energy absorbed is converted to heat, but this fungus has hitched its melanin absorbers to its metabolism, allowing it to function a bit like chlorophyll does in plants.

Another thing that made me wonder was the author's comment that the fungus could be used to clean up nuclear waste sites.  This put me in mind of a recent study of pillbugs, little terrestrial crustaceans that apparently can survive in soils contaminated with heavy metals like lead, cadmium, and mercury.  Several "green living" sites misinterpreted this, and came to the conclusion that pillbugs are somehow "cleaning the soil" -- in other words, getting rid of the heavy metals entirely.  Of course, the truth is that the heavy metals are still there, they're just inside the pillbug, and when the pillbug dies and decomposes they're right back in the soil where they started.  Same for the radioactive substances in Chernobyl; the fungus's ability to use radiation as a driver for its metabolism doesn't mean it's somehow miraculously destroyed the radioactive substances themselves.

Anyhow, I thought I'd dig a little deeper into the radioactive fungus thing and see if I could figure out what the real scoop was, and I found an MSN article that does a bit of a better job at describing the radiation-to-chemical-energy process (termed radiosynthesis), and says that the scientists investigating it are considering its use as a radiation blocker (not a radiation destroyer).  Grow it on the walls of the International Space Station, where long-term exposure to cosmic rays is a potential health risk to astronauts, and it might not only shield the interior but use the absorbed cosmic rays to fuel its own growth.

Then I saw that the MSN article named the actual species of fungus, Cryptococcus neoformans.  And when I read this name, I said, "... wait a moment."

Cryptococcus neoformans is a fungal pathogen, responsible for a nasty lung infection called cryptococcosis.  It's an opportunist, most often causing problems in people with compromised immune systems, but once you've got it it's hard to get rid of -- like many fungal infections, it doesn't respond quickly or easily to medication.  And if it becomes systemic -- escapes from your lungs and infects the rest of your body -- the result is cryptococcal meningitis, which has a mortality rate of about 20%.

So I'm not really all that sanguine about painting the stuff on the interior walls of the ISS.

Anyhow, all this is not to say the fungus and its evolutionary innovation are not fascinating.  I just wish science reporting in popular media could do a better job.  I know journalists can't put in all the gruesome details and technical jargon, but boiling something down and making it understandable does not require throwing in stuff that's downright misleading.  I probably come off as a grumpy curmudgeon for even pointing this out, but I guess that's inevitable because I am a grumpy curmudgeon.

So while they're at it, those damn journalists should get off my lawn.
