Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Maybe it's because I'm fundamentally a homebody, but I find it really hard to understand what could have driven our distant ancestors to head out into uncharted territory -- no GPS to guide them, no guarantee of safety, no knowledge of what they might meet along the way.
A couple of years ago I wrote a piece about the extraordinary island-hopping accomplished by the ancient Polynesians -- going from one tiny speck of land to another, crossing thousands of kilometers of trackless ocean in dugout canoes. We don't know what motivated them, whether it was lack of resources on their home islands, being driven away by warfare, or simple curiosity. But whatever it was, it took no small amount of skill, courage, and willingness to accept risk.
The wanderings of the ancestral Polynesians, though, are hardly the only example of ancient humans' capacity for launching out into the unknown. Two papers this week look at other examples of our forebears' wanderlust -- still, of course, leaving unanswered the questions about why they felt impelled to leave home and safety for an uncertain destination.
The first, which appeared in PLOS ONE, looks at the similarity of culture between Bronze Age Denmark and southwestern Norway, and considers whether the inhabitants of Denmark took the longer (but presumably safer) seven-hundred-kilometer route -- crossing the Kattegat into southern Sweden and then hugging the coast until they reached Norway -- or the much shorter (but riskier) hundred-kilometer crossing over open ocean to go there directly.
While the direct route was more dangerous, it seems likely that's what they did. If they'd skirted along the coastline, you'd expect there to be more similarity in archaeological sites along the way, in the seaside areas of southern Sweden. There's not. It appears that they really did launch off in paddle-driven boats across the stormy seas between Denmark and Norway, four thousand years ago.
"These new agent-based simulations, applied with boat performance data of a Scandinavian Bronze Age type boat," the authors write, "demonstrate regular open sea crossings of the Skagerrak, including some fifty kilometers of no visible land, likely commenced by 2300 B.C.E., as indicated by archaeological evidence."
Going back twice as far in time, a paper this week in Nature describes evidence that seafarers from what is now Italy crossed a hundred kilometers of ocean to reach the island of Malta. The cave site of Latnija, in the Mellieħa region of northern Malta, has yielded bones of animals that show distinct signs of butchering and cooking -- and that have been dated to 8,500 years ago.
"We found abundant evidence for a range of wild animals, including Red Deer, long thought to have gone extinct by this point in time," said Eleanor Scerri, of the University of Malta, who was the paper's lead author. "They were hunting and cooking these deer alongside tortoises and birds, including some that were extremely large and extinct today... The results add a thousand years to Maltese prehistory and force a re-evaluation of the seafaring abilities of Europe's last hunter-gatherers, as well as their connections and ecosystem impacts."
What always strikes me about this sort of thing is wondering not only what fueled their wanderlust, but how they even knew there was an island out there to head to. I know that patterns of clouds can tell seafarers they're nearing land, but still -- launching off into the open ocean, hoping for the best and trusting that there's a safe landing out there somewhere, seems to me somewhere between brave and utterly foolhardy.
I guess my ancestors were made of sterner stuff than me. I'm okay with that. Being a bit of a coward has its advantages. As Steven Wright put it, "Eagles may soar, but weasels don't get sucked into jet engines."
So if y'all want to, you can take off in your dugout canoes for parts unknown, but I'm gonna stay right here where it's (relatively) safe. I suppose it's a good thing our forebears had the courage they did, because it's how we got here. And I hope they wouldn't be too embarrassed by my preference for sitting on my ass drinking coffee with cream and sugar rather than spending weeks at sea nibbling on dried meat and hardtack and hoping like hell those clouds over there mean there's dry land ahead.
What comes to mind when you think about the Isle of Skye?
Chances are, it's one of three things.
The first is the stunningly beautiful scenery. It's the largest of the Inner Hebrides, and is noted for its rugged, rocky hills, craggy coastline, and emerald-green meadows.
Sidney Richard Percy, Loch Coruisk (1874) [Image is in the Public Domain]
Second, history buffs will remember Skye as the place where "Bonnie Prince Charlie" (Charles Edward Stuart) fled, with the help of Flora MacDonald, after Scotland's devastating loss at the Battle of Culloden. Stuart's repeated attempts afterward to claim the thrones of England and Scotland never came to much. He died in exile in Rome in 1788 at the age of 67, depressed and miserable -- but even today, he remains a symbol to many Scots of "what might have been."
Third, if you're someone who likes to indulge in a wee dram on occasion, you probably know that it's home to the famous Talisker and Torabhaig distilleries, which produce absolutely fantastic single-malt whiskies.
I doubt, somehow, that many people would come up with a fourth thing Skye deserves to be famous for, which was the subject of a paper in PLOS ONE this week: it is one of the best sites in the world for Middle Jurassic fossils.
167 million years ago, Scotland was about at the latitude of the Tropic of Cancer, and was a hot, lush swampy rainforest. Prince Charles's Point -- the place where Bonnie Prince Charlie supposedly landed after making it safely "over the sea to Skye," in the words of the Skye Boat Song -- was a shallow, sandy-bottomed lagoon.
And it was home to some big dinosaurs.
The paper describes tracks left by huge, long-necked sauropods like Cetiosaurus -- and by the carnivorous theropods that hunted them, such as Megalosaurus.
The Cetiosaurus tracks are as big around as car tires, and the study found individual trackways twelve meters long -- made, the researchers said, by dinosaurs ambling about, probably in search of the huge amounts of food it took to keep an animal that size going.
It's hard to imagine the rugged, windswept islands of the Hebrides as they were then -- something more like today's Florida Keys, and home to a whole assemblage of mid-Mesozoic fauna. Not only the big theropods and sauropods, such as the ones that left the footprints on the Isle of Skye, but pterodactyls flying overhead, and in the seas, the superficially dolphin-like ichthyosaurs -- and the long-necked plesiosaurs that still come up in conversations about Loch Ness, only a hundred miles east as the Rhamphorhynchus flies.
"O Earth, what changes hast thou seen?" Tennyson mused -- "There, where the long road roars, has been the stillness of the central sea." And those changes are still occurring. The Atlantic Ocean is still progressively widening; a complex series of faults is making all of the Anatolian region twist counterclockwise; the "Horn of Africa" is rifting away from the rest of the continent and eventually will drift off into the Indian Ocean; Australia is on a collision course with Southeast Asia. We humans leave our own footprints in the sand, but how ephemeral are they? Will paleontologists 167 million years from now know of our presence, from traces left behind on whatever configuration the continents will then have?
It recalls the haunting lines of another poet, Percy Bysshe Shelley, which seem a fitting place to end:
I met a traveller from an antique land
Who said: Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed.
And on the pedestal these words appear:
"My name is Ozymandias, King of Kings:
Look on my works, ye Mighty, and despair!"
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.
The last two posts have been about biological extinction; today's, to continue in the same elegiac tone, is about language extinction.
According to Ethnologue, of the Earth's current seven-thousand-odd languages, 3,143 -- around forty-four percent -- are in danger of going extinct. Languages become endangered for a number of reasons: when children no longer learn them as their first language, when there's government suppression, or when a different language has become the primary means of communication, governance, and commerce.
Of course, all three of those frequently happen at the same time. The indigenous languages of North and South America and Australia, for example, have proven particularly susceptible to these forces. In all three places, English, Spanish, and Portuguese have superseded hundreds of languages, and huge swaths of cultural knowledge have been lost in the process.
In almost all cases, there's no fanfare when a language dies. They dwindle, communication networks unravel, the average age of native speakers moves steadily upward. Languages become functionally extinct when only a few people are fluent, and at that point even those last holdouts are already communicating in a different language with all but their immediate families. When those final few speakers die, the language is gone -- often without ever having been studied adequately by linguists.
Sometimes, however, we can pinpoint fairly closely when a language died. Curiously, this is the case with Ancient Egyptian. This extinct language experienced a resurgence of interest in the early nineteenth century, due to two things -- the British and French occupations of Egypt, which resulted in bringing to Europe hundreds of priceless Egyptian artifacts (causing "Egyptomania" amongst the wealthy), and the stunning decipherment of hieroglyphics and Demotic by the brilliant French linguist Jean-François Champollion.
For a millennium and a half prior to that, though, Ancient Egyptian was a dead language. And as weird as it sounds, we know not only the exact date of its rebirth, but the date it took its last breath. Champollion shouted "Je tiens mon affaire!" ("I've figured it out!") to his brother on 14 September 1822.
And the last inscription was made by the last known literate native speaker of Ancient Egyptian on 24 August 394 C.E.
It's called the "Graffito of Esmet-Akhom," and was carved on a temple wall at Philae. It shows the falcon-god Mandulis, who was a fairly recent invention at the time, and like the Rosetta Stone, it bears an inscription in both hieroglyphics and Demotic:
Before Mandulis, son of Horus, by the hand of Esmet-Akhom, son of Nesmeter, the Second Priest of Isis, for all time and eternity. Words spoken by Mandulis, lord of the Abaton, great god.
I, Esmet-Akhom, the Scribe of the House of Writings(?) of Isis, son of Nesmeterpanakhet the Second Priest of Isis, and his mother Eseweret, I performed work on this figure of Mandulis for all time, because he is fair of face towards me. Today, the Birthday of Osiris, his dedication feast, year 110.
[Nota bene: The "year 110" is not, of course, by the Julian calendar; in Egypt, governmental records were dated using the number of years since the accession of the Roman emperor Diocletian in 284 C.E. The "Birthday of Osiris" is what we now call the 24th of August.]
At this point, Greek and Latin had already superseded Egyptian in written records, so Esmet-Akhom was the last of a dying breed. Ancient Egyptian lingered on for a while as a spoken language amongst the working classes; liturgical Coptic is a direct descendant. But any final vestiges of Egyptian as a living language were eradicated in the seventh century when the Graeco-Roman state of Egypt fell to the Arabs.
So it may well be that the priests of Esmet-Akhom's family were the last people capable of reading and writing Egyptian, at least until Champollion came along.
I know change is the way of things, and that given the interconnectedness of the world today, widely spoken languages will inevitably gain more and more of an edge over minority languages. (Consider that a full forty percent of the Earth's people speak one of just eight languages -- Mandarin, Spanish, English, Arabic, Hindi, Bengali, Portuguese, and Russian.) But still, I can't help finding the loss of linguistic diversity, and of the cultural information those dead languages once encoded, to be sad.
And you have to wonder how Esmet-Akhom himself felt, writing his defiantly confident inscription "for all time and eternity" at the end of the fourth century. Did he know that Egyptian was, even at that point, moribund? A means of communication that had existed for over four thousand years was on the road to extinction; what was left was only a whisper, and even that would soon be silenced.
Has to make you wonder what linguistic shifts will occur in the next thousand years.
My office window looks out over our raised-bed gardens and into our front yard. It's still chilly early spring here in upstate New York -- things won't really start greening up for another couple of weeks -- but we're seeing signs of the coming explosion of growth that tell us warm weather will soon be here. We actually got out and did some yard work this past weekend, despite clouds and a high of 45°F. Mostly clean-up that never got done last fall, but we did plant the early peas and lettuce, transplanted some clumps of chives that were taking over one corner of the vegetable garden, and moved a yucca plant that was getting a little too enthusiastic.
From where I sit right now, I can see our bit of grassy lawn, but also the bare branches of a purple lilac, a couple of still-leafless roses, the gnarled branches of a sawtooth oak, the reddish buds of peonies just starting to unfurl, the bright green spikes of daylily leaves, the stubble of the ornamental Miscanthus grass that by midsummer will be taller than I am. Clumps of brilliant daffodils, crocus, scilla, and chionodoxa already in full flower.
All cool stuff, promising lots of beauty to come. But you know what all of the plants I've mentioned have in common?
Not one is native to the United States.
Not even the grass. Just about all the lawn grasses grown in North America are European natives. Chances are, unless you have deliberately set out to do natives-only landscaping, the vast majority of the plants in your yard are imports as well. Of everything I can see from my window, only one is native to upstate New York -- a hedge of ninebark (Physocarpus opulifolius). Two others are eastern natives but originally from a good deal farther south, the Carolina silverbell (Halesia carolina) and black locust (Robinia pseudoacacia).
Thing is, like everything, the situation with exotics is complex. Not all exotics are a problem. The charming little bright-blue scilla (Scilla siberica) that pop up everywhere in the very early spring, including all over my lawn, are pretty harmless. (Contrary to the name, they're not native to Siberia, but to southwestern Russia, the Caucasus, and northern Turkey.) Garlic mustard (Alliaria petiolata), on the other hand, is an unmitigated thug -- since its introduction in the 1800s, it has spread like wildfire, each plant producing hundreds of seeds, and in many areas has crowded out native plant species. It's also toxic to a lot of native herbivores, including several species of butterflies. We've tried for years to rid our yard of this nuisance, without much success.
And don't get me started about multiflora rose (Rosa multiflora). The Wikipedia page says it's native to Asia, but I'm convinced it was imported directly from hell. It has pretty white flowers, but more than makes up for that by razor-sharp thorns borne on long, tough, wiry stems that seem to have a deliberate vicious streak. In general I love roses, but this one is an absolute hazard.
Of course, here in New York, we still have a great many native species that are doing well. Consider the situation in Hawaii, though -- where on the more populated islands, there are barely any native species left.
Oh, it looks good. On O'ahu, there are lush forests -- guava, plumeria, cinnamon, peppertree, Kahili ginger, several species of acacia and eucalyptus, banyan, satinleaf -- and flocks of showy birds like the red-billed leiothrix, red-whiskered bulbul, zebra dove, common mynah, and red-crested cardinal. But not one is native. The Hawaiian lowland ecosystems were completely destroyed for agriculture and settlement; accidental introduction of the southern house mosquito (Culex quinquefasciatus), and the avian malaria it carried, wiped out nearly all of the birds living below five hundred meters of elevation. If you want to see native Hawaiian species -- what's left of them -- you have to go up into the mountains, and even there, they're struggling to hang on.
Aarhus University ecologist Jens-Christian Svenning, who has been studying Hawaii's ecology for almost a decade, calls the current situation a "freakosystem." What's interesting, Svenning says, is that Hawaii has re-established a healthy, interactive community of species -- just not the ones that were there only two hundred years ago.
"These are wild but changed ecosystems," Svenning said. "They have passed some critical threshold which means they are unlikely to ever go back to how they were before. If you removed all people from the planet, Hawaii would be on a different evolutionary ecological trajectory going forward."
Hawaii's iconic plumeria trees, whose flowers are used to make leis, were introduced from the Caribbean in the 1800s [Image licensed under the Creative Commons Varun Pabrai, Plumeria rubra-4, CC BY-SA 4.0]
Hawaii, however, is just the canary in the coal mine. A study by Svenning and his colleagues indicates that between thirty and forty percent of all terrestrial ecosystems have "transformed into novel states"; that percentage is projected to rise to fifty by 2100.
It's not, of course, that these kinds of changes can't happen through natural processes. Three years ago I wrote a piece about the effect that continental collisions can have on the species that live there; and, after all, even less dramatic events than that can lead to extinction. What strikes me here is the speed with which it's happening. We've tampered with ordinary ecological succession with no forethought, and as a result, triggered what (to judge by the rates) will rank up there with the "Big Five" mass extinctions -- the Ordovician, Devonian, Permian-Triassic, Late Triassic, and Cretaceous.
So maybe it's time to start thinking about this.
It's too late to undo the silent invasion of exotic species; here in upstate New York, I'm afraid lawn grass is here to stay, as are garlic mustard and multiflora rose, and lilacs, peonies, and daffodils. At least the last three are pretty and don't seem to be especially harmful. But we'd better wise up about what we're doing, and fast. Because remember that as prideful as we get sometimes, to the biosphere we're just another animal species. We're no more guaranteed survival than anything else.
Let's hope we learn that lesson before it's too late.
It's estimated that of the five billion species of organisms that have ever existed on Earth, something like 99.99% are extinct. This is with allowances for the fact that -- as I pointed out in a post a couple of years ago -- the word species is one of the mushiest terms in all of science, one of those words that you think you can define rigorously until you realize that every definition you come up with has dozens of exceptions or qualifications.
Be that as it may, there's no doubt that extinction has been the fate of virtually all of the twigs on the Great Tree of Life, from charismatic megafauna like Apatosaurus and the saber-toothed cat all the way down to single-celled organisms that lived and died hundreds of millions of years ago and left no fossil record whatsoever.
Some of the more recent extinctions, though, always strike nature-loving types like myself as a tragedy. The Dodo usually comes up, and the Thylacine (or "Tasmanian wolf," although it wasn't a wolf and wasn't limited to Tasmania), and the maybe-it-still-exists, maybe-it-doesn't Ivory-billed Woodpecker. The Passenger Pigeon, which before 1850 was the most abundant bird in eastern North America, flying in flocks numbering in the hundreds of millions, was hunted to extinction in only fifty years -- the last wild Passenger Pigeon was shot in Ohio in 1900.
Wouldn't it be cool, many of us have thought, to bring back some of these lost organisms? The Jurassic Park scenario is a pipe dream; amber notwithstanding, no intact DNA has ever been found from that long ago. But what about more recently-extinct species?
A company called Colossal Biosciences, run by Ben Lamm and George Church, claims to have produced three Dire Wolf pups (Aenocyon dirus) using DNA extracted from a tooth and a skull found in Ohio and Idaho, respectively -- genetically altering the fertilized eggs of a gray wolf, and gestating the embryos in ordinary female dogs. Here's one of the results:
[Image credit: Colossal Biosciences]
You're looking at a photograph of an animal that hasn't lived for ten thousand years.
My initial "good lord this is cool" reaction very quickly faded, though, but not because of some sort of "We're playing God!" pearl-clutching. Lamm, who apparently has huge ambitions and an ego to match, sees no problem with any of it, and has plans to bring back the Dodo and the Woolly Mammoth, and others as well. All, of course, big flashy animals, because that's what attracts investors; no one is going to put millions of dollars into bringing back the Ouachita pebblesnail.
But even that isn't the actual problem here. Lamm himself brushed against the real issue in his interview with The New Yorker (linked above), when someone inevitably brought up Jurassic Park. "That was an exaggerated zoo," Lamm said. "This is letting the animals live in their natural habitats."
No. No, it's not.
Because these species' natural habitats don't exist anymore.
Even the Dodo, which went extinct in 1662, couldn't be reintroduced to Mauritius Island today; the feral cats, rats, dogs, and pigs that helped drive it to extinction in the first place still live in abundance on the island. What would the de-extinction team do? Create a fenced, guarded reserve for it?
How is that not an "exaggerated zoo?"
And the Dire Wolf is an even more extreme example. It originally lived throughout much of the continental United States and down into mountainous regions of Central America. Adults could weigh up to seventy kilograms, so they could take down good-sized prey. If you could create a breeding population of Dire Wolves, where would you put them that they wouldn't come into contact with livestock, pets... and humans?
The truth is sad but inevitable: the world the Dire Wolf lived in is gone forever. Whether what we have now is better or worse is a value judgment I'm not equipped to make. What I do know is that recreating these animals only to have them lead restricted lives in reserves for rich people to come gawk at is morally indefensible. They can never live in the wild again, so the options come down to a fenced-in reserve -- or letting them go extinct a second time.
As huge as the coolness factor is, we shouldn't be doing this. How about putting our time, money, and effort into not further fucking up what we still have? There are plenty of wildlife refuges worldwide that could benefit enormously from the money being sunk into this project. Or, maybe, working toward fighting Donald Trump's "cut down all the trees and strip mine the world" approach to the environment.
So after the first flush of "Wow," all Lamm and Church's accomplishment did was leave me feeling a little sick. There seems to be no end to human hubris, and it's sad that these beautiful animals have to be its showpiece.
Stars live most of their lives in an equilibrium between two forces: the inward pull of their own gravity, and the outward pressure from the heat generated by fusion in their cores. As long as there is plenty of hydrogen left to power fusion, those forces are equal and opposite, and the star is stable.
When the hydrogen is depleted, though, the balance shifts. The core cools, and gravitational collapse resumes. This, however, heats things up -- recall the "ideal gas law" from high school chemistry: compress a gas, and its pressure and temperature both rise -- and the star begins to fuse the helium "ash" left over from hydrogen burning into carbon. Eventually that runs out, too, and the process repeats -- carbon to oxygen and silicon, and on up the scale until finally it gets to iron. At that point, there's nowhere to go; beyond iron, fusion becomes an endothermic (energy-requiring) reaction, and the star is pretty much out of gas.
What happens at this point depends on one thing: the star's initial mass. For a star the size of the Sun, the later stages liberate enough energy to balloon the outer atmosphere into a red giant, and when the final collapse happens, it blows off that atmosphere into a wispy bubble called a planetary nebula.
The Cat's Eye Nebula (NGC 6543) [Image is in the Public Domain courtesy of NASA]
What's left at the center is the exposed core of the star -- a white dwarf, still glowing from its residual heat. It doesn't collapse further because it is held up by electron degeneracy pressure -- the resistance of electrons to occupying the same quantum state, a consequence of the Pauli Exclusion Principle. But it's no longer capable of fusion, so it will simply cool and darken over the next few billion years.
For heavier stars -- with initial masses of very roughly eight to twenty times that of the Sun -- electron degeneracy is not sufficient to halt the collapse. The electrons are forced into the nuclei of the atoms, and what's left is a densely-packed glob of neutrons called, appropriately enough, a neutron star. So much energy is liberated by this process that the result is a supernova; the atmosphere is blown away completely, and the collapsed core, which is made of matter so dense that a teaspoonful would weigh as much as Mount Everest, spins faster and faster because of the Law of Conservation of Angular Momentum, in some cases reaching thirty rotations per second. This whirling stellar core is called a pulsar.
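If you want to see where those thirty rotations per second come from, the figure falls straight out of conservation of angular momentum. Here's a minimal back-of-the-envelope sketch -- the core size, neutron star radius, and starting rotation period below are illustrative numbers I picked for the example, not measurements of any real star:

```python
# A minimal sketch of pulsar spin-up via conservation of angular
# momentum. Every number below is an illustrative assumption.

# Model the collapsing core as a uniform sphere: I = (2/5) M R^2.
# With mass unchanged, I1 * w1 = I2 * w2 means the spin rate
# scales as (R_before / R_after)^2.

R_before_km = 6_000.0   # assumed iron core, roughly white-dwarf-sized
R_after_km = 12.0       # assumed neutron star radius
P_before_s = 8_000.0    # assumed initial rotation period (~2.2 hours)

spin_up = (R_before_km / R_after_km) ** 2
P_after_s = P_before_s / spin_up

print(f"Spin-up factor: {spin_up:,.0f}x")             # 250,000x
print(f"New period: {P_after_s * 1000:.0f} ms "
      f"({1 / P_after_s:.0f} rotations per second)")  # ~31 per second
```

Shrink a leisurely few-hour rotation by a factor of five hundred in radius, and you get a core whirling tens of times per second.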
For stars even larger than that, though, even neutron degeneracy pressure isn't enough to stop the gravitational collapse. In fact, nothing is. The supernova and subsequent collapse lead to the formation of a singularity -- a black hole.
So that's the general scheme of things, but keep in mind that this is the simplest case. Like just about everything in science, reality is more complex.
Suppose you had an ordinary star like the Sun, but it was in a binary system. The Sun-like star reaches the end of its life as a white dwarf, as per the above description. Its partner, though, is still in stable middle age. If the two are close enough, the dense, compact white dwarf will begin to funnel material away from its partner, siphoning off the outer atmosphere, and -- this is the significant part -- gaining mass in the process.
Artist's conception of the white dwarf/main sequence binary AE Aquarii [Image is in the Public Domain courtesy of NASA]
The brilliant Indian physicist Subrahmanyan Chandrasekhar figured out that this process can only go on for so long -- eventually the white dwarf gains enough mass that its gravity exceeds the outward pressure from electron degeneracy. At a mass of 1.4 times that of the Sun -- the Chandrasekhar Limit -- the threshold is reached, and the result is a sudden and extremely violent collapse and explosion called a type Ia supernova. This is one of the most energetic events known -- in a few seconds, it liberates on the order of 10^44 joules of energy (that's a 1 followed by 44 zeroes).
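To get a feel for how much energy that is, compare it to the Sun's entire output over its main-sequence lifetime -- a quick sketch using standard round numbers for solar luminosity and lifespan:

```python
# How much is 1e44 joules? Compare the energy of a type Ia supernova
# (figure from the text) to the Sun's entire main-sequence output,
# using standard round numbers for solar luminosity and lifetime.

SUPERNOVA_ENERGY_J = 1e44      # type Ia energy release
SOLAR_LUMINOSITY_W = 3.8e26    # the Sun's power output
SECONDS_PER_YEAR = 3.156e7
SUN_LIFETIME_YR = 1e10         # ~ten billion years on the main sequence

sun_lifetime_output_J = SOLAR_LUMINOSITY_W * SUN_LIFETIME_YR * SECONDS_PER_YEAR

print(f"Sun's lifetime output: {sun_lifetime_output_J:.1e} J")   # ~1.2e44 J
print(f"Supernova / Sun ratio: {SUPERNOVA_ENERGY_J / sun_lifetime_output_J:.2f}")
```

In other words, a type Ia supernova releases, in a few seconds, roughly what the Sun will emit over its entire ten-billion-year life.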
So this is why I got kind of excited when I read a paper in Nature Astronomy about a binary star system only 150 light years away made of two white dwarf stars, which are spiraling inward and will eventually collide.
Because that would be the type Ia supernova to end all type Ia supernovas, wouldn't it? No gradual addition of little bits of mass at a time until you pass the Chandrasekhar Limit; just a horrific, violent collision. And 150 light years is close enough that it will be a hell of a fireworks show. Estimates are that it will be ten times brighter than the full Moon. But at that distance, it won't endanger life on Earth, so it'll be the ideal situation -- a safe, but spectacular, event.
The two stars are currently orbiting their common center of mass at a distance of about one-sixtieth of that between the Earth and the Sun, completing an orbit every fourteen hours. Immediately before collision, that orbital period will have dropped to the frantic pace of one revolution every thirty seconds. After that...
... BOOM.
But this was the point where I started thinking, "Hang on a moment." Conservation of energy suggests that to go from a fourteen-hour orbit with a radius of around 2.5 million kilometers to a thirty-second orbit with a radius of close to zero, the system would have to shed an enormous amount of orbital energy -- which it does, gradually, by radiating gravitational waves. That kind of energy loss doesn't happen quickly. So how long will this process take?
And there, in the paper, I found it.
This spectacular supernova isn't going to happen for another 23 billion years.
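Remarkably, you can sanity-check that figure with two textbook results: Kepler's third law gives the current separation from the fourteen-hour period, and the classic Peters (1964) formula estimates how long gravitational-wave emission takes to shrink a circular orbit to nothing. The white dwarf masses below -- about 0.8 and 0.7 times the Sun's -- are illustrative stand-ins, not the paper's exact values:

```python
import math

# Sanity-checking the 23-billion-year figure. The two masses are
# illustrative assumptions, not the values reported in the paper.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m
YEAR = 3.156e7      # seconds per year

m1 = 0.8 * M_SUN    # assumed mass of white dwarf 1
m2 = 0.7 * M_SUN    # assumed mass of white dwarf 2
P = 14 * 3600.0     # orbital period from the text: fourteen hours

# Kepler's third law: a^3 = G (m1 + m2) P^2 / (4 pi^2)
a = (G * (m1 + m2) * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"Separation: {a / 1e9:.1f} million km (~1/{AU / a:.0f} AU)")

# Peters (1964) inspiral time for a circular orbit:
# t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
t = 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
print(f"Inspiral time: {t / YEAR / 1e9:.0f} billion years")
```

Plug in the numbers and out come a separation of roughly one-sixtieth of the Earth-Sun distance and an inspiral time of about 23 billion years. The math, alas, checks out.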
I don't know about you, but even in my most optimistic moments I don't think I'm going to live for another 23 billion years. So this whole thing gives new meaning to the phrase "advance notice."
You know, I really think y'all astrophysicists need to step up your game, here. You get our hopes up, and then say, "Well, of course, you know, astronomical time scales..." Hell, I've been waiting for Betelgeuse to blow up since I was like fifteen years old. Isn't fifty years astronomical enough for you?
And now, I find out that this amazing new discovery of two madly-whirling white dwarf stars on an unavoidable collision course is going to take even longer. To which I say: phooey.
I know your priority isn't to entertain laypeople, but c'mon, have a heart. Down here all we have to keep our attention is the ongoing fall of civilization, and that only gets you so far. Back in the day, stuff like comets and supernovas and whatnot were considered signs and portents, and were a wonderful diversion from our ancestors' other occupations, such as starving, dying of the plague, and being tortured to death by the Inquisition. Don't you think we deserve a reason to look up, too? In every sense of the phrase?
So let's get a move on, astrophysicists. Find us some imminent stellar hijinks to watch. I'll allow for some time in the next few months. A year at most. I think that's quite generous, really.
And if you come up with something good, I might even forgive you for getting my hopes up about something amazing that won't happen for another 23 billion years.
Most likely all of you know about Sagittarius A*, the supermassive black hole that sits at the center of the Milky Way Galaxy.
It's hard to talk about it without lapsing into superlatives. It has a mass about 4.3 million times that of the Sun. Its event horizon -- the "point of no return," the closest you can get to a black hole without being trapped by its gravitational pull -- has a radius of 11.3 million kilometers. It sits at the center of a fifteen-light-year-wide whirlpool of gas and dust called the accretion disk, which we know about because the material in it is moving so fast it has heated up to as high as ten million degrees Celsius, resulting in a steady emission of high-frequency x-rays.
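For the curious, the event horizon size follows directly from the mass via the Schwarzschild radius, r_s = 2GM/c². A quick sketch -- note that plugging in the 4.3-million-solar-mass figure gives about thirteen million kilometers, in the same neighborhood as the radius quoted above (published numbers vary a bit with the mass estimate adopted):

```python
# Event horizon size from the mass, via the Schwarzschild radius
# r_s = 2 G M / c^2. Mass figure is the one quoted in the text.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m

M = 4.3e6 * M_SUN
r_s = 2 * G * M / c**2

print(f"Schwarzschild radius: {r_s / 1e9:.1f} million km")   # ~12.7 million km
print(f"...about {r_s / R_SUN:.0f} times the radius of the Sun")
```

That's a "point of no return" roughly eighteen times wider than the Sun itself.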
It's curious that something this luminous wasn't immediately obvious to astronomers. First, it doesn't emit a lot of visible light, and we didn't have instruments capable of detecting the radio waves that were its first fingerprint until 1933. By the 1970s, more precise observations had shown that whatever the source at the galactic center was, it was extremely compact. It wasn't until 1994 that Charles H. Townes and Reinhard Genzel showed that its mass and diameter were consistent with its being a black hole. Another reason it took that long is that between us and the center of the galaxy there are massive dust clouds, so any visible light it does emit (or which is emitted by the dense clouds of glowing gas near it) mostly gets blocked. (Even so, looking toward the center of the Milky Way in the constellation Sagittarius, visible where I am in late summer, is pretty damn spectacular.)
The third reason we don't get the full force of whatever electromagnetic radiation Sagittarius A* emits is a fortunate one for us: because of the black hole's immense magnetic field, any bursts of light tend to get funneled away along the axis of its spin, creating jets that move perpendicular to the galactic plane. We, luckily, are comfortably out in the stellar suburbs, in one of the Milky Way's spiral arms. Our central black hole is fairly quiet, for the most part, but even so, looking down the gun barrel of its magnetic field axis would not be a comfortable place to reside.
The reason this comes up is some new research out of the University of Colorado Boulder, which used data from the James Webb Space Telescope to solve a long-standing question: why, given the high density of hydrogen and helium gas near the galactic center, is the rate of star formation there anomalously low? The region in question, called Sagittarius C, lies about two hundred light years from the central black hole (by comparison, the Solar System is twenty-six thousand light years away). And what the team of researchers found is that threading the entire region are filaments of hot, bright plasma, some of them several light years in length.
The reason for both the filaments and the low star formation rate is almost certainly the black hole's magnetic field, which acts to compress any gas that's present along the field lines, heating it up dramatically. This, in turn, creates an outward pressure that makes the gas resist collapsing and forming stars.
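To see why a strong field can prop gas up against collapse, compare magnetic pressure (B²/2μ₀) with ordinary thermal gas pressure (nkT). The field strength, density, and temperature in this sketch are order-of-magnitude stand-ins of my own choosing -- the paper doesn't supply these exact values -- but they illustrate how easily a milligauss-scale field can dominate:

```python
# Rough comparison of magnetic pressure, B^2 / (2 mu_0), to thermal
# gas pressure, n k T. All inputs are assumed order-of-magnitude
# stand-ins, not values from the paper.

MU_0 = 1.2566e-6    # vacuum permeability, SI units
K_B = 1.381e-23     # Boltzmann constant, J/K

B = 1e-7            # assumed field: ~1 milligauss, expressed in tesla
n = 1e10            # assumed gas density: 1e4 particles/cm^3, in m^-3
T = 100.0           # assumed gas temperature, K

P_magnetic = B**2 / (2 * MU_0)   # in pascals
P_thermal = n * K_B * T          # in pascals

print(f"Magnetic pressure: {P_magnetic:.1e} Pa")
print(f"Thermal pressure:  {P_thermal:.1e} Pa")
print(f"Magnetic / thermal: {P_magnetic / P_thermal:.0f}")   # a few hundred
```

With numbers like these, the magnetic pressure outweighs the thermal pressure by a couple of orders of magnitude -- plenty to keep a gas cloud from collapsing into stars.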
"It's in a part of the galaxy with the highest density of stars and massive, dense clouds of hydrogen, helium and organic molecules," said Samuel Crowe, who co-authored the paper, which appeared this week in The Astrophysical Journal. "It's one of the closest regions we know of that has extreme conditions similar to those in the young universe... Because of these magnetic fields, Sagittarius C has a fundamentally different shape, a different look than any other star forming region in the galaxy away from the galactic center."
It is, to put it mildly, a rough neighborhood.
It's staggering how far we've come in our understanding of what our ancestors called the "fixed stars" -- far from being eternal and unchanging, the night sky is a dynamic and ever-evolving place, and with new tools like the JWST we're finding out how much more we still have to learn. Something to think about the next time you look up on a clear, starry night. The peaceful, silent flickering, set against the velvet black background, is an illusion; the reality is far wilder -- and far more interesting.