Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, September 1, 2018

The Great Filter and the three f's

In yesterday's post, we looked at how the Drake Equation predicts the number of intelligent civilizations out there in the galaxy, and that more than one of the variables has been revised upward in the last few years because of recent research in astronomy.  This suggests that life is probably super-common in the universe -- and intelligent life undoubtedly is out there, as well.
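If you want to see how the arithmetic works, here's a minimal sketch of the calculation.  The parameter values are purely illustrative placeholders -- they are not the revised estimates from the recent research -- so plug in your own favorites and watch how wildly N swings.

```python
# Minimal sketch of the Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# All parameter values below are illustrative guesses, not measured quantities.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.5,      # new stars formed in the Milky Way per year
    f_p=1.0,         # fraction of stars with planets
    n_e=0.2,         # potentially habitable planets per planet-hosting star
    f_l=0.5,         # fraction of habitable planets where life arises
    f_i=0.1,         # fraction of those where intelligence evolves
    f_c=0.1,         # fraction of those that produce detectable signals
    lifetime=1000.0, # years such a civilization stays detectable (the crucial L)
)
print(f"Estimated detectable civilizations in the galaxy: {n:.2f}")
```

Notice that the whole estimate is hostage to L, the lifetime of a detectable civilization -- which is exactly where the Great Filter comes in.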

But we ended with a puzzle.  Physicist Enrico Fermi famously summed up the problem in four words: "Then where is everybody?"  That question was pointed enough when Fermi posed it back in 1950, a decade before Drake wrote down his equation, and it's even more pointed now; in the intervening decades, we've done huge amounts of surveying of the sky, looking for any sign of an extraterrestrial intelligence, and found... nothing.

Now, to be fair, "huge amounts of surveying" still covers a minuscule fraction of the stars out there.  All that would have to happen is the radio signal saying, "Hi, y'all, here we are!" hitting Earth while our radio telescopes were aimed at a different star, or tuned to a different frequency, and we could well miss it.

Messier 51, the Whirlpool Galaxy [Image courtesy of NASA/JPL]

But there's a more sinister possibility, and that possibility goes by the nickname of "The Great Filter."

I looked at this concept in a post a while back, especially apropos of the variable "L" in the Drake Equation -- once a planet hosts intelligent life, how long does it last?  If we were to time-travel two thousand years into the future, would there still be a human civilization, or are we doomed to destroy ourselves, either by our own fondness for weaponry capable of killing large numbers of people at once, or because our rampant population growth exceeded the planet's carrying capacity and we experienced what ecologists somewhat euphemistically call "overshoot-and-rebound"?

But today I want to look at the Great Filter in a larger perspective.  Given that most astronomers think that the Drake Equation leads to the conclusion that life, and even intelligent life, is common out there, Fermi's quip is well taken.  And the answers to that question can be sorted into three basic categories, which have been nicknamed the "three f's":
  1. We're first.
  2. We're fortunate.
  3. We're fucked.
Could we be the first planet in our region of the galaxy to harbor intelligent life?  It's certainly possible, especially given the time gap between the origin of life here (four-odd billion years ago) and our developing the technology not only to send, but to detect, signals from other planets (about fifty years ago).  Consider, for example, that if there were a civilization on Alpha Centauri at the technological stage we were at two hundred years ago, it might be a thriving society of highly intelligent individuals, but to us here on Earth it would be completely silent (and it wouldn't know if we were talking to it, either).

However, considering the number of stars with planets, even in our region of the Milky Way, I think that's unlikely.  Even if we were all on a similar timetable -- a contention that is not supported by what we know of stellar evolution -- it's nearly certain that there'd be someone out there at, or ahead of, our level of technology.  Add to that the fact that there are a lot of planet-hosting stars out there that are much older than the Sun, and I think option #1 is really not that likely.

Might we just be fortunate?  There were a number of hurdles we had to overcome to get where we are, none of which was at all a sure bet: the development of complex multicellular life, for instance, or the evolution of symbiosis between our cells and what would eventually become our mitochondria (allowing us not only to avoid the toxic reactivity of atmospheric oxygen, but to harness it for our energy production, an innovation that improved our energy efficiency by a factor of 18).  Neither of those was guaranteed, and although it's conceivable to have intelligent life that lacks those characteristics, it's kind of hard to imagine how it would advance this much.
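Where does that factor of 18 come from?  Presumably from the rough textbook comparison of ATP yields per glucose molecule, sketched below (the traditional figures; newer estimates put the aerobic yield somewhat lower, but the order of magnitude holds).

```python
# Rough textbook ATP yields per glucose molecule (traditional approximations;
# newer estimates put the aerobic figure closer to 30, but the ratio is similar).
atp_without_mitochondria = 2   # anaerobic glycolysis (fermentation) alone
atp_with_mitochondria = 36     # glycolysis plus aerobic respiration in mitochondria

gain = atp_with_mitochondria / atp_without_mitochondria
print(f"Energy-efficiency gain from mitochondria: roughly {gain:.0f}x")
```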

Then there's the evolution of sexual reproduction, which is critical not only because it's fun, but because it allows recombination of our genetic material each generation.  This lets us avoid two problems at once: genetically identical individuals all being susceptible to the same pathogens, and Muller's Ratchet (a problem faced by asexual species that is best understood as a genetic game of Telephone -- with each replication, mutations build up, and eventually they turn the DNA into nonsense).
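If you'd like to watch the ratchet click, here's a toy simulation -- the parameters are entirely made up for illustration -- of a small asexual population.  With no recombination, once the least-mutated genomes are lost by chance, nothing can rebuild them, and the best genome in the population only ever gets worse:

```python
import random

# Toy illustration of Muller's Ratchet (all parameters are made up for illustration):
# in a small asexual population, once the least-mutated genomes are lost by chance,
# there is no recombination to recreate them, so the best genome only gets worse.

POP_SIZE = 200       # number of individuals
MUTATION_RATE = 0.3  # chance of one new deleterious mutation per genome per generation
SELECTION = 0.02     # fitness cost per mutation carried
GENERATIONS = 2000

random.seed(1)
population = [0] * POP_SIZE  # each entry = deleterious mutations carried by one individual

for gen in range(GENERATIONS):
    # Parents are chosen in proportion to fitness, which falls with mutation load.
    weights = [(1 - SELECTION) ** load for load in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Asexual reproduction: each offspring inherits its parent's load, plus perhaps a new mutation.
    population = [load + (1 if random.random() < MUTATION_RATE else 0) for load in parents]
    if gen % 500 == 0:
        print(f"generation {gen:4d}: least-mutated genome carries {min(population)} mutations")
```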

But no one knows how likely the evolution of sexual reproduction is -- nor, honestly, if it's really as critical as I've suggested.

The last possibility, though -- "we're fucked" -- is the most alarming.  This postulates that the Great Filter lies ahead of us.  The reasons are varied, and all rather depressing.  It could be the "L" in the Drake Equation is a small number -- on the order of decades -- because we'll destroy ourselves somehow.  It could be that there are inevitable cosmic catastrophes that eventually wipe out the life on a planet, things like Wolf-Rayet stars and gamma-ray bursters, either of which would be seriously bad news if one went boom near the Solar System.

Then there's Elon Musk's worry, that intelligent civilizations eventually develop artificial intelligence, which backfires spectacularly.  In 2017 he urged a halt, or at least a slowdown, in AI research, because there's no reason to think a sentient AI would consider us all that valuable -- echoing a warning he'd been making for years: "With artificial intelligence," Musk said, "we are summoning the demon.  You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon?  Doesn’t work out."

But by far the most sinister idea is that we're doomed because eventually, a civilization reaches the point where they're able to send out radio signals.  We've been doing this ever since radio and television were invented, so there's an expanding bubble of our transmissions zooming out into the galaxy at the speed of light.  And the idea here is that we'll eventually attract the attention of a considerably more powerful civilization, which will respond by stomping on us.  Stephen Hawking actually thought this was fairly likely -- back in 2015, he said, "We don't know much about aliens, but we know about humans.  If you look at history, contact between humans and less intelligent organisms have often been disastrous from their point of view, and encounters between civilizations with advanced versus primitive technologies have gone badly for the less advanced.  A civilization reading one of our messages could be billions of years ahead of us.  If so, they will be vastly more powerful, and may not see us as any more valuable than we see bacteria."

Which, considering that the first traces the aliens will see of us are Leave It to Beaver and The Andy Griffith Show, is an understandable reaction.
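As for how far that expanding bubble of transmissions has actually gotten, here's a quick back-of-the-envelope sketch (assuming our earliest broadcasts date to around 1920; the exact start is fuzzy):

```python
# Back-of-the-envelope size of humanity's expanding radio "bubble."
# Assumes our earliest broadcasts date to roughly 1920; early signals were faint,
# so this is really an upper bound on how far anything detectable has gotten.

FIRST_BROADCAST_YEAR = 1920   # rough assumption
CURRENT_YEAR = 2018
GALAXY_DIAMETER_LY = 100_000  # rough figure for the Milky Way's disk

radius_ly = CURRENT_YEAR - FIRST_BROADCAST_YEAR  # radio waves cover one light-year per year
print(f"Bubble radius: about {radius_ly} light-years")
print(f"For scale, the galaxy's disk is about {GALAXY_DIAMETER_LY:,} light-years across,")
print(f"so the bubble spans roughly {radius_ly / GALAXY_DIAMETER_LY:.3%} of that width.")
```

So even if someone out there is listening, only a tiny neighborhood of nearby stars has had any chance to hear us so far.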

So there you have it.  If we did contact another civilization, it would be good news in one sense -- the Great Filter hasn't wiped out everyone but us -- but could be seriously bad news in another.  I guess stuff like this is always a mixed bag.

Me, I still would love to live long enough to see it happen.  If an alien spaceship landed in my back yard, man, I would be thrilled.  It'd suck if it turned out to be an invasion by Daleks or Cybermen or whatnot, but man, at least for the first three minutes, it would be a hell of a rush.

******************************************

This week's Skeptophilia book recommendation is from one of my favorite thinkers -- Irish science historian James Burke.  Burke has made several documentaries, including Connections, The Day the Universe Changed, and After the Warming -- the last-mentioned an absolutely prescient investigation into climate change that came out in 1991 and predicted damn near everything that would happen, climate-wise, in the twenty-seven years since then.

I'm going to go back to Burke's first really popular book, the one that was the genesis of the TV series of the same name -- Connections.  In this book, he looks at how one invention, one happenstance occurrence, one accidental discovery, leads to another, and finally results in something earthshattering.  (One of my favorites is how the technology of hand-weaving led to the invention of the computer.)  It's simply great fun to watch how Burke's mind works -- each of his little filigrees is only a few pages long, but you'll learn some fascinating ins and outs of history as he takes you on these journeys.  It's an absolutely delightful read.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]




Monday, April 9, 2018

Dodging the Great Filter

There's a cheery idea called "The Great Filter," have you heard of it?

The whole concept came up when considering the possibility of extraterrestrial intelligence, especially vis-à-vis the Fermi Paradox, which can be summed up as, "If intelligent aliens are common in the universe, where is everyone?"  Despite fifty-odd years of intensive searching, we have never found incontrovertible evidence that anyone is out there.  I maintain hope, however; the universe is a big, big place, and even the naysayers admit we've only surveyed the barest fraction of it.

The Arecibo Radio Telescope [image courtesy of photographer David Broad and the Wikimedia Commons]

"The Great Filter" is an attempt to parse why this may be, assuming it's not because alien civilizations are communicating with each other (and/or sending signals to us) using a technology we don't understand yet and can't detect.  You can think of the Great Filter as being a roadblock -- where, along the way, do circumstances prevent life forming on other planets, then achieving intelligence?

There are a few candidates for the Great Filter, to wit:
  • the abiotic synthesis of complex organic molecules.  This seems unlikely, as organic molecule synthesis appears to be easy, as long as there's no nasty chemical like molecular oxygen around to rip the molecules apart as fast as they form.  In an anoxic atmosphere -- such as the one the Earth almost certainly had four billion years ago -- organic molecules of all sorts can form with wild abandon.
  • assembly of those organic molecules into cells.  Again, this has been demonstrated in the lab to be easy.  Hydrophobic interactions make lipids (or other amphipathic molecules, ones with a polar end and a nonpolar end) form structures that look convincingly like cells with little more encouragement than occasional agitation.
  • the evolution of those cells into a complex life form.  Now we're on shakier ground; no one knows how common this may be.  Although natural selection seems to be universal, all this would do is cause the cells that are the best/most efficient at replicating themselves to become more common.  There's no particular reason that complex life forms would necessarily result from that process.  As eminent evolutionary biologist Richard Dawkins put it, "Evolution is the law of whatever works."
  • the development of intelligence.  Again, there's no reason to expect this to occur everywhere.  Intelligent life forms aren't even the most common living things on Earth -- far from it.  We are vastly outnumbered not only by insects but by microbes -- methanogens, a group of archaea that live in anaerobic sediment on the ocean floor, are thought to outnumber all other living organisms on Earth put together.
  • an intelligent species surviving long enough to stand a chance of sending an identifiable signal.  The idea that the Great Filter consists of intelligent life evolving and then proceeding to do something stupid and destroying itself has been nicknamed the "We're Fucked" model.  If all of the preceding hurdles turn out not to be serious issues -- and at least the first two seem that way -- then it could be that intelligence pops up all over the place, but only lasts a few decades before spontaneously combusting.
Most biologists think that if a Great Filter does exist, #5 is probably the best candidate.  There's nothing we know about biology that precludes any of the others; even if (for example) the evolution of intelligence is slow and arduous, given the size of the universe, there are probably millions of planets that host, or have hosted, intelligent life.

On the other hand, if they only host that life for a few years before it commits suicide en masse, it could explain why we're not getting a lot of "Hey, We're Here!" signals from the cosmos.

When people consider what could trigger an intelligent civilization to self-destruct, most think first of advanced weaponry.  It's like a planet-wide application of the principle of Chekhov's Gun (from the nineteenth-century Russian author Anton Chekhov): "If you say in the first chapter that there is a rifle hanging on the wall, in the second or third chapter it absolutely must go off.  If it's not going to be fired, it shouldn't be hanging there."  If we develop weapons of mass destruction, eventually we'll use them -- destroying ourselves in the process.

It reminds me of the Star Trek: The Next Generation episode "The Arsenal of Freedom," in which a civilization becomes the salespeople of increasingly advanced weapon systems -- until they develop one so powerful that once activated, it can't be stopped, and it proceeds to wipe out the people who made it.


Of course, there's another possibility (because one way of self-destructing isn't enough...).  This was just brought up by inventor and futurist Elon Musk, who last week declared that he wants us to put the brakes on artificial intelligence development.  Musk says that if we develop a true artificial intelligence, it will not only inevitably take over, it will eventually look at humanity as "in the way" -- and destroy us:
[I]f we’re building a road, and an anthill happens to be in the way, we destroy it.  We don’t hate ants, we’re just building a road.  So, goodbye, anthill.  
If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it.  No hard feelings...  By the time we are reactive in AI regulation, it’ll be too late.  Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.  It takes forever.  That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization...  
At least when there’s an evil dictator, that human is going to die.  But for an AI there would be no death.  It would live forever, and then you’d have an immortal dictator, from which we could never escape.
It's possible that we could fall prey not to our weapon systems, but to something few of us have considered dangerous -- an artificial intelligence of our own creation.  (Although you'd think that anyone who has watched either I, Robot or any of the Terminator movies would understand the risk.)

So do advanced civilizations inevitably develop AI systems that then turn on them?  It would certainly explain why we're not receiving greetings from the stars.  It's possible that the Great Filter lies ahead of us -- a prospect that I consider a little terrifying.

Anyhow, sorry for being a downer.  Between Musk's recent pronouncements and all of the idiotic things our leaders have been doing lately, the idea has been hard to get out of my head.  I guess if we can survive for the next few years, we might break through the suspicion and violence and parochialism that has characterized our species pretty much forever.  I'm going to try to remain optimistic -- as my dad used to say, "I'd rather be an optimist who is wrong than a pessimist who is right."

On the other hand, I think I'll end with a quote from theologian and Orthodox Rabbi Jonathan Sacks: "Science will explain how but not why. It talks about what is, not what ought to be.  Science is descriptive, not prescriptive; it can tell us about causes but it cannot tell us about purposes."

So maybe Elon Musk's adjuration to caution is well advised.