
Friday, March 12, 2021

Worlds without end

Earlier this week, I dealt with the rather unsettling idea that when AI software capabilities improve just a little more, we may be able to simulate someone so effectively that their interactions with us will be nearly identical to the real thing.  At that point, we may have to redefine what death means -- if someone's physical body has died, but their personality lives on, emulated within a computer, are they really gone?

Well, according to a couple of recent papers, the rabbit hole may go a hell of a lot deeper than that.

Let's start with Russian self-styled "transhumanist" Alexey Turchin.  Turchin has suggested that in order to build a convincing simulated reality, we need not only much more sophisticated hardware and software but also a far larger energy source to run it than is currently available.  Emulating one person semi-convincingly, with an obviously fake animated avatar, doesn't take much; as we saw in my earlier post, we can more or less already do that.

But to emulate millions of people, so well that they really are indistinguishable from the people they're copied from, is a great deal harder.  Turchin proposes that one way to harvest that kind of energy is to create a "Dyson sphere" around the Sun, effectively capturing all of that valuable light and heat that otherwise is simply radiated into space.

Now, I must say that the whole Dyson sphere idea isn't what grabbed me about Turchin's paper, as wonderful as the concept is in science fiction (Star Trek aficionados will no doubt recall the TNG episode "Relics," in which the Enterprise almost got trapped inside one permanently).  The technological challenges of building a stable Dyson sphere seem to me nearly insurmountable.  What raised my eyebrows was his claim that once we've achieved a sufficient level of software and hardware sophistication -- wherever we get the energy to run it -- the beings (can you call them that?) within the simulation would proceed to interact with each other as if it were a real world.

And might not even know they were within a simulation.

"If a copy is sufficiently similar to its original to the extent that we are unable to distinguish one from the other," Turchin asks, "is the copy equal to the original?"

As if that's not bad enough, there's the even more unsettling idea that not only could we eventually emulate ourselves within a computer -- it may already have been done.

And we're it.

Work by Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University) has looked at the question from a statistical standpoint.  Way back in 2003, Bostrom framed the issue as a trilemma.  There are three possibilities, he says:
  • Intelligent species always go extinct before they become technologically capable of creating simulated realities that sophisticated.
  • Intelligent species don't necessarily go extinct, but even when they reach the state where they'd be technologically capable of it, none of them become interested in simulating realities.
  • Intelligent species eventually become able to simulate reality, and go ahead and do it.
Kipping recently extended Bostrom's analysis using Bayesian statistical techniques.  The details of the mathematics are a bit beyond my ken, but the gist of it is to consider what it would mean if choice #3 had even a small chance of being true.  Let's say some intelligent civilizations eventually become capable of creating simulations of reality.  Within those simulations, the denizens themselves evolve -- we're talking about AI that is capable of learning, here -- and some of them eventually become capable of running simulations of their own: a reality-within-a-reality.

Kipping calls such a universe "multiparous" -- meaning "giving birth to many."  Because as soon as this ball gets rolling, it will inevitably give rise to a nearly infinite number of nested universes.  Some of them will fall apart, or their sentient species will go extinct, just as (on a far simpler level) your character in a computer game can die and disappear from the "world" it lives in.  But as long as some of them survive, the recursive process continues indefinitely, generating an unlimited number of matryoshka-doll universes, one inside the other.
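
If you want a feel for how quickly the copies swamp the original, here's a toy calculation -- mine, not Kipping's actual Bayesian model, and the function name and numbers below are invented purely for illustration.  Suppose every universe that gets far enough runs just three simulations, and the nesting only goes six levels deep:

# A toy counting model (not Kipping's actual math): each universe that reaches
# technological maturity spawns the same number of simulated child universes,
# and the process repeats inside every child.
def count_universes(children_per_universe=3, generations=6):
    total = 0
    frontier = 1                      # start with the single "base" universe
    for _ in range(generations + 1):  # the base counts as generation zero
        total += frontier             # add this generation's universes
        frontier *= children_per_universe
    return total

total = count_universes()
print(f"universes after six generations: {total}")         # 1093
print(f"chance of being the base one:    {1 / total:.5f}") # ~0.00091

Even with those tame made-up numbers, the one base universe is a single member of a family of 1,093; turn either knob up and it vanishes into the crowd.  That, in miniature, is the force of Kipping's argument.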

[Image: Matryoshka dolls (3671820040) (2), by Stephen Edmonds from Melbourne, Australia, licensed under the Creative Commons CC BY-SA 2.0]

Then Kipping asks the question that blows my mind: if this is true, then what is the chance of our being in the one and only "base" (i.e. original) universe, as opposed to one of the uncounted trillions of copies?

Very close to zero.
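
To make the arithmetic explicit (this is my back-of-envelope framing, not Kipping's actual calculation): if there is exactly one base universe and N simulated ones, and a conscious observer has no way of telling which kind they inhabit, the chance of being in the base universe is 1/(N + 1).  With N in the trillions, that's on the order of one in a trillion.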

"If humans create a simulation with conscious beings inside it, such an event would change the chances that we previously assigned to the physical hypothesis," Kipping said.  "You can just exclude that [hypothesis] right off the bat.  Then you are only left with the simulation hypothesis.  The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations.  It’d be a very strange celebration of our genius that day."

The whole thing reminded me of a conversation near the end of my novel Sephirot, between the main character, Duncan Kyle, and the fascinating and enigmatic Sphinx:
"How much of what I experienced was real?" Duncan asked.

"This point really bothers you, doesn't it?"

"Of course. It's kind of critical, you know?"

"Why?" Her basso profundo voice dropped even lower, making his innards vibrate.  "Everyone else goes about their lives without worrying much about it."

"Even so, I'd like to know."

She considered for a moment.  "I could answer you, but I think you're asking the wrong question."

"What question should I be asking?"

"Well, if you're wondering whether what you're seeing is real or not, the first thing to establish is whether or not you are real.  Because if you're not real, then it rather makes everyone else's reality status a moot point, don't you think?"

He opened his mouth, stared at her for a moment, and then closed it again.

"Surely you have some kind of clever response meant to dismiss what I have said entirely," she said.  "You can't come this far, meeting me again after such a long journey, only to find out you've run out of words."

"I'm not sure what to say."

The Sphinx gave a snort, and a shower of rock dust floated down onto his head and shoulders.  "Well, say something.  I mean, I'm not going anywhere, but at some point you'll undoubtedly want to."

"Okay, let's start with this.  How can I not be real?  That question doesn't even make sense.  If I'm not real, then who is asking the question?"

"And you say you're not a philosopher," the Sphinx said, her voice shuddering a little with a deep laugh.

"No, but really.  Answer my question."

"I cannot answer it, because you don't really know what you're asking.  You looked into the mirrors of Da'at, and saw reflections of yourself, over and over, finally vanishing into the glass, yes?  Millions of Duncan Kyles, all looking this way and that, each one complete and whole and wearing the charming befuddled expression you excel at."

"Yes."

"Had you asked one of those reflections, 'Which is the real Duncan Kyle, and which the copies?' what do you think he would have said?"

"I see what you're saying.  But still… all of the reflections, even if they'd insisted that they were the real one, they'd have been wrong.  I'm the original, they're the copies."

"You're so sure?... A man who cannot prove that he isn't a reflection of a reflection, who doesn't know whether he is flesh and blood or a character in someone else's tale, sets himself up to determine what is real."  She chuckled.  "That's rich."
So yeah.  When I wrote that, I wasn't ready for it to be turned on me personally.

Anyhow, that's our unsettling science/philosophy for this morning.  Right now it's probably better to go along with Duncan's attitude of "I sure feel real to me," and get on with life.  But if perchance I am in a simulation, I'd like to appeal to whoever's running it to let me sleep better at night.

And allow me to add that the analysis by Bostrom and Kipping is not helping much.

****************************************

Last week's Skeptophilia book-of-the-week was about the ethical issues raised by gene modification; this week's is about the person who made CRISPR technology possible -- Nobel laureate Jennifer Doudna.

In The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race, author Walter Isaacson describes the discovery of how the bacterial enzyme complex called CRISPR-Cas9 can be used to edit genes of other species with pinpoint precision.  Doudna herself has been fascinated with scientific inquiry in general, and genetics in particular, since her father gave her a copy of The Double Helix and she was caught up in what Richard Feynman called "the joy of finding things out."  The story of how she and fellow laureate Emmanuelle Charpentier developed the technique that promises to revolutionize our ability to treat genetic disorders is a fascinating exploration of the drive to understand -- and a cautionary note about the responsibility of scientists to do their utmost to make certain their research is used ethically and responsibly.

If you like biographies, are interested in genetics, or both, check out The Code Breaker and find out how far we've come into the science-fiction world of curing genetic disease, altering DNA, and creating "designer children" -- and keep in mind that whatever happens, this is only the beginning.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


