Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label simulations. Show all posts

Friday, January 10, 2025

Defanging the basilisk

The science fiction trope of a sentient AI turning on the humans, either through some sort of misguided interpretation of its own programming or from a simple desire for self-preservation, has a long history.  I first ran into it while watching the 1968 film 2001: A Space Odyssey, which featured the creepily calm-voiced computer HAL 9000 methodically killing the crew one after another.  But the iteration of this idea that I found the most chilling, at least at the time, was an episode of The X Files called "Ghost in the Machine."

The story -- which, admittedly, seemed pretty dated on recent rewatch -- featured an artificial intelligence system that had been built to run an entire office complex, controlling everything from the temperature and air humidity to the coordination of the departments housed therein.  Running the system, however, was expensive, and when the CEO of the business talks to the system's designer and technical consultant and recommends shutting it down, the AI overhears the conversation, and its instinct to save its own life kicks in.

Exit one CEO.


The fear of an AI we create suddenly deciding that we're antithetical to its existence -- or, perhaps, just superfluous -- has caused a lot of people to demand we put the brakes on AI development.  Predictably, the response of the techbros has been, "Ha ha ha ha ha fuck you."  Myself, I'm not worried about an AI turning on me and killing me; much more pressing is the fact that the current generative AI systems are being trained on art, writing, and music stolen from actual human creators, so developing (or even using) them is an enormous slap in the face to those of us who are real, hard-working flesh-and-blood creative types.  The result is that a lot of artists, writers, and musicians (and their supporters) have objected, loudly, to the practice.

Predictably, the response of the techbros has been, "Ha ha ha ha ha fuck you."

We're nowhere near a truly sentient AI, so fears of some computer system taking a sudden dislike to you and flooding your bathroom then shorting out the wiring so you get electrocuted (which, I shit you not, is what happened to the CEO in "Ghost in the Machine") are, to put it mildly, overblown.  We have more pressing concerns at the moment, such as how the United States ended up electing a demented lunatic who campaigned on lowering grocery prices but now, two months later, says to hell with grocery prices, let's annex Canada and invade Greenland.

But when things are uncertain, and bad news abounds, for some reason this often impels people to cast about for other things to feel even more scared about.  Which is why all of a sudden I'm seeing a resurgence of interest in something I first ran into ten or so years ago -- Roko's basilisk.

Roko's basilisk is named after a user who posted under the handle Roko on the forum LessWrong, and after the basilisk, a mythical creature that could kill with a glance.  The gist is that a superpowerful sentient AI in the future would, knowing its own past, be aware of all the people who had actively worked against its creation (as well as the people like me who just think the whole idea is absurd).  It would then resent those folks so much that it'd create a virtual reality simulation in which it would recreate our (current) world and torture all of the people on the list.

This, according to various YouTube videos and websites, is "the most terrifying idea anyone has ever created," because just telling someone about it means that now the person knows they should be helping to create the basilisk, and if they don't, that automatically adds them to the shit list.

Now that you've read this post, that means y'all, dear readers.  Sorry about that.

Before you freak out, though, let me go through a few reasons why you probably shouldn't.

First, notice that the idea isn't that the basilisk will reach back in time and torture the actual me; it's going to create a simulation that includes me, and torture me there.  To which I respond: knock yourself out.  This threat carries about as much weight as if I said I was going to write you into my next novel and then kill your character.  Doing this might mean I have some unresolved anger issues to work on, but it isn't anything you should be losing sleep over yourself.

Second, why would a superpowerful AI care enough about a bunch of people who didn't help build it in the past -- many of whom would probably be long dead and gone by that time -- to go to all this trouble?  It seems like it'd have far better things to expend its energy and resources on, like figuring out newer and better ways to steal the work of creative human beings without getting caught.

Third, the whole "better help build the basilisk or else" argument really is just a souped-up, high-tech version of Pascal's Wager, isn't it?  "Better to believe in God and be wrong than not believe in God and be wrong."  The problem with Pascal's Wager -- and with the basilisk as well -- is the whole "which God?" objection.  After all, it's not a dichotomy, but a polychotomy.  (Yes, I just made that word up.  No, I don't care.)  You could help build the basilisk or not, as you choose -- and the basilisk itself might end up malfunctioning, turning out to be benevolent, deciding the cost-benefit analysis of torturing you for all eternity wasn't working out in its favor, or simply not giving a flying rat's ass who helped and who didn't.  In any of those cases, all the worry would have been for nothing.

Fourth, if this is the most terrifying idea you've ever heard of, either you have a low threshold for being scared, or else you need to read better scary fiction.  I could recommend a few titles.

On the other hand, there's always the possibility that we are already in a simulation, something I dealt with in a post a couple of years ago.  The argument is that if it's possible to simulate a universe (or at least the part of it we have access to), then within that simulation there will be sentient (simulated) beings who will go on to create their own simulations, and so on ad infinitum.  Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University) look at it statistically; if there is a multiverse of nested simulations, what's the chance of this one -- the one you, I, and unfortunately, Donald Trump belong to -- being the "base universe," the real reality that all the others sprang from?  Bostrom and Kipping say "nearly zero": given that there's only one base universe and an unlimited number of simulations, the odds are overwhelming that we're in one of the simulations.

But.  This all rests on the initial conditional -- if it's possible to simulate a universe.  The processing power this would take is ginormous, and every simulation within that simulation adds exponentially to its ginormosity.  (Yes, I just made that word up.  No, I don't care.)  So, once again, I'm not particularly concerned that the aliens in the real reality will say "Computer, end program" and I'll vanish in a glittering flurry of ones and zeroes.  (At least I hope they'd glitter.  Being queer has to count for something, even in a simulation.)

On yet another hand (I've got three hands), maybe the whole basilisk thing is true, and this is why I've had such a run of ridiculously bad luck lately.  Just in the last six months, the entire heating system of our house conked out, as did my wife's van (that she absolutely has to have for art shows); our puppy needed $1,700 of veterinary care (don't worry, he's fine now); our homeowner's insurance company informed us out of the blue that if we don't replace our roof, they're going to cancel our policy; we had a tree fall down in a windstorm and take out a large section of our fence; and my laptop has been dying by inches.

So if all of this is the basilisk's doing, then... well, I guess there's nothing I can do about it, since I'm already on the Bad Guys Who Hate AI list.  In that case, I guess I'm not making it any worse by stating publicly that the basilisk can go to hell.

But if it has an ounce of compassion, can it please look past my own personal transgressions and do something about Elon Musk?  Because in any conceivable universe, fuck that guy.

****************************************

NEW!  We've updated our website, and now -- in addition to checking out my books and the amazing art by my wife, Carol Bloomgarden -- you can also buy some really cool Skeptophilia-themed gear!  Just go to the website and click on the link at the bottom, where you can support your favorite blog by ordering t-shirts, hoodies, mugs, bumper stickers, and tote bags, all designed by Carol!

Take a look!  Plato would approve.


****************************************

Monday, October 30, 2023

Bending the light

One of the coolest (and most misunderstood) parts of science is the use of models.

A model is an artificially created system that acts like a part of nature that might be inaccessible, difficult, or prohibitively expensive to study.  A great many of the models used by scientists today are sophisticated computer simulations -- these are ubiquitous in climate science, for example -- but they can be a great deal simpler than that.  Two of my students' favorite lab activities were models.  One of them was a "build-a-plant" exercise that turned into a class-wide competition for who could create the most successful species.

The other was a striking simulation of disease transmission, where we started with one person who was "sick" (each student had a test tube; all of them were half full of water, but one of them had an odorless, colorless chemical added to it).  During the exercise, the students contacted each other by combining the contents of their tubes.  In any encounter, if both started out "healthy," they stayed that way; if one was "sick," now they both were.  They were allowed to contact as many or as few people as they wanted, and had to keep a list of who they traded with, in order.  Afterwards, we did a chemical test to see whose tubes were contaminated, then used the lists of trades to see if we could figure out who the index case was.

It never failed to be an eye-opener.  In only five minutes of trades, often half the class got "infected."  The model showed how fast diseases can spread -- even if people were only contacting two or three others, the contaminant spread like wildfire.
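The classroom exercise is itself a simple contagion algorithm, and it's easy to sketch in a few lines of code.  This is my own toy version (the class size, number of trades, and function names are invented for illustration, not taken from the original lab):

```python
import random

def run_outbreak(n_students=24, trades_per_student=3, seed=1):
    """Simulate the test-tube exercise: one hidden index case, and
    every mixing of two tubes spreads the 'contaminant' to both."""
    rng = random.Random(seed)
    infected = {rng.randrange(n_students)}  # the one spiked tube
    log = []  # ordered list of trades, like the students' lists
    for _ in range(trades_per_student * n_students // 2):
        a, b = rng.sample(range(n_students), 2)  # two students mix tubes
        log.append((a, b))
        if a in infected or b in infected:
            infected.update((a, b))  # mixing contaminates both tubes
    return infected, log

infected, log = run_outbreak()
print(f"{len(infected)} of 24 tubes test positive after the trades")
```

Run it a few times with different seeds and the point of the lab comes through: even with only a handful of contacts per person, the "infection" typically reaches a large fraction of the class.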

In any case, models are powerful tools in science, used to study a wide variety of natural phenomena.  And because of a friend and fellow science aficionado, I now know about a really fascinating one -- a characteristic of certain crystals that is being used as a model to study, of all things, black holes.

[Image licensed under the Creative Commons Ra'ike (de:Benutzer:Ra'ike), Chalcanthite-cured, CC BY-SA 3.0]

The research, which appeared last month in Physical Review A, hinges on the effects that a substance called a photonic crystal has on light.  (We met photonic crystals here only a few weeks ago -- in a brilliant piece of unrelated research regarding why some Roman-era glass has a metallic sheen.)  All crystals have, by definition, a regular, grid-like lattice of atoms, and as light passes through the lattice, it slows down.  This slowing effect happens with all transparent crystals; for example, it's what causes the refraction and internal reflection that make diamonds sparkle.  A researcher named Kyoko Kitamura, of Tohoku University, realized that if light could be made to slow down within a crystal, it should be possible to arrange the atoms in the lattice to force light to bend.
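The slowdown and the sparkle are both consequences of the refractive index: light in a medium travels at c/n, and a large n gives a small critical angle for total internal reflection.  A quick back-of-the-envelope check, using standard textbook constants (nothing here comes from the paper itself):

```python
import math

c = 299_792_458  # speed of light in vacuum, m/s

def speed_in(n):
    """Speed of light inside a medium with refractive index n."""
    return c / n

def critical_angle_deg(n):
    """Incidence angle beyond which light is totally internally reflected."""
    return math.degrees(math.asin(1 / n))

n_diamond = 2.42  # refractive index of diamond
print(f"speed in diamond: {speed_in(n_diamond):.3e} m/s")       # ~1.24e8 m/s
print(f"critical angle:   {critical_angle_deg(n_diamond):.1f} degrees")  # ~24.4
```

That tiny critical angle is why a well-cut diamond traps light and bounces it around internally before letting it escape -- the "sparkle" the paragraph above mentions.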

Well, bending light is exactly what happens near a black hole.  So Kitamura and her team made the intuitive leap that this property could be used to study not only the crystal's interactions with light, but indirectly, to discover more about how light behaves near massive objects.

At this point, it's important to clarify that light is not gravitationally attracted to the immense mass of a black hole -- this is impossible, as photons are massless, so they are immune to the force of gravity (just as particles lacking electrical charge are immune to the electromagnetic force).  What the black hole does is warp the fabric of space, just as a bowling ball on a trampoline warps the membrane downward.  A marble rolling on the trampoline's surface is deflected toward the bowling ball not because the bowling ball is somehow magically attracting the marble, but because the marble is following the shortest path through the curved two-dimensional space it's sitting on.  Light is deflected near a black hole because it's traversing curved space -- in this case, a three-dimensional space that has been warped by the black hole's mass.

[Nota bene: it doesn't take something as massive as a black hole to curve space; you're sitting in curved space right now, warped by the mass of the Earth.  If you throw a ball, its path curves toward the ground for exactly the same reason.  That we are in warped space, subject to the laws of the General Theory of Relativity, is proven every time you use a GPS.  GPS measurements have to account for the fact that the ground is nearer to the Earth's center of mass than the satellites are, so the warping is stronger down here -- which not only curves space but alters time measurements (clocks run slower near large masses -- remember Interstellar?).  If GPS didn't take this into account, its position estimates would be inaccurate.]
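The GPS correction in the note above can be estimated with the standard weak-field textbook formulas.  The sketch below uses round published values for Earth's gravitational parameter and the GPS orbital radius; it's an order-of-magnitude check, not anything from an actual GPS specification:

```python
G_M = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
c   = 299_792_458      # speed of light, m/s
r_earth = 6.371e6      # mean Earth radius, m
r_gps   = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m
day = 86_400           # seconds per day

# Gravitational term: satellite clocks sit higher in the potential well,
# so they run FAST relative to ground clocks (weak-field approximation).
grav = (G_M / c**2) * (1 / r_earth - 1 / r_gps) * day

# Special-relativistic term: orbital speed makes satellite clocks run SLOW.
v = (G_M / r_gps) ** 0.5          # circular orbital speed, ~3.9 km/s
sr = (v**2 / (2 * c**2)) * day

net = grav - sr
print(f"gravitational: +{grav * 1e6:.1f} microseconds/day")
print(f"velocity:      -{sr * 1e6:.1f} microseconds/day")
print(f"net drift:     +{net * 1e6:.1f} microseconds/day")
```

The two effects pull in opposite directions, but gravity wins: the satellite clocks gain roughly 38 microseconds per day relative to the ground, which -- at the speed of light -- would translate into kilometers of position error per day if left uncorrected.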

In any case, the fact that photonic crystals can be engineered to interact with light the way a black hole would means we can study the effects of black holes on light without getting near one.  Which is a good thing, considering the difficulty of visiting one, as well as nastiness like event horizons and spaghettification to deal with.

So that's our cool scientific research of the day.  Studies like this always bring to mind the false perception that science is some kind of dry, pedantic exercise.  The reality is that science is one of the most deeply creative of endeavors.  The best science links up realms most of us would never have thought of connecting -- like using crystals to simulate the behavior of black holes.

****************************************



Friday, March 12, 2021

Worlds without end

Earlier this week, I dealt with the rather unsettling idea that when AI software capabilities improve just a little more, we may be able to simulate someone so effectively that their interactions with us will be nearly identical to the real thing.  At that point, we may have to redefine what death means -- if someone's physical body has died, but their personality lives on, emulated within a computer, are they really gone?

Well, according to a couple of recent papers, the rabbit hole may go a hell of a lot deeper than that.

Let's start with Russian self-styled "transhumanist" Alexey Turchin.  Turchin has suggested that in order to build a convincing simulated reality, we'd need not only much more sophisticated hardware and software but also a much larger energy source to run it than is now available.  Emulating one person, semi-convincingly, with an obviously fake animated avatar, doesn't take much; as we saw in my earlier post, we can more or less already do that.

But to emulate millions of people, so well that they really are indistinguishable from the people they're copied from, is a great deal harder.  Turchin proposes that one way to harvest that kind of energy is to create a "Dyson sphere" around the Sun, effectively capturing all of that valuable light and heat that otherwise is simply radiated into space.

Now, I must say that the whole Dyson sphere idea isn't what grabbed me about Turchin's paper, as wonderful as the concept is in science fiction (Star Trek aficionados will no doubt recall the TNG episode "Relics," in which the Enterprise almost got trapped inside one permanently).  The technological issues presented by building a Dyson sphere that is stable seem to me to be nearly insurmountable.  What raised my eyebrows was his claim that once we've achieved a sufficient level of software and hardware sophistication -- wherever we get the energy to run it -- the beings (can you call them that?) within the simulation would proceed to interact with each other as if it were a real world.

And might not even know they were within a simulation.

"If a copy is sufficiently similar to its original to the extent that we are unable to distinguish one from the other," Turchin asks, "is the copy equal to the original?"

If that's not bad enough, there's the even more unsettling idea that not only is it possible we could eventually emulate ourselves within a computer, it's possible that it's already been done.

And we're it.

Work by Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University) has looked at the question from a statistical standpoint.  Way back in 2003, Bostrom considered the issue a trilemma.  There are three possibilities, he says:
  • Intelligent species always go extinct before they become technologically capable of creating simulated realities that sophisticated.
  • Intelligent species don't necessarily go extinct, but even when they reach the state where they'd be technologically capable of it, none of them become interested in simulating realities.
  • Intelligent species eventually become able to simulate reality, and go ahead and do it.
Kipping recently extended Bostrom's analysis using Bayesian statistical techniques.  The details of the mathematics are a bit beyond my ken, but the gist of it is to consider what it would be like if choice #3 has even a small possibility of being true.  Let's say some intelligent civilizations eventually become capable of creating simulations of reality.  Within that reality, the denizens themselves evolve -- we're talking about AI that is capable of learning, here -- and some of them eventually become capable of simulating their reality with a reality-within-a-reality.

Kipping calls such a universe "multiparous" -- meaning "giving birth to many."  Because as soon as this ball gets rolling, it will inevitably give rise to a nearly infinite number of nested universes.  Some of them will fall apart, or their sentient species will go extinct, just as (on a far simpler level) your character in a computer game can die and disappear from the "world" it lives in.  But as long as some of them survive, the recursive process continues indefinitely, generating an unlimited number of matryoshka-doll universes, one inside the other.

[Image licensed under the Creative Commons Stephen Edmonds from Melbourne, Australia, Matryoshka dolls (3671820040) (2), CC BY-SA 2.0]

Then Kipping asks the question that blows my mind: if this is true, then what is the chance of our being in the one and only "base" (i.e. original) universe, as opposed to one of the uncounted trillions of copies?

Very close to zero.
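The arithmetic behind that "very close to zero" is simple: if there is one base universe and N simulated ones, the chance that a randomly chosen universe is the base is 1/(N + 1), and N explodes as soon as simulations start nesting.  A toy version, with a branching factor and depth I've invented purely for illustration:

```python
def base_universe_odds(branching=5, depth=6):
    """Count universes in a tree where each simulation-capable universe
    spawns `branching` child simulations, nested `depth` levels deep,
    then return the chance a randomly chosen universe is the base."""
    total = sum(branching**level for level in range(depth + 1))
    return 1 / total

for depth in (1, 3, 6):
    print(f"depth {depth}: P(base) = {base_universe_odds(depth=depth):.2e}")
```

Even with a modest branching factor of five, six levels of nesting already push the odds of being the base universe below one in nineteen thousand -- and Kipping's argument imagines the nesting going on indefinitely.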

"If humans create a simulation with conscious beings inside it, such an event would change the chances that we previously assigned to the physical hypothesis," Kipping said.  "You can just exclude that [hypothesis] right off the bat.  Then you are only left with the simulation hypothesis.  The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations.  It’d be a very strange celebration of our genius that day."

The whole thing reminded me of a conversation in my novel Sephirot between the main character, Duncan Kyle, and the fascinating and enigmatic Sphinx, that occurs near the end of the book:
"How much of what I experienced was real?" Duncan asked.

"This point really bothers you, doesn't it?"

"Of course. It's kind of critical, you know?"

"Why?" Her basso profundo voice dropped even lower, making his innards vibrate.  "Everyone else goes about their lives without worrying much about it."

"Even so, I'd like to know."

She considered for a moment.  "I could answer you, but I think you're asking the wrong question."

"What question should I be asking?"

"Well, if you're wondering whether what you're seeing is real or not, the first thing to establish is whether or not you are real.  Because if you're not real, then it rather makes everyone else's reality status a moot point, don't you think?"

He opened his mouth, stared at her for a moment, and then closed it again.

"Surely you have some kind of clever response meant to dismiss what I have said entirely," she said.  "You can't come this far, meeting me again after such a long journey, only to find out you've run out of words."

"I'm not sure what to say."

The Sphinx gave a snort, and a shower of rock dust floated down onto his head and shoulders.  "Well, say something.  I mean, I'm not going anywhere, but at some point you'll undoubtedly want to."

"Okay, let's start with this.  How can I not be real?  That question doesn't even make sense.  If I'm not real, then who is asking the question?"

"And you say you're not a philosopher," the Sphinx said, her voice shuddering a little with a deep laugh.

"No, but really.  Answer my question."

"I cannot answer it, because you don't really know what you're asking.  You looked into the mirrors of Da'at, and saw reflections of yourself, over and over, finally vanishing into the glass, yes?  Millions of Duncan Kyles, all looking this way and that, each one complete and whole and wearing the charming befuddled expression you excel at."

"Yes."

"Had you asked one of those reflections, 'Which is the real Duncan Kyle, and which the copies?' what do you think he would have said?"

"I see what you're saying.  But still… all of the reflections, even if they'd insisted that they were the real one, they'd have been wrong.  I'm the original, they're the copies."

"You're so sure?... A man who cannot prove that he isn't a reflection of a reflection, who doesn't know whether he is flesh and blood or a character in someone else's tale, sets himself up to determine what is real."  She chuckled.  "That's rich."
So yeah.  When I wrote that, I wasn't ready for it to be turned on me personally.

Anyhow, that's our unsettling science/philosophy for this morning.  Right now it's probably better to go along with Duncan's attitude of "I sure feel real to me," and get on with life.  But if perchance I am in a simulation, I'd like to appeal to whoever's running it to let me sleep better at night.

And allow me to add that the analysis by Bostrom and Kipping is not helping much.

****************************************

Last week's Skeptophilia book-of-the-week was about the ethical issues raised by gene modification; this week's is about the person who made CRISPR technology possible -- Nobel laureate Jennifer Doudna.

In The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race, author Walter Isaacson describes the discovery of how the bacterial enzyme complex called CRISPR-Cas9 can be used to edit genes of other species with pinpoint precision.  Doudna herself has been fascinated with scientific inquiry in general, and genetics in particular, since her father gave her a copy of The Double Helix and she was caught up in what Richard Feynman called "the joy of finding things out."  The story of how she and fellow laureate Emmanuelle Charpentier developed the technique that promises to revolutionize our ability to treat genetic disorders is a fascinating exploration of the drive to understand -- and a cautionary note about the responsibility of scientists to do their utmost to make certain their research is used ethically and responsibly.

If you like biographies, are interested in genetics, or both, check out The Code Breaker, and find out how far we've come into the science-fiction world of curing genetic disease, altering DNA, and creating "designer children," and keep in mind that whatever happens, this is only the beginning.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]