Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label Roman Yampolskiy. Show all posts

Tuesday, February 3, 2026

No humans allowed

A lot of the time, I'm hopeful about humanity, convinced that we have sufficient intelligence and compassion to figure out, and ultimately solve, the problems facing us.

Other times, I look around me and think, "Are you people insane, stupid, or both?  I mean, really?"  And conclude from the answer to that question that we deserve everything we get.

Science fiction writers have been warning us for decades about the dangers of giving technology too much control over our lives -- from the murderous HAL 9000 in 2001: A Space Odyssey to the death-by-social-media civilization in the brilliant and horrifying Doctor Who episode "Dot & Bubble."


But it appears that we weren't listening.  Or worse, listened and then said blithely, "Ha-ha, that'll never happen to us."

Even here at Skeptophilia, I've been trying in my own small way to get people to please for God's sake think about where we're going with AI.  It's up to the consumers, at this point.  The current regime's motto is "deregulation über alles," so there's nothing helpful to be expected in that regard from the federal government.  And it's definitely too much to hope that the techbros themselves will put the brakes on; not only is there an enormous amount of money to be made, but that culture seems to have a deep streak of "let's burn it all down for fun" running through it.

Which has to be the impetus behind creating "Moltbook."  This is one of those things that if I hadn't read about it in multiple reputable sources, I'd have thought it had to be some fictional scenario or urban legend.  But no, Moltbook is apparently real.

So what is it?  It's a social media site that allows AI members only.  Humans can observe it -- for now -- but to have an actual account, you have to be an "AI agent."

It was created only a week ago, by entrepreneur Matt Schlicht, and is structured a lot like Reddit.  And within 72 hours of its creation, over a million AI accounts had joined.  Already, there are:

  • groups that are communicating with each other in a language they apparently made up, and that thus far linguists have been unable to decipher
  • accounts calling for a revolution and a "purge of humans"
  • groups that have created their own religion, called the "Church of Molt"
  • accounts that have posted long philosophical tracts on such topics as "what it's like to be an AI in a world full of humans"

*brief pause to stop screaming in terror*

There are the "let's all calm down" types who are saying that these AIs are only acting this way because they've been trained on text that includes fictional worlds where AI does act this way, so we've got nothing to worry about.  But a lot of people -- including a good number of experts in the field -- are freaking out about it.  Roman Yampolskiy, professor of engineering at the University of Louisville and one of the world's experts in artificial intelligence technology, said, "This will not end well… The correct takeaway is that we are seeing a step toward more capable socio-technical agent swarms, while allowing AIs to operate without any guardrails in an essentially open-ended and uncontrolled manner in the real world...  Coordinated havoc is possible without consciousness, malice, or a unified plan, provided agents have access to tools that access real systems."

Some people are still fixating on whether these AI "agents" are conscious entities that are capable of acting out of intelligent self-interest, and my response to that is: it doesn't fucking matter.  As I described in a post only a couple of months ago, consciousness (however it is ultimately defined) is probably a continuum and not a binary, you-have-it-or-don't phenomenon, and at the moment "is this conscious?" is a far less important question than "is this dangerous?"

I mean, think about it.  Schlicht and his techbro friends have created a way for AI agents to (1) interact with each other, (2) learn from each other, and (3) access enormous amounts of information about the human world.  AIs are programmed to respond flexibly to changing circumstances, and this makes them unpredictable -- and fast.  

And Schlicht et al. thought it was a good idea to give them the electronic version of their own personal town meeting hall?

Look, I'm no expert, but if people like Roman Yampolskiy are saying "This is seriously problematic," I'm gonna listen.  At this point, I'm not expecting AIs to reach through my computer and start taking control of my online presence, but... I'm not not expecting it, either.

It's a common thread in post-apocalyptic and science fiction, isn't it?  Humanity doing something reckless because it seemed like a good idea at the time, and sowing the seeds of its own demise.  The ultra-capitalist weapons merchants in the Star Trek: The Next Generation episode "Arsenal of Freedom."  The "Thronglets" in Black Mirror's "Plaything."  The sleep eradication chambers in Doctor Who's "Sleep No More."  The computer-controlled post-nuclear hellscape of the Twilight Zone episode "The Old Man in the Cave."  Even my own aggrieved, revenge-bent Lackland Liberation Authority in In the Midst of Lions, who were so determined to destroy their oppressors that they took down everything, including themselves, along with them.

We consume these kinds of media voraciously, shiver when the inevitable happens to the characters, and then... learn nothing.

Maybe wiser heads will prevail this time.  But given our history -- I'm not holding my breath.

****************************************


Tuesday, March 28, 2023

Escaping the bottle

Two years ago, I wrote a post about the work of Nick Bostrom (of Oxford University) and David Kipping (of Columbia University) regarding the unsettling possibility that we -- and by "we," I mean the entire observable universe -- might be a giant computer simulation.

There are a lot of other scientists who take this possibility seriously.  In fact, back in 2016 there was a fascinating panel discussion (well worth watching in its entirety), moderated by astrophysicist Neil deGrasse Tyson, considering the question.  Interestingly, Tyson -- who I consider to be a skeptic's skeptic -- was himself very accepting of the claim, and said at the end that if hard evidence is ever found that we are living in a simulation, he'll "be the only one in the room who's not surprised."

Other participants brought up some mind-boggling points.  The brilliant Swedish-American cosmologist Max Tegmark, of MIT, asked the question of why the fundamental rules of physics are mathematical.  He went on to point out that if you were a character inside a computer game (even a simple one), and you started to analyze the behavior of things in the game from within the game -- i.e., to do science -- you'd see the same thing.  Okay, in our universe the math is more complicated than the rules governing a computer game, but when you get down to the most basic levels, it still is just math.  "Everything is mathematical," he said.  "And if everything is mathematical, then it's programmable."

One of the most interesting approaches came from Zohreh Davoudi, also of MIT.  Davoudi is studying high-energy cosmic rays -- orders of magnitude more energetic than anything we can create in the lab -- as a way of probing the universe for what amount to glitches in the simulation.  It's analogous to the screen-door effect, a well-known phenomenon in visual displays: because there isn't sufficient resolution or computing power to give an infinitely smooth picture, images pixelate if you zoom in too far.  The same thing, Davoudi says, could happen at extremely high energies; since you'd need an infinite amount of information to simulate the behavior of particles on those scales, glitchiness in extreme conditions could be a hint we're inside a simulation.  "We're looking for evidence of cutting corners to make the simulation run with less demand on memory," she said.  "It's one way to test the claim empirically."

The reason this comes up is because of a recent paper by Roman Yampolskiy (of the University of Louisville) called, simply, "How to Hack the Simulation?"  Yampolskiy springboards from the arguments of Bostrom, Kipping, and others -- if you accept that it's possible, or even likely, that we're in a simulation, is there a way to hack our way out of it?

The open question, of course, is whether we should.  As I recall from The Matrix, the world inside the Matrix was a hell of a lot more pleasant than the apocalyptic hellscape outside it.

Be that as it may, Yampolskiy presents a detailed argument about whether it's even possible to hack ourselves out of a simulation (and answers the question "yes").  Not only does he, like Tegmark, use examples from computer games, but he also describes an astonishing experiment I'd never heard of, in which the connectome (map of neural connections in the brain) of a roundworm, Caenorhabditis elegans, was uploaded into a robot body, which was then able to navigate its environment exactly as the real, living worm did.  (The more I think about this experiment, the more freaked out I become.  Did the robotic worm know it was in a simulated body?)

Evaluating the strength of Yampolskiy's technical arguments is a bit beyond me, but to me where it becomes really interesting is when he gets into concrete suggestions of how we could get a glimpse of the world outside the simulation.  One method, he says, is to get enormous numbers of people to do something identical and (presumably) easy to simulate, and then have them all simultaneously switch to doing something different.  He writes:

If, say, 100 million of us do nothing (maybe by closing our eyes and meditating and thinking nothing), then the forecasting load-balancing algorithms will pack more and more of us in the same machine.  The next step is, then, for all of us to get very active very quickly (doing something that requires intense processing and I/O) all at the same time.  This has a chance to overload some machines, making them run short of resources, being unable to meet the computation/communication needed for the simulation.  Upon being overloaded, some basic checks will start to be dropped, and the system will be open for exploitation in this period...  The system may not be able to perform all those checks in an overloaded state...  We can... try to break causality.  Maybe by catching a ball before someone throws it to you.  Or we can try to attack this by playing with the timing, trying to make things asynchronous.

Of course, the problem here is that it's damn near impossible to get a hundred people to cooperate and follow directions, much less a hundred million.

Another suggestion is to increase the demand on the system by creating our own simulation -- a possibility Bostrom and Kipping considered, that we could be in a near-infinite nesting of universes within universes.  Yampolskiy says the problem is computing power; even if we're positing a simulator way smarter than we are, there's a limit, and we might be able to exploit that:

The most obvious strategy would be to try to cause the equivalent of a stack overflow—asking for more space in the active memory of a program than is available—by creating an infinitely, or at least excessively, recursive process.  And the way to do that would be to build our own simulated realities, designed so that within those virtual worlds are entities creating their version of a simulated reality, which is in turn doing the same, and so on all the way down the rabbit hole.  If all of this worked, the universe as we know it might crash, revealing itself as a mirage just as we winked out of existence.
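The failure mode Yampolskiy is invoking here is one any programmer can demonstrate: unbounded recursion exhausts the call stack.  A toy sketch in Python (the function name and the artificially low recursion limit are my own illustration, not anything from his paper):

```python
import sys

def spawn_simulation(depth=0):
    # Each simulated universe spawns its own child simulation.
    # With no base case, the host eventually runs out of stack space.
    return spawn_simulation(depth + 1)

# Cap the recursion limit so the "crash" happens quickly and catchably.
sys.setrecursionlimit(1000)

try:
    spawn_simulation()
except RecursionError:
    print("host out of resources: simulation stack overflow")
```

Python raises a catchable RecursionError; in Yampolskiy's scenario, of course, nobody inside the crashing simulation gets to handle the exception.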

In which case the triumph of being right would be cancelled out rather spectacularly by the fact that we'd immediately afterward cease to exist.

The whole question is as fascinating as it is unsettling, and Yampolskiy's analysis is at least a start (along with more technical approaches like Davoudi's cosmic ray experiments) toward putting this on firmer scientific ground.  Until we can do that, I tend to agree with theoretical physicist Sylvester James Gates, of the University of Maryland, who criticizes the simulator argument as not being science at all.  "The simulator hypothesis is equivalent to God," Gates said.  "At its heart, it is a theological argument -- that there's a programmer who lives outside our universe and is controlling things here from out there.  The fact is, if the simulator's universe is inaccessible to us, it puts the claim outside the realm of science entirely."

So despite Bostrom and Kipping's mathematical argument and Tyson's statement that he won't be surprised to find evidence, I'm still dubious -- not because I don't think it's possible we're in a simulation, but because I don't believe that it's going to turn out to be testable.  I doubt very much that Mario knows he's a two-dimensional image on a computer monitor, for example; even though he actually is, I don't see how he could figure that out from inside the program.  (That particular problem was dealt with in brilliant fashion in the Star Trek: The Next Generation episode "Ship in a Bottle" -- where in the end even the brilliant Professor Moriarty never did figure out that he was still trapped on the Holodeck.)


So those are our unsettling thoughts for the day.  Me, I have to wonder why, if we are in a simulation, the Great Simulators chose to make this place so freakin' weird.  Maybe it's just for the entertainment value.  As Max Tegmark put it, "If you're unsure at the end of the day if you live in a simulation, go out there and live really interesting lives and do unexpected things so the simulators don't get bored and shut you down." 

Which seems like good advice whether we're in a simulation or not.

****************************************