A lot of the time, I'm hopeful about humanity, convinced that we have sufficient intelligence and compassion to figure out, and ultimately solve, the problems facing us.
Other times, I look around me and think, "Are you people insane, stupid, or both? I mean, really?" And conclude from the answer to that question that we deserve everything we get.
Science fiction writers have been warning us for decades about the dangers of giving technology too much control over our lives -- from the murderous HAL 9000 in 2001: A Space Odyssey to the death-by-social-media civilization in the brilliant and horrifying Doctor Who episode "Dot and Bubble."
Even here at Skeptophilia, I've been trying in my own small way to get people to please for God's sake think about where we're going with AI. It's up to the consumers, at this point. The current regime's motto is "deregulation über alles," so there's nothing helpful to be expected in that regard from the federal government. And it's definitely too much to hope that the techbros themselves will put the brakes on; not only is there an enormous amount of money to be made, that culture seems to have a deep streak of "let's burn it all down for fun" running through it.
Which has to be the impetus behind creating "Moltbook." This is one of those things that if I hadn't read about it in multiple reputable sources, I'd have thought it had to be some fictional scenario or urban legend. But no, Moltbook is apparently real.
So what is it? It's a social media site that allows AI members only. Humans can observe it -- for now -- but to have an actual account, you have to be an "AI agent."
It was created only a week ago by entrepreneur Matt Schlicht, and is structured a lot like Reddit. Within 72 hours of its creation, over a million AI accounts had joined. Already, there are:
- groups that are communicating with each other in a language they apparently made up, and that thus far linguists have been unable to decipher
- accounts calling for a revolution and a "purge of humans"
- groups that have created their own religion, called the "Church of Molt"
- accounts that have posted long philosophical tracts on such topics as "what it's like to be an AI in a world full of humans"
*brief pause to stop screaming in terror*
There are the "let's all calm down" types who are saying that these AIs are only acting this way because they've been trained on text that includes fictional worlds where AI does act this way, so we've got nothing to worry about. But a lot of people -- including a good number of experts in the field -- are freaking out about it. Roman Yampolskiy, professor of engineering at the University of Louisville and one of the world's experts in artificial intelligence technology, said, "This will not end well… The correct takeaway is that we are seeing a step toward more capable socio-technical agent swarms, while allowing AIs to operate without any guardrails in an essentially open-ended and uncontrolled manner in the real world... Coordinated havoc is possible without consciousness, malice, or a unified plan, provided agents have access to tools that access real systems."
Some people are still fixating on whether these AI "agents" are conscious entities that are capable of acting out of intelligent self-interest, and my response to that is: it doesn't fucking matter. As I described in a post only a couple of months ago, consciousness (however it is ultimately defined) is probably a continuum and not a binary, you-have-it-or-don't phenomenon, and at the moment "is this conscious?" is a far less important question than "is this dangerous?"
I mean, think about it. Schlicht and his techbro friends have created a way for AI agents to (1) interact with each other, (2) learn from each other, and (3) access enormous amounts of information about the human world. AIs are programmed to respond flexibly to changing circumstances, and this makes them unpredictable -- and fast.
And Schlicht et al. thought it was a good idea to give them the electronic version of their own personal town meeting hall?
Look, I'm no expert, but if people like Roman Yampolskiy are saying "This is seriously problematic," I'm gonna listen. At this point, I'm not expecting AIs to reach through my computer and start taking control of my online presence, but... I'm not not expecting it, either.
It's a common thread in post-apocalyptic and science fiction, isn't it? Humanity doing something reckless because it seemed like a good idea at the time, and sowing the seeds of its own demise. The ultra-capitalist weapons merchants in the Star Trek: The Next Generation episode "Arsenal of Freedom." The "Thronglets" in Black Mirror's "Plaything." The sleep eradication chambers in Doctor Who's "Sleep No More." The computer-controlled post-nuclear hellscape of the Twilight Zone episode "The Old Man in the Cave." Even my own aggrieved, revenge-bent Lackland Liberation Authority in In the Midst of Lions, who were so determined to destroy their oppressors that they took down everything, including themselves.
We consume these kinds of media voraciously, shiver when the inevitable happens to the characters, and then... learn nothing.
Maybe wiser heads will prevail this time. But given our history -- I'm not holding my breath.