In the episode of Star Trek: The Next Generation called "Phantasms," the android Commander Data continues to pursue his lifelong dream of experiencing what it's like to be human by creating a "dream program" -- a piece of software that activates when he sleeps, allowing him to go into a dreamlike state. The whole thing goes seriously off the rails when he starts having bizarre nightmares, and then waking hallucinations that spur him to attack the ship's counselor Deanna Troi, an action that leaves him relieved of duty and confined to his quarters.
Of course, this being Star Trek, the explanation turns out to involve aliens, but the more interesting aspect of the story to me is the question of what an artificial intelligence would dream about. We've yet to figure out exactly why dreaming is so important to our mental health, but it clearly is (this was the subject of what might be the single creepiest TNG episode ever, "Night Terrors"). Without REM sleep and the dreams that occur during it, we become paranoid, neurotic, and eventually completely non-functional; ultimately we start hallucinating, as if the lack of dreams while we're asleep makes them spill over into our waking hours.
Given that the question of why exactly we dream isn't satisfactorily solved, it's going even further out onto a limb to ask what a different intelligence (artificial or otherwise) would dream about, or even whether it would need to dream at all. Our own dreams have a few very common themes; just about all of us have dreams of being chased, of being embarrassed, of stressful situations (like the "teaching anxiety" dreams I used to have, usually involving my standing in my classroom while my students misbehaved no matter what I did to stop them). I still get anxiety dreams about being in a math class in college (it's always math, for some reason), and showing up to find I have an exam I haven't studied for. In some versions, I haven't even attended class for weeks, and have no idea what's going on.
Grieving or trauma can induce dreams; we often dream about loved ones we've lost or terrifying situations we've been in. Most of us have erotic dreams, sometimes acting out situations we'd never dream of participating in while awake.
So although the content of dreams is pretty universal, and in fact shares a lot with the visions induced by psychedelic drugs, why we dream is still unknown. It was with considerable curiosity, then, that I read a paper that showed up in the journal Neuroscience of Consciousness this month called "Neural Network Models for DMT-induced Visual Hallucinations," by Michael Schartner (Université de Genève) and Christopher Timmermann (University College London), who took an artificial neural network, fed it input mimicking the kind of endogenous (self-generated) visual signals that occur during a hallucination, and watched what happened.
The authors write:
Using two deep convolutional network architectures, we pointed out the potential to generate changes in natural images that are in line with subjective reports of DMT-induced hallucinations. Unlike human paintings of psychedelic hallucinations—the traditional way to illustrate psychedelic imagery—using well-defined deep network architectures allows to draw parallels to brain mechanisms, in particular with respect to a perturbed balance between sensory information and prior information, mediated by the serotonergic system.
In our first model, NVIDIA’s generative model StyleGAN, we show how perturbation of the noise input can lead to image distortions reminiscent of verbal reports from controlled experiments in which DMT has been administered. In particular, the omission of noise leads to a smoother, painterly look of the images, illustrating a potential hypothesis that can be conceptualized with such models: as a 5-HT2A receptor agonist, DMT induces a state in which environmental (i.e. exogenous) sensory information is partially blocked—gated by the inserted noise—and system-internal (endogenous) signals are influencing conscious imagery more strongly. Contents of immersive imagery experienced in eyes-closed conditions during DMT administration would thereby correspond to the system’s prior information for the construction of a consciously perceived scene.
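The balance the authors describe -- exogenous sensory input partially gated out, endogenous priors weighing more heavily on what's consciously perceived -- can be caricatured in a few lines of code. To be clear, this is my own conceptual sketch, not the paper's actual model (which uses StyleGAN's learned noise inputs); every name and number below is purely illustrative.

```python
import numpy as np

def perceived_scene(exogenous, endogenous, sensory_gain):
    """Toy model of the exogenous/endogenous balance.

    exogenous: array standing in for incoming sensory data
    endogenous: array standing in for the system's prior ("top-down") signal
    sensory_gain: 1.0 mimics normal waking perception; lower values mimic
    the partial sensory gating attributed to 5-HT2A receptor agonism.
    """
    return sensory_gain * exogenous + (1.0 - sensory_gain) * endogenous

rng = np.random.default_rng(0)
sensory = rng.normal(size=(8, 8))   # stand-in for retinal/camera input
prior = np.full((8, 8), 0.5)        # stand-in for learned expectations

normal = perceived_scene(sensory, prior, sensory_gain=1.0)
hallucinating = perceived_scene(sensory, prior, sensory_gain=0.1)

# As the sensory gain drops, the "perceived" scene is dominated by the prior:
print(np.abs(normal - prior).mean() > np.abs(hallucinating - prior).mean())  # True
```

In this cartoon version, turning down `sensory_gain` plays the role DMT is hypothesized to play: the output drifts away from what the senses report and toward what the system already expects to see.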
If you're ready for some nightmares yourself, here's one of their images of the output from introducing psychedelic-like noise into the input of a face-generating neural network:
For more disturbing images that come out of giving AI hallucinogens, and a more in-depth explanation of the research than I'm giving here (or am even capable of giving), I direct you to the paper itself, which is fascinating. The study offers a new lens on the question of our own consciousness -- whether it's an illusion generated by our brain chemistry, or whether there really is something more there (a soul, spirit, mind, whatever you might want to call it) that is in some sense independent of the neural underpinning. The authors write:
Research on image encoding in IT suggests that ‘the computational mission of IT face patches is to generate a robust, efficient, and invariant code for faces, which can then be read-out for any behavioural/cognitive purpose downstream’ (Kornblith and Tsao 2017). The latent information entering the NVIDIA generative model may thus be interpreted as activity in IT and the output image as the consciously perceived scene, constructed during the read-out by other cortical areas. How this read-out creates an experience is at the heart of the mind-body problem and we suggest that modelling the effects of DMT on the balance between exogenous and endogenous information may provide experimentally testable hypotheses about this central question of consciousness science.

All of this points out something I've said many times here at Skeptophilia: that we are only beginning to understand how our own brains work. To quote my friend and mentor, Dr. Rita Calvo, Professor Emeritus of Human Genetics at Cornell University: with respect to brain science, we're about where we were with respect to genetics in 1921 -- we know a little bit about some of the effects, and a little bit about where things happen, but we have almost no understanding of the mechanisms driving the whole thing. With research like Schartner and Timmermann's recent paper, though, we're finally getting a glimpse of the inner workings of that mysterious organ between your ears, the one that's allowing you to read and understand this blog post right now.
I'm always amazed by the resilience we humans can sometimes show. Knocked down again and again, in circumstances that "adverse" doesn't even begin to describe, we rise above and move beyond, sometimes accomplishing great things despite catastrophic setbacks.
In Why Fish Don't Exist: A Story of Love, Loss, and the Hidden Order of Life, journalist Lulu Miller looks at the life of David Starr Jordan, a taxonomist whose fascination with aquatic life led him to the discovery of a fifth of the species of fish known in his day. But to say the man had bad luck is a ridiculous understatement. He lost his collections, drawings, and notes repeatedly, first to lightning, then to fire, and finally and catastrophically to the 1906 San Francisco Earthquake, which shattered just about every specimen bottle he had.
But Jordan refused to give up. After the earthquake he set about rebuilding one more time; the man who had been the founding president of Stanford University went on living and working until his death in 1931 at the age of eighty. Miller's biography of Jordan looks at his scientific achievements and incredible tenacity -- but doesn't shy away from his darker side as an early proponent of eugenics, and the allegations that he might have been complicit in the coverup of a murder.
She paints a picture of a complex, fascinating man, and her vivid writing style brings him and the world he lived in to life. If you are looking for a wonderful biography, give Why Fish Don't Exist a read. You won't be able to put it down.
[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]