Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, March 24, 2021

The emergent mind

One of the arguments I've heard the most often in discussions of the possibility of developing a true artificial intelligence is that computers are completely mechanistic.  I've heard this framed as, "You can only get out of them what you put into them."  In other words, you could potentially program a machine to simulate intelligence, perhaps even simulate it convincingly.  But there's nothing really in there -- it's just an input/output device no more intelligent than a pocket calculator, albeit a highly sophisticated one.

My question at this juncture is usually, "How are our brains any different?"  Our neurons act on electrical voltage shifts; they fire (or not) based upon the movement of sodium and potassium ions, modulated by a complex group of chemicals called neurotransmitters that alter the neuron's ability to move those ions around.  That our minds are a construct of this elaborate biochemistry is supported by the fact that if you introduce substances that alter the concentrations or reactivity of the neurotransmitters -- better known as "psychoactive drugs" -- it can radically alter perception, emotion, personality, and behavior.

But there's the nagging feeling, even amongst those of us who are diehard materialists, that there's something more in there, an ineffable ghost in the machine that is somehow independent of the biological underpinnings.  Would a sufficiently complex electronic brain have this perception of self?  Could an artificial intelligence eventually be capable of insight, of generating something more than the purely mechanical, rule-driven output we usually associate with computers?  Of -- in other words -- creativity?

Or will that always be in the realm of science fiction?

If you doubt an artificial intelligence could ever have insight or creativity, some research out of a collaboration between Tsinghua University and the University of California - San Diego may make you want to reconsider your stance.

Ce Wang and Hui Zhai (Tsinghua) and Yi-Zhuang You (UC-San Diego) have created an artificial neural network that is able to look at raw data and figure out the equations that govern the underlying reality.  In other words, it does what scientists do -- finds a mathematical model that accounts for observations.  And we're not talking about something simple like F = ma here; the Wang et al. neural network was given simulated measurement data for a quantum particle -- essentially, where the particle is likely to be found in a given potential -- and was able to develop...

... the Schrödinger Wave Equation.

To put this in perspective, the first data that gave us humans insight into the quantum-mechanical nature of subatomic particles -- Max Planck's studies of thermal radiation in 1900 -- led to the highly non-intuitive notion that light is emitted in discrete packets of energy, each proportional to the light's frequency, with the constant of proportionality now known as Planck's constant.  From there, further work on particle momenta and positions by such luminaries as Albert Einstein, Louis de Broglie, and Werner Heisenberg led to the discovery of the weird wave/particle duality (subatomic particles are, in some sense, a wave and a particle simultaneously, and which properties you see depend on which you look for).  Finally, Erwin Schrödinger put the whole thing together in the fundamental law of quantum mechanics, now called the Schrödinger Wave Equation in his honor.
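
In modern notation, the allowed energies of light at a given frequency ν come only in whole-number multiples of a single quantum:

$$ E_n = n\,h\,\nu, \qquad n = 1, 2, 3, \ldots $$

where h is Planck's constant -- a tiny number (about 6.6 × 10⁻³⁴ joule-seconds), which is why the steps are invisible in everyday life.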

But it took twenty-five years.

For those of you who aren't physics types, here's the equation we're talking about:
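
$$ i\hbar\,\frac{\partial \Psi(x,t)}{\partial t} \;=\; -\frac{\hbar^2}{2m}\,\frac{\partial^2 \Psi(x,t)}{\partial x^2} \;+\; V(x)\,\Psi(x,t) $$

(That's one standard way of writing it -- the time-dependent equation for a single particle of mass m moving in one dimension through a potential V(x).)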

And to make you feel better, I majored in physics and I can't really say I understand it, either.

Here's how Wang et al. describe their neural network's accomplishment:

Can physical concepts and laws emerge in a neural network as it learns to predict the observation data of physical systems?  As a benchmark and a proof-of-principle study of this possibility, here we show an introspective learning architecture that can automatically develop the concept of the quantum wave function and discover the Schrödinger equation from simulated experimental data of the potential-to-density mappings of a quantum particle.  This introspective learning architecture contains a machine translator to perform the potential to density mapping, and a knowledge distiller auto-encoder to extract the essential information and its update law from the hidden states of the translator, which turns out to be the quantum wave function and the Schrödinger equation.  We envision that our introspective learning architecture can enable machine learning to discover new physics in the future.
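
To unpack the jargon a little: the setup has two cooperating pieces.  One network (the "machine translator") learns to map an input potential to the particle's measured density; a second, deliberately bottlenecked network (the "knowledge distiller") then compresses the first network's internal states, to find out what minimal quantities it is actually keeping track of.  Here's a toy sketch of that two-part structure in PyTorch -- the layer types, sizes, and loss are my own guesses for illustration, not the authors' actual code:

```python
# Toy sketch only -- architecture details here are assumptions, not the authors' code.
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Scans the potential V(x) across a spatial grid and predicts the density rho(x)."""
    def __init__(self, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, V):                  # V: (batch, n_grid, 1)
        h, _ = self.rnn(V)                 # hidden state at every grid point
        return self.readout(h), h          # predicted density, plus the hidden states

class Distiller(nn.Module):
    """Auto-encoder that squeezes each hidden state down to a few latent variables."""
    def __init__(self, hidden_dim=32, latent_dim=2):
        super().__init__()
        self.encode = nn.Linear(hidden_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, hidden_dim)

    def forward(self, h):
        z = self.encode(h)                 # the "distilled" low-dimensional description
        return self.decode(z), z

translator, distiller = Translator(), Distiller()
V = torch.randn(8, 64, 1)                  # eight made-up potentials on a 64-point grid
rho_pred, h = translator(V)
h_recon, z = distiller(h)
loss = nn.functional.mse_loss(h_recon, h)  # in practice, add a density-prediction loss too
print(rho_pred.shape, z.shape)             # torch.Size([8, 64, 1]) torch.Size([8, 64, 2])
```

The punch line, per the abstract, is what shows up in those few latent variables after training: the quantum wave function, with an update law that is the Schrödinger equation.  The network wasn't told any of that in advance; it's what the compression settled on.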

I read this with my jaw hanging open.  I think I even said "holy shit" a couple of times.  Because they're not stopping with the network recreating science we already know; they're talking about having it find new science that we currently don't understand fully -- or perhaps, that we know nothing about. 

It's hard to imagine calling something that can do this anything other than a true intelligence.  Yes, it's limited -- a neural network that discovers new physics can't write a poem or create a piece of art or hold a conversation -- but as those hurdles are cleared one by one, it's not hard to envision combining the pieces into a single system not so far off from the AI brains envisioned by science fiction.

As exciting as it is, this also makes me a little nervous.  Deep thinkers such as Stephen Hawking, Nick Bostrom, Marvin Minsky, and Roman Yampolskiy have all urged caution in the development of AI, suggesting that the leap from artificial neural networks being beneath human intelligence levels to being far, far beyond them could happen suddenly.  When an artificial intelligence gains the ability to modify its own source code to improve its own functionality -- or, perhaps, to engage in such human-associated behaviors as self-preservation -- we could be in serious trouble.  (The Wikipedia page on the existential risk from artificial general intelligence gives a great overview of the current thought about this issue, if you're interested, or if perhaps you find you're sleeping too soundly at night.)

None of which is meant to detract from Wang et al.'s accomplishment, which is stupendous.  It'll be fascinating to see what their neural network finds out when it moves beyond the proof-of-concept stage and turns its -- mind? -- to actual unsolved problems in physics.

It does leave me wondering, though, when all is said and done, if we'll be looking at a conscious emergent intelligence that might have needs, desires, preferences... and rights.  If so, it will dramatically shift our perspective as the unquestioned dominant species on Earth, not to mention generating minds who might decide that it is in the Earth's best interest to end that dominance permanently.

At which point it will be a little too late to say, "Wait, maybe this wasn't such a good idea."

******************************************

Last week's Skeptophilia book-of-the-week, Simon Singh's The Code Book, prompted a reader to respond, "Yes, but have you read his book on Fermat's Last Theorem?"

In this book, Singh turns his considerable writing skill toward the fascinating story of Pierre de Fermat, the seventeenth-century French mathematician who -- amongst many other contributions -- touched off over three hundred years of controversy by writing that there are no positive integer solutions to the equation a^n + b^n = c^n for any integer value of n greater than 2, then adding, "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain," and proceeding to die before elaborating on what this "marvelous proof" might be.

The attempts to recreate Fermat's proof -- or at least find an equivalent one -- began with Fermat's contemporaries Marin Mersenne, Blaise Pascal, and John Wallis, continued through later figures like Évariste Galois, and went on for the next three centuries to stump the greatest minds in mathematics.  Fermat's conjecture was finally proven correct by Andrew Wiles in 1994.

Singh's book Fermat's Last Theorem: The Story of a Riddle that Confounded the World's Greatest Minds for 350 Years describes the hunt for a solution and the tapestry of personalities that took on the search -- ending with a tour-de-force paper by soft-spoken British mathematician Andrew Wiles.  It's a fascinating journey, as enjoyable for a curious layperson as it is for the mathematically inclined -- and in Singh's hands, makes for a story you will thoroughly enjoy.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



1 comment:

  1. So the intelligence of Andrew Wiles simply recreated math that Pierre de Fermat already knew? And people are worried whether AI can discover "new" things :-D
