Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, March 24, 2021

The emergent mind

One of the arguments I've heard the most often in discussions of the possibility of developing a true artificial intelligence is that computers are completely mechanistic.  I've heard this framed as, "You can only get out of them what you put into them."  In other words, you could potentially program a machine to simulate intelligence, perhaps even simulate it convincingly.  But there's nothing really in there -- it's just an input/output device no more intelligent than a pocket calculator, albeit a highly sophisticated one.

My question at this juncture is usually, "How are our brains any different?"  Our neurons act on electrical voltage shifts; they fire (or not) based upon the movement of sodium and potassium ions, modulated by a complex group of chemicals called neurotransmitters that alter the neuron's ability to move those ions around.  That our minds are a construct of this elaborate biochemistry is supported by the fact that if you introduce substances that alter the concentrations or reactivity of the neurotransmitters -- better known as "psychoactive drugs" -- it can radically alter perception, emotion, personality, and behavior.

But there's the nagging feeling, even amongst those of us who are diehard materialists, that there's something more in there, an ineffable ghost in the machine that is somehow independent of the biological underpinnings.  Would a sufficiently complex electronic brain have this perception of self?  Could an artificial intelligence eventually be capable of insight, of generating something more than the purely mechanical, rule-driven output we usually associate with computers?  Of -- in other words -- creativity?

Or will that always be in the realm of science fiction?

If you doubt an artificial intelligence could ever have insight or creativity, some research out of a collaboration between Tsinghua University and the University of California - San Diego may make you want to reconsider your stance.

Ce Wang and Hui Zhai (Tsinghua) and Yi-Zhuang You (UC San Diego) have created an artificial neural network that is able to look at raw data and figure out the equations that govern it.  In other words, it does what scientists do -- finds a mathematical model that accounts for observations.  And we're not talking about something simple like F = ma, here; the Wang et al. neural network was given simulated experimental data on where a quantum particle is likely to be found in various potentials, and was able to develop...

... the Schrödinger Wave Equation.

To put this in perspective, the first data that gave us humans insight into the quantum-mechanical nature of matter and light -- Max Planck's studies of blackbody radiation in 1900 -- led to the highly non-intuitive notion that light is emitted in discrete packets, each carrying an energy proportional to its frequency, with the proportionality factor now known as Planck's constant.  From there, further experimentation with particle momenta and positions by such luminaries as Albert Einstein, Louis de Broglie, and Werner Heisenberg led to the discovery of the weird wave/particle duality (subatomic particles are, in some sense, a wave and a particle simultaneously, and which properties you see depend on which you look for).  Finally, Erwin Schrödinger put the whole thing together in the fundamental law of quantum mechanics, now called the Schrödinger Wave Equation in his honor.
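In symbols, Planck's hypothesis was that an oscillator of frequency ν can only carry energies that are whole-number multiples of hν, where h ≈ 6.626 × 10⁻³⁴ J·s:

$$E_n = n\,h\nu, \qquad n = 1, 2, 3, \ldots$$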

But it took twenty-five years.

For those of you who aren't physics types, here's the equation we're talking about:
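In its standard time-dependent, single-particle form -- where ψ is the wave function, V the potential energy, m the particle's mass, and ħ the reduced Planck constant:

$$i\hbar\,\frac{\partial \psi(x,t)}{\partial t} \;=\; -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi(x,t)}{\partial x^2} \;+\; V(x)\,\psi(x,t)$$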

And to make you feel better, I majored in physics and I can't really say I understand it, either.

Here's how Wang et al. describe their neural network's accomplishment:

Can physical concepts and laws emerge in a neural network as it learns to predict the observation data of physical systems?  As a benchmark and a proof-of-principle study of this possibility, here we show an introspective learning architecture that can automatically develop the concept of the quantum wave function and discover the Schrödinger equation from simulated experimental data of the potential-to-density mappings of a quantum particle.  This introspective learning architecture contains a machine translator to perform the potential to density mapping, and a knowledge distiller auto-encoder to extract the essential information and its update law from the hidden states of the translator, which turns out to be the quantum wave function and the Schrödinger equation.  We envision that our introspective learning architecture can enable machine learning to discover new physics in the future.

I read this with my jaw hanging open.  I think I even said "holy shit" a couple of times.  Because they're not stopping with the network recreating science we already know; they're talking about having it find new science that we currently don't understand fully -- or perhaps, that we know nothing about. 
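To get a feel for the shape of the thing, here's a minimal, hypothetical sketch of the idea (my own toy version, not the authors' code): a "translator" network learns to map a discretized potential to the particle's density, and a "knowledge distiller" auto-encoder squeezes the translator's hidden state through a tiny bottleneck -- the place where, in the actual paper, the wave function and its update law emerged.  Layer sizes, the grid size, and the training data below are all placeholders.

```python
# A toy, hypothetical sketch of the Wang et al. idea -- NOT their code.
# Translator: potential V(x) -> density n(x).  Distiller: auto-encoder on
# the translator's hidden state.  All sizes and data are stand-ins.
import torch
import torch.nn as nn

N = 64  # number of spatial grid points (assumed discretization)

class Translator(nn.Module):
    """Potential-to-density mapping, exposing its hidden state."""
    def __init__(self, hidden=128):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(N, hidden), nn.Tanh())
        self.decode = nn.Sequential(nn.Linear(hidden, N), nn.Softplus())

    def forward(self, V):
        h = self.encode(V)          # hidden state: the network's internal "concept"
        return self.decode(h), h    # predicted density, hidden state

class Distiller(nn.Module):
    """Auto-encoder that squeezes the hidden state through a bottleneck."""
    def __init__(self, hidden=128, bottleneck=2):
        super().__init__()
        self.enc = nn.Linear(hidden, bottleneck)
        self.dec = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        z = self.enc(h)             # compressed "essential information"
        return self.dec(z), z

translator, distiller = Translator(), Distiller()
opt = torch.optim.Adam(
    list(translator.parameters()) + list(distiller.parameters()), lr=1e-3)

# In the real study these come from simulated quantum experiments;
# random stand-ins here just so the sketch runs.
V_batch = torch.randn(32, N)
n_batch = torch.rand(32, N)

for step in range(1000):
    n_pred, h = translator(V_batch)
    h_rec, _ = distiller(h)
    loss = (nn.functional.mse_loss(n_pred, n_batch)
            + nn.functional.mse_loss(h_rec, h))
    opt.zero_grad()
    loss.backward()
    opt.step()
```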

It's hard to imagine calling something that can do this anything other than a true intelligence.  Yes, it's limited -- a neural network that discovers new physics can't write a poem or create a piece of art or hold a conversation -- but as those hurdles are passed one by one, it's not hard to envision putting the pieces together into one system that is not so far off from the AI brains envisioned by science fiction.

As exciting as it is, this also makes me a little nervous.  Deep thinkers such as Stephen Hawking, Nick Bostrom, Marvin Minsky, and Roman Yampolskiy have all urged caution in the development of AI, suggesting that the leap from artificial neural networks being beneath human intelligence levels to being far, far beyond them could happen suddenly.  When an artificial intelligence gains the ability to modify its own source code to improve its own functionality -- or, perhaps, to engage in such human-associated behaviors as self-preservation -- we could be in serious trouble.  (The Wikipedia page on the existential risk from artificial general intelligence gives a great overview of the current thought about this issue, if you're interested, or if perhaps you find you're sleeping too soundly at night.)

None of which is meant to detract from Wang et al.'s accomplishment, which is stupendous.  It'll be fascinating to see what their neural network finds out when it moves beyond the proof-of-concept stage and turns its -- mind? -- onto actual unsolved problems in physics.

It does leave me wondering, though, when all is said and done, if we'll be looking at a conscious emergent intelligence that might have needs, desires, preferences... and rights.  If so, it will dramatically shift our perspective as the unquestioned dominant species on Earth, not to mention generating minds who might decide that it is in the Earth's best interest to end that dominance permanently.

At which point it will be a little too late to say, "Wait, maybe this wasn't such a good idea."

******************************************

Last week's Skeptophilia book-of-the-week, Simon Singh's The Code Book, prompted a reader to respond, "Yes, but have you read his book on Fermat's Last Theorem?"

In this book, Singh turns his considerable writing skill toward the fascinating story of Pierre de Fermat, the seventeenth-century French mathematician who -- amongst many other contributions -- touched off over three hundred years of controversy by writing that there were no integer solutions for the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than 2, then adding, "I have discovered a truly marvelous proof of this, which this margin is too narrow to contain," and proceeding to die before elaborating on what this "marvelous proof" might be.

The attempts to recreate Fermat's proof -- or at least find an equivalent one -- began with his contemporaries Marin Mersenne, Blaise Pascal, and John Wallis, continued through later giants such as Évariste Galois, and went on to stump the greatest minds in mathematics for the next three centuries.  Fermat's conjecture was finally proven correct by Andrew Wiles in 1994.

Singh's book Fermat's Last Theorem: The Story of a Riddle that Confounded the World's Greatest Minds for 350 Years describes the hunt for a solution and the tapestry of personalities that took on the search -- ending with a tour-de-force paper by soft-spoken British mathematician Andrew Wiles.  It's a fascinating journey, as enjoyable for a curious layperson as it is for the mathematically inclined -- and in Singh's hands, makes for a story you will thoroughly enjoy.




Thursday, January 11, 2018

Reconstructing mental images

It has long been the Holy Grail of neuroscience to design a device that can not simply image the brain's gross anatomy (a CT scan can do that) or show which parts of the brain are active (an fMRI can do that), but can take neural firing patterns and reconstruct what people are thinking.

Which would, honestly, amount to reading someone's mind.

And a significant step has been taken toward that goal by a team of neuroscientists at the ATR Computational Neuroscience Laboratories of Kyoto University.  In a paper that was published just two weeks ago, the scientists, Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani, describe a technology that can take the neural output of a person and use it to come up with an image of what the person was looking at.

The paper, called "Deep Image Reconstruction from Human Brain Activity," is available open-access on the preprint site bioRxiv, and all of you should take the time to read it, because this quick look isn't going to do it justice.  The idea is that the researchers take patterns of brain activity recorded with fMRI and, from those, reconstruct images that are nothing short of astonishing.

The authors write:
Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its [deep neural network] features similar to those decoded from human brain activity at multiple layers.  We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery.  While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars.  A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images.
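To give a sense of the mechanics, here's a minimal, hypothetical sketch of the reconstruction step (my own toy version in PyTorch, not the authors' code): start from random pixels and nudge them by gradient descent until a pretrained network's features for the image match the features decoded from the subject's brain activity.  The feature extractor, the layer choice, and the "decoded" features below are stand-ins.

```python
# A toy sketch (mine, not the paper's implementation) of feature-matching
# image reconstruction: optimize an image's pixels so a pretrained DNN's
# features match features "decoded from brain activity."  Here the decoded
# features are random placeholders; in the study they come from a decoder
# trained on fMRI data.
import torch
import torchvision.models as models

# Pretrained convolutional feature extractor (a stand-in for the paper's DNN).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(img, upto=10):
    """Run the image through the first `upto` layers of the network."""
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == upto:
            break
    return x

# Features "decoded from brain activity" -- a random placeholder with the
# shape that layer 10 of VGG-19 produces for a 224x224 input.
target_feat = torch.randn(1, 256, 56, 56)

# Start from noise and optimize the pixel values themselves.
img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    loss = torch.nn.functional.mse_loss(features(img), target_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)   # keep the pixels in a displayable range
```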
I'm not going to show you all of the results -- like I said, I want you to take a look at the paper itself -- but here are the results for some images, using three different human subjects:


The top is the image the subject was shown, and underneath are the images the software came up with.

What astonishes me is not just the accuracy -- the spots on the jaguar, the tilt of the stained glass window -- but the consistency from one human subject to the next.  I realize that the results are still pretty rudimentary; no one would look at the image on the bottom right and guess it was an airplane.  (A UFO, perhaps...)  But the technique is only going to improve.  The authors write:
Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content.  However, it has been limited to the reconstruction with low-level image bases or to the matching to exemplars.  Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features...  [H]uman judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance visual quality of generated images.  The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
This is amazingly cool, but I have to admit that it's a little scary.  The idea that we're approaching the point where a device can read people's minds will have some major impacts on issues of privacy.  I mean, think about it; do you want someone able to tell what you're thinking -- or even what you're picturing in your mind -- without your consent?  And if this technology eventually becomes sensitive enough to do with a hand-held device instead of an fMRI headset, how could you stop them?

Maybe I'm being a little alarmist, here.  I know I have Luddite tendencies, so I have to stop myself from yelling "Back in my day we wrote in cuneiform on clay tablets!  And we didn't complain about it!" whenever someone starts telling me about new advances in technology.  But this one...  all I can say is the "wow" is tempered by a sense of "... but wait a moment."

As Michael Crichton put it in Jurassic Park: "[S]cience is starting not to fit the world any more.  [S]cience cannot help us decide what to do with that world, or how to live.  Science can make a nuclear reactor, but it cannot tell us not to build it.  Science can make pesticide, but cannot tell us not to use it."

Put another way, science tells us what we can do, not what we should do.  For the latter, we have to stop and think -- something humans as a whole are not very good at.

Saturday, November 21, 2015

Opening the door to the Chinese Room

The idea of artificial intelligence terrifies a lot of people.

The reasons for this fear vary.  Some are repelled by the thought that our mental processes could be emulated in a machine. Others worry that if we do develop AI, it will rise up and overthrow us, à la The Matrix.  Still others are convinced that humans have something that is inherently unrepresentable -- a heart, a soul, perhaps even simply consciousness -- so any machine that appeared to be intelligent and human-like would only be a clever replica.

The people who believe that human intelligence will never be emulated in a machine usually fall back on something like John Searle's "Chinese Room" argument.  Searle, an American philosopher, has said that computers are simply string-conversion devices; they take an input string, manipulate it in some completely predictable way, and then hand you an output string.  What they do is analogous to someone sitting in a locked room who receives strings of Chinese text and uses a rulebook to look up and copy out the appropriate responses, without understanding a word of Chinese.  There is no true understanding; it's mere symbol manipulation.
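Just to make the picture concrete, here's a toy illustration (mine, not Searle's) of that "string-conversion" view of computation -- a lookup table that maps input strings to output strings with no understanding anywhere in the loop.  The rulebook entries are invented:

```python
# A deliberately dumb "Chinese Room": input string in, rulebook lookup,
# output string out.  Nothing in here understands Chinese (or anything else).
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(message: str) -> str:
    """Return whatever the rulebook dictates; 'understand' nothing."""
    return RULEBOOK.get(message, "请再说一遍。")  # default: "Please say that again."

print(chinese_room("你好吗？"))   # 我很好，谢谢。
```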

[image courtesy of the Wikimedia Commons]

There are two significant problems with Searle's Chinese Room.  One is the question of whether our brains themselves aren't simply string-conversion devices.  Vastly more sophisticated ones, of course; but given our brain chemistry and wiring at any given moment, it's far from settled that our own neural networks aren't responding in a completely deterministic fashion.

The second, of course, is the problem that even though the woman in the Chinese Room starts out being a simple string-converter, if she keeps doing it long enough, eventually she will learn Chinese.  At that point there will be understanding going on.

Yes, says Searle, but that's because she has a human brain, which can do more than a computer can.  A machine could never abstract a language, or anything of the sort, without having explicit programming -- lists of vocabulary, syntax rules, morphological structure -- to go by.  Humans learn language starting with a highly receptive tabula rasa that is unlike anything that could be emulated in a computer.

Which was true, until this month.

A team of researchers at the University of Sassari (Italy) and the University of Plymouth (UK) has devised a network of two million interconnected artificial neurons that is capable of learning language "organically" -- starting with nothing, and using only communication with a human interlocutor as input.  Called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), this network is capable of doing what AI people call "bootstrapping" or "recursive self-improvement" -- it begins with only a capacity for plasticity and improves its understanding as it goes, a feature that up till now some have considered impossible to achieve.

Bruno Golosio, head of the team that created ANNABELL, writes:
ANNABELL does not have pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms, which are also present in the biological brain: synaptic plasticity and neural gating.  Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly simultaneously.  This mechanism is essential for learning and for long-term memory.  Neural gating mechanisms are based on the properties of certain neurons (called bistable neurons) to behave as switches that can be turned "on" or "off" by a control signal coming from other neurons.  When turned on, the bistable neurons transmit the signal from a part of the brain to another, otherwise they block it.  The model is able to learn, due to synaptic plasticity, to control the signals that open and close the neural gates, so as to control the flow of information among different areas.
Which in my mind blows a neat hole in the contention that the human mind has some je ne sais quoi that will never be copied in a mechanical device.  This simple model (and compared to an actual brain, it is rudimentary, however impressive Golosio's team's achievement is) is doing precisely what an infant's brain does when it learns language -- taking in input, abstracting rules, and adjusting as it goes so that it improves over time.
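To make those two mechanisms concrete, here's a toy sketch (mine, not ANNABELL's actual implementation -- all the numbers are invented): a Hebbian plasticity rule that strengthens connections between neurons that fire together, and a gating function that transmits or blocks a signal depending on a control neuron's activity.

```python
# Toy versions of the two mechanisms Golosio describes: Hebbian synaptic
# plasticity and neural gating.  Not ANNABELL's code; sizes and values are
# made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(4, 4))   # synaptic strengths among 4 neurons

def hebbian_update(w, pre, post, lr=0.01):
    """Strengthen connections between simultaneously active neurons."""
    return w + lr * np.outer(post, pre)

def gate(signal, control, threshold=0.5):
    """Bistable gating: pass the signal only when the control neuron is 'on'."""
    return signal if control > threshold else np.zeros_like(signal)

pre  = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity
post = np.array([0.0, 1.0, 1.0, 0.0])   # postsynaptic activity
weights = hebbian_update(weights, pre, post)

print(gate(weights @ pre, control=0.9))  # gate open: the signal gets through
print(gate(weights @ pre, control=0.1))  # gate closed: the signal is blocked
```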

Myself, I think this is awesome.  I'm not particularly concerned about machines taking over the world -- for one thing, a typical human brain has about 100 billion neurons, so to have something that really could emulate anything a human could do would take scaling up ANNABELL by a factor of 50,000.  (That's assuming that an intelligent mind couldn't operate out of a brain that was more compact and efficient, which is certainly a possibility.)  I also don't think it's demeaning to humans that we may be "nothing more than meat machines," as one biologist put it.  This doesn't diminish our own personal capacity for experience, it just means that we're built from the same stuff as the rest of the universe.

Which is sort of cool.

Anyhow, what Golosio et al. have done is only the beginning of what appears to be a quantum leap in AI research.  As I've said many times, and about many things: I can't imagine what wonders await in the future.