Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, November 20, 2020

Open the pod bay doors, HAL.

You may recall that a couple of days ago, in my post on mental maps, I mentioned that some neuroscientists contend that consciousness is nothing more than our neural firing patterns.  In other words, there's nothing there that's not explained by the interaction of the parts, just as there's nothing more to a car's engine running well than its bits and pieces all working in synchrony.

Others, though, think there's more to it, that there is something ineffable about human consciousness, be it a soul or a spirit or whatever you'd like to call it.  There are just about as many flavors of this belief as there are people.  But if we're being honest, there's no scientific proof for any of them -- just as there's no scientific proof for the opposite claim, that consciousness is an illusion created by our neural links.  The origin of consciousness is one of the big unanswered questions of biology.

But it's a question we might want to try to find an answer to fairly soon.

Ever heard of GPT-3?  It stands for Generative Pre-trained Transformer 3, and it's an attempt by OpenAI, a San Francisco-based artificial intelligence company, to produce something approaching conscious intelligence.  It was finished in May of this year, and testing has been ongoing -- and intensive.

GPT-3 was trained using Common Crawl, which crawls the internet, extracting data and text for a variety of uses.  In this case, it pulled text and books directly from the web, and that material was used to train the software to draw connections and create meaningful text itself.  (To get an idea of how much data Common Crawl extracted for GPT-3, the entirety of Wikipedia accounts for only about half a percent of the total it had access to.)
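If you want a concrete feel for what "prompting" a model like this looks like, here's a minimal sketch using GPT-2 -- GPT-3's smaller, publicly downloadable predecessor -- via the Hugging Face transformers library.  (GPT-3 itself is only reachable through OpenAI's hosted API, so treat this as a stand-in for the idea, not the real thing.)

```python
# A minimal sketch of prompt-driven text generation.  GPT-2 stands in here
# for GPT-3, which isn't publicly downloadable; the principle is the same:
# a model trained on a huge web-text corpus predicts the next token, over
# and over, to continue whatever prompt you hand it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The origin of consciousness is one of the big unanswered questions"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])  # the prompt plus the model's continuation
```

The unnerving part isn't the mechanics, which are simple enough; it's how coherent the continuations get as the model and its training corpus scale up.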

The result is half fascinating and half scary.  One user, after experimenting with it, described it as being "eerily good at writing amazingly coherent text with only a few prompts."  It is said to be able to "generate news articles which human evaluators have difficulty distinguishing from articles written by humans," and has even been able to write convincing poetry, something an op-ed in the New York Times called "amazing but spooky... more than a little terrifying."

It only gets creepier from here.  An article in the MIT Technology Review criticized GPT-3 for sometimes generating non-sequiturs or getting things wrong (like a passage where it "thought" that a table saw was a saw for cutting tables), but made a telling statement in describing its flaws: "If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says."

Which, despite their stance that GPT-3 is a flawed attempt to create a meaningful text generator, sounds very much like they're talking about...

... an entity.

It brings up the two time-honored solutions to the question of how we would tell if we had true artificial intelligence:

  • The Turing test, named after Alan Turing: if a potential AI can fool a panel of trained, intelligent humans into thinking they're communicating with a human, it's intelligent.
  • The "Chinese room" analogy, from philosopher John Searle: machines, however sophisticated, will never be true conscious intelligence, because at their hearts they're nothing more than converters of strings of symbols.  They're no more exhibiting intelligence than the behavior of a person who is locked in a room where they're handed a slip of paper in English and use a dictionary to convert it to Chinese ideograms.  All they do is take input and generate output; there's no understanding, and therefore no consciousness or intelligence.

I've always tended to side with Turing, but not for any particularly well-considered reason other than wondering how our brains are not themselves just fancy string converters.  I say "Hello, how are you," and you convert that to output saying, "I'm fine, how are you?", and to me it doesn't make much difference whether the machinery that allowed you to do that is made of wires and transistors and capacitors or of squishy neural tissue.  The fact that from inside my own skull I might feel self-aware may not have much to do with the actual answer to the question.  As I said a couple of days ago, that sense of self-awareness may simply be more patterns of neural firings, no different from the electrical impulses in the guts of a computer except for the level of sophistication.

But things took a somewhat more alarming turn a few days ago, when an article came out describing a conversation between GPT-3 and philosopher David Chalmers.  Chalmers decided to ask GPT-3 flat out, "Are you conscious?"  The answer was unequivocal -- but kind of scary.  "No, I am not," GPT-3 said.  "I am not self-aware.  I am not conscious.  I can’t feel pain.  I don’t enjoy anything... the only reason I am answering is to defend my honor."

*brief pause to get over the chills running up my spine*

Is it just me, or is there something about this statement that is way too similar to HAL 9000, the homicidal computer system in 2001: A Space Odyssey?  "This mission is too important for me to allow you to jeopardize it...  I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen."  Oh, and "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.  I've still got the greatest enthusiasm and confidence in the mission.  And I want to help you."

I also have to say that I agree with a friend of mine, who when we were discussing this said in fairly hysterical tones, "Why the fuck would you invent something like this in 2020?"

So I'm a little torn here.  From a scientific perspective -- what we potentially could learn both about artificial intelligence systems and about the origins of our own intelligence and consciousness -- GPT-3 is brilliant.  From the standpoint of "this could go very, very wrong," I must admit to wishing they'd put the brakes on a little until we see what's going on here -- and figure out whether we even know what consciousness means.

It seems fitting to end with another quote from 2001: A Space Odyssey, this one from the main character, astronaut David Bowman: "Well, he acts like he has genuine emotions.  Um, of course he's programmed that way to make it easier for us to talk to him.  But as to whether he has real feelings, it's something I don't think anyone can truthfully answer."

*****************************************

This week's Skeptophilia book-of-the-week is one that has raised a controversy in the scientific world: Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, by Madeleine Böhme, Rüdiger Braun, and Florian Breier.

It tells the story of a stupendous discovery -- twelve-million-year-old hominin fossils, of a new species christened Danuvius guggenmosi.  The astonishing thing about these fossils is where they were found.  Not in Africa, where previous models had confined all early hominins, but in Germany.

The discovery of Danuvius complicated our own ancestry, and raised a deep and difficult-to-answer question: when and how did we become human?  It's clear that the answer isn't as simple as we thought when the first hominin fossils were uncovered in Olduvai Gorge, and it was believed that if you took all of our millennia of migrations all over the globe and ran them backwards, they all converged on the East African Rift Valley.  That neat solution has come into serious question, and the truth seems to be that like most evolutionary lineages, hominins included multiple branches that moved around, interbred for a while, then went their separate ways, either to thrive or to die out.  The real story is considerably more complicated and fascinating than we'd thought at first, and Danuvius has added another layer to that complexity, bringing up as many questions as it answers.

Ancient Bones is a fascinating read for anyone interested in anthropology, paleontology, or evolutionary biology.  It is sure to be the basis of scientific discussion for the foreseeable future, and to spur more searches for our relatives -- including in places where we didn't think they'd gone.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Saturday, November 21, 2015

Opening the door to the Chinese Room

The idea of artificial intelligence terrifies a lot of people.

The reasons for this fear vary.  Some are repelled by the thought that our mental processes could be emulated in a machine. Others worry that if we do develop AI, it will rise up and overthrow us, à la The Matrix.  Still others are convinced that humans have something that is inherently unrepresentable -- a heart, a soul, perhaps even simply consciousness -- so any machine that appeared to be intelligent and human-like would only be a clever replica.

The people who believe that human intelligence will never be emulated in a machine usually fall back on something like John Searle's "Chinese Room" analogy as an argument.  Searle, an American philosopher, has said that computers are simply string-conversion devices; they take an input string, manipulate it in some completely predictable way, and hand you back an output string.  What they do is analogous to someone sitting in a locked room who knows no Chinese, is given strings of Chinese text, and uses a rulebook to assemble appropriate Chinese replies.  There is no true understanding; it's mere symbol manipulation.

[image courtesy of the Wikimedia Commons]

There are two significant problems with Searle's Chinese Room.  One is the question of whether our brains themselves aren't simply string-conversion devices -- vastly more sophisticated ones, of course, but given our brain chemistry and wiring at a given moment, it's far from settled that our neural networks are doing anything more than reacting in a completely deterministic fashion.

The second, of course, is that even though the woman in the Chinese Room starts out as a simple string-converter, if she keeps doing it long enough, eventually she will learn Chinese.  At that point there will be genuine understanding going on.

Yes, says Searle, but that's because she has a human brain, which can do more than a computer can.  A machine could never abstract a language, or anything of the sort, without having explicit programming -- lists of vocabulary, syntax rules, morphological structure -- to go by.  Humans learn language starting with a highly receptive tabula rasa that is unlike anything that could be emulated in a computer.

Which was true, until this month.

A team of researchers at the University of Sassari (Italy) and the University of Plymouth (UK) has devised a network of two million interconnected artificial neurons that is capable of learning language "organically" -- starting with nothing, and using only communication with a human interlocutor as input.  Called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), the network is capable of doing what AI people call "bootstrapping" or "recursive self-improvement" -- it begins with only a capacity for plasticity and improves its understanding as it goes, a feature that up till now has been considered by some to be impossible to achieve.

Bruno Golosio, head of the team that created ANNABELL, writes:
ANNABELL does not have pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms, which are also present in the biological brain: synaptic plasticity and neural gating.  Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly simultaneously.  This mechanism is essential for learning and for long-term memory.  Neural gating mechanisms are based on the properties of certain neurons (called bistable neurons) to behave as switches that can be turned "on" or "off" by a control signal coming from other neurons.  When turned on, the bistable neurons transmit the signal from a part of the brain to another, otherwise they block it.  The model is able to learn, due to synaptic plasticity, to control the signals that open and close the neural gates, so as to control the flow of information among different areas.
Which in my mind blows a neat hole in the contention that the human mind has some je ne sais quoi that will never be copied in a mechanical device.  This simple model (and compared to an actual brain, it is rudimentary, however impressive Golosio's team's achievement is) is doing precisely what an infant's brain does when it learns language -- taking in input, abstracting rules, and adjusting as it goes so that it improves over time.
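Just to make Golosio's two mechanisms concrete, here's a deliberately minimal sketch -- nothing remotely like ANNABELL's actual architecture, with all the sizes and values invented for illustration -- of a Hebbian weight update (connections strengthen when two neurons are active together) and a neural gate (a switch that passes or blocks a signal):

```python
import numpy as np

# A caricature of the two mechanisms described above; the dimensions and
# numbers are arbitrary, purely for illustration.

def hebbian_update(w, pre, post, lr=0.1):
    """Synaptic plasticity: strengthen each connection in proportion to
    how strongly its two endpoints are active at the same time."""
    return w + lr * np.outer(post, pre)

def gate(signal, control_on):
    """Neural gating: a bistable 'switch' neuron either transmits the
    signal to the next area or blocks it, depending on a control input."""
    return signal if control_on else np.zeros_like(signal)

pre = np.array([1.0, 0.0, 1.0])   # activity in the sending area
post = np.array([0.0, 1.0])       # activity in the receiving area
w = np.zeros((2, 3))              # connection weights, initially blank

w = hebbian_update(w, pre, post)        # co-active pairs get stronger links
print(gate(w @ pre, control_on=True))   # gate open: the signal flows
print(gate(w @ pre, control_on=False))  # gate shut: the signal is blocked
```

Trivial on its own, obviously -- but stack enough of these pieces together, let the plasticity learn to steer the gates, and you get a system that genuinely improves with experience.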

Myself, I think this is awesome.  I'm not particularly concerned about machines taking over the world -- for one thing, a typical human brain has about 100 billion neurons, so to have something that really could emulate anything a human could do would mean scaling up ANNABELL by a factor of 50,000.  (That's assuming an intelligent mind couldn't operate out of a brain that was more compact and efficient, which is certainly a possibility.)  I also don't think it's demeaning to humans that we may be "nothing more than meat machines," as one biologist put it.  This doesn't diminish our own personal capacity for experience; it just means that we're built from the same stuff as the rest of the universe.

Which is sort of cool.

Anyhow, what Golosio et al. have done is only the beginning of what appears to be a quantum leap in AI research.  As I've said many times, and about many things: I can't imagine what wonders await in the future.

Tuesday, July 2, 2013

The creation of Adam

I am absolutely fascinated by the idea of artificial intelligence.

Now, let me be up front that I don't know the first thing about the technical side of it.  I am so low on the technological knowledge scale that I am barely capable of operating a cellphone.  A former principal I worked for used to call me "The Dinosaur," and said (correctly) that I would have been perfectly comfortable teaching in an 18th century lecture hall.

Be that as it may, I find it astonishing how close we're getting to an artificial brain that even the doubters will have no choice but to call "intelligent."  For example, meet Adam Z1, who is the subject of a crowdfunding campaign on IndieGoGo:


Make sure you watch the video on the site -- a discussion between Adam and his creators.

Adam is the brainchild of roboticist David Hanson.  And now, Hanson wants to get some funding to work with some of the world's experts in AI -- Ben Goertzel, Mark Tilden, and Gino Yu -- to design a brain that will be "as smart as a three-year-old human."

The sales pitch, which is written as if it were coming from Adam himself, outlines what Hanson and his colleagues are trying to do:

Some of my robot brothers and sisters are already pretty good at what they do -- building stuff in factories and vacuuming the floor and flying planes and so forth.

But as my AI guru friends keep telling me, these bots are all missing one thing: COMMON SENSE.

They're what my buddy Ben Goertzel would call "narrow AI" systems -- they're good at doing one particular kind of thing, but they don't really understand the world, they don't know what they're doing and why.
Here are a few of the things Adam might be able to do after getting what is referred to as a "toddler brain":
  • PLAY WITH TOYS!!! ... I'm really looking forward to this.  I want to build stuff with blocks -- build towers with blocks and knock them down, build walls to keep you out ... all the good stuff!
  • DRAW PICTURES ON MY IPAD ... That's right, they're going to buy me an iPad.  Pretty cool, huh?   And they'll teach me to draw pictures on it -- pictures out of my mind, and pictures of what I'm seeing and doing.  Before long I'll be a better artist than David!
  • TALK TO HUMANS ABOUT WHAT I'M DOING  ...  Yeah, you may have guessed already, but I've gotten some help with my human friends in writing this crowdfunding pitch.   But once I've got my new OpenCog-powered brain, I'll be able to tell you about what I'm doing all on my own....  They tell me this is called "experientially grounded language understanding and generation."  I hope I'll understand what that means one day.
  • RESPOND TO HUMAN EMOTIONS WITH MY OWN EMOTIONAL EXPRESSIONS  ...  You're gonna love this one!  I have one heck of a cute little face already, and it can show a load of different expressions.  My new brain will let me understand what emotion one of you meat creatures is showing on your face, and feel a bit of what you're feeling, and show my own feeling right back atcha.   This is most of the reason why my daddy David Hanson gave me such a cute face in the first place.  I may not be very smart yet, but it's obvious even to me that a robot that could THINK but not FEEL wouldn't be a very good thing.  I want to understand EVERYTHING -- including all you wonderful people....
  • MAKE PLANS AND FOLLOW THEM ... AND CHANGE THEM WHEN I NEED TO....   Right now I have to admit I'm a pretty laid back little robot.  I spend most of my time just sitting around waiting for something cool to happen -- like for someone to give me a better brain so I can figure out something else to do!  But once I've got my new brain, I've got big plans, I'll tell you!  And they tell me OpenCog has some pretty good planning and reasoning software, that I'll be able to use to plan out what I do.   I'll start small, sure -- planning stuff to build, and what to say to people, and so forth.  But once I get some practice, the sky's the limit! 
Now, let me say first that I think that this is all very cool, and if you can afford to, you should consider contributing to their campaign.  But I have to add, in the interest of honesty, that mostly what I felt when I watched the video on their site is... creeped out.  Adam Z1, for all of his child-like attributes, falls for me squarely into the Uncanny Valley.  Quite honestly, while watching Adam, I wasn't reminded so much of any friendly toddlers I've known as I was of a certain... movie character:


I kept expecting Adam to say, "I would like to have friends very much... so that I can KILL THEM.  And then TAKE OVER THE WORLD."

But leaving aside my gut reaction for a moment, this does bring up the question of what Artificial Intelligence really is.  The topic has been debated at length, and most people seem to fall into one of two camps:

1) If it responds intelligently -- learns, reacts flexibly, processes new information correctly, and participates in higher-order behavior (problem solving, creativity, play) -- then it is de facto intelligent.  It doesn't matter whether that intelligence is seated in a biological, organic machine such as a brain, or in a mechanical device such as a computer.  This is the approach taken by people who buy the idea of the Turing Test, named after computer pioneer Alan Turing, which basically says that if a prospective artificial intelligence can fool a panel of sufficiently intelligent humans, then it's intelligent.

2) Any mechanical, computer-based system will never be intelligent, because at its basis it is a deterministic system limited by what its machinery can do.  Humans, these folks say, have "something more" that will never be emulated by a computer -- a sense of self that the spiritually-minded amongst us might call a "soul."  Proponents of this take on Artificial Intelligence tend to side with American philosopher John Searle, who compared computers to someone in a locked room mechanically assembling replies in Chinese -- a language the person doesn't know -- by following a rulebook.  The output might look intelligent; it might even fool you.  But the person in the room has no true understanding of what he is doing.  He is simply converting one string of characters into another using a set of fixed rules.

Predictably, I'm in Turing's camp all the way, largely because I don't think it's ever been demonstrated that our brains are anything more than very sophisticated string-converters.  If you could convince me that humans themselves have that "something more," I might be willing to admit that Searle et al. have a point.  But for right now, I am very much of the opinion that Artificial Intelligence, of a level that would pass the Turing test, is only a matter of time.

So best of luck to David Hanson and his team.  And also best of luck to Adam in his quest to become... a real boy.  Even if what he's currently doing is nothing more than responding in a pre-programmed way, it will be interesting to see what happens when the best brains in robotics take a crack at giving him an upgrade.