Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, October 7, 2025

Island of the dolls

One of the very first topics I addressed here at Skeptophilia -- only a few months after I started, in fall of 2010 -- was the idea of the uncanny valley.

The term was coined by Japanese robotics engineer Masahiro Mori way back in 1970, in his essay Bukimi No Tani (不気味の谷), the title of which roughly translates to "the uncanny valley."  The idea, which you're probably familiar with, is that if you map out our emotional response to a face as a function of its proximity to a normal human face, you find a fascinating pattern.  Faces very different from our own -- animal faces, stuffed toys, and stylized faces (like the famous "smiley face"), for example -- usually elicit positive, or at least neutral, responses.  Normal human faces, of course, are usually viewed positively.

Where you run into trouble is when a face is kinda similar to a human face, but not similar enough.  This is why clowns frequently trigger fear rather than amusement.  You may recall that the animators of the 2004 movie The Polar Express ran headlong into this, when the animation of the characters, especially the Train Conductor (who was supposed to be a nice character), freaked kids out instead of charming them.  Roboticists have been trying like mad to create a humanoid robot whose face doesn't make people recoil in horror, with (thus far) little success.

That dip in the middle, between very non-human faces and completely human ones, is what Mori called "the uncanny valley."
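If you like seeing the shape rather than just reading about it, the curve is easy to sketch.  The few lines of Python below are purely illustrative (the numbers are invented, not Mori's data): affinity climbs with human-likeness, plunges into a valley just short of fully human, then recovers at a genuinely human face.

import numpy as np
import matplotlib.pyplot as plt

# Purely illustrative numbers: affinity rises with human-likeness, dips sharply
# just short of fully human (the "valley"), then recovers at a real human face.
likeness = np.linspace(0, 1, 500)   # 0 = nothing like a human face, 1 = an actual human face
affinity = (
    0.6 * likeness                                        # gentle rise for stylized faces
    - 1.2 * np.exp(-((likeness - 0.85) ** 2) / 0.005)     # the dip: almost, but not quite, human
    + 0.9 * np.exp(-((likeness - 1.00) ** 2) / 0.001)     # recovery at a genuinely human face
)

plt.plot(likeness, affinity)
plt.axvspan(0.78, 0.92, alpha=0.2, label="the uncanny valley (roughly)")
plt.xlabel("Human-likeness")
plt.ylabel("Emotional response")
plt.legend()
plt.show()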

Why this happens is a matter of conjecture.  Some psychologists have speculated that the not-quite-human-enough faces that elicit the strongest negative reactions often have a flat affect and a mask-like quality, which might act as primal triggers warning us about people with severe mental disorders like psychosis.  But the human psyche is a complex place, and it may well be that the reasons for the near-universal terror sparked by characters like The Gangers in the Doctor Who episode "The Almost People" are multifaceted.


What's certain is that this aversion to faces in the uncanny valley exists across cultures.  Take, for example, a place I found out about only yesterday -- Mexico's Isla de las Muñecas, the "Island of the Dolls."

The island is in Lake Xochimilco, south of Mexico City, and it was owned by a peculiar recluse named Don Julián Santana Barrera.  Some time in the 1940s, so the story goes, Barrera found the body of a girl who had drowned in the shallows of the lake (another version is that he saw her drowning and was unable to save her).  The day after she died, Barrera found a doll floating in the water, and he became convinced that it was the girl's spirit returning.  So he put the doll on display, and started looking through the washed up flotsam and jetsam for more.

He found more.  Then he started trading produce he'd raised with the locals for more dolls.  Ultimately it became an obsession, and over the next five decades he collected over a thousand of them (along with assorted parts).  The place became a site for pilgrims, who were convinced that the dolls housed the spirits of the dead.  Legends arose that visitors saw the dolls moving or opening their eyes -- and that some heard them whispering to each other.

Barrera himself died in 2001 under (very) mysterious circumstances.  His nephew had come to help him -- at that point he was around eighty years old -- and the two were out fishing in the lake when the old man became convinced he heard mermaids calling to him.  The nephew rowed them both to shore and went to get assistance, but when he returned his uncle was face down in the water, drowned...

... at the same spot where he'd discovered the little girl's body, over fifty years earlier.

Since then, the island has been popular as a destination for dark tourism -- the attraction some people have for places associated with injury, death, or tragedy.  It was the filming location for the extremely creepy music video Lady Gaga released just a month ago, "The Dead Dance."

There's no doubt that dolls fall squarely into the uncanny valley for a lot of people.  Their still, unchanging expressions sit right in that middle ground between human and non-human -- which goes a long way toward explaining the success of horror flicks like Chucky and M3gan.

And you can see why Mexico's Island of the Dolls has the draw it does.  You don't even need to believe in disembodied spirits of the dead to get the chills from it.

[Image licensed under the Creative Commons Esparta Palma, Xochimilco Dolls' Island, CC BY 2.0]

What astonishes me, though, is that Barrera himself wanted to live there.  I mean, I'm a fairly staunch disbeliever in all things paranormal, and those things still strike me as scary as fuck.

If I ever visit Mexico, I might be persuaded to go to the island.  But no way in hell would I spend the night there.

Just because I'm a skeptic doesn't mean I'm not suggestible.  In fact, a case could be made that I became a skeptic precisely because I'm so suggestible.  After all, the other option was running around making little whimpering noises all the time, which is kind of counterproductive.

In any case, I'll be curious to hear what my readers think.  Are you susceptible to the uncanny valley?  Or resistant enough that you'd stay overnight on Isla de las Muñecas?

Maybe bring along a clown, for good measure?

Me, I'm creeped out just thinking about it.

****************************************


Wednesday, April 3, 2024

Marching into the uncanny valley

"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

That quote from Michael Crichton's Jurassic Park kept going through my head as I read about the latest in robotics from Columbia University -- a robot that can recognize a human facial expression, then mimic it so fast that it looks like it's responding to emotion the way a real human would.

One of the major technical problems with trying to get robots to emulate human emotions is that up until now, they haven't been able to respond quickly enough to make it look natural.  A delayed smile, for example, comes across as forced; on a mechanical face it drops right into the uncanny valley, the phenomenon Japanese roboticist Masahiro Mori described in 1970: an expression or gesture that is close to human, but not quite close enough.  Take, for example, "Sophia," the interactive robot unveiled back in 2016 that was able to mimic human expressions, but for most people generated an "Oh, hell no" response rather than the warm-and-trusting-confidant response the roboticists were presumably shooting for.  The timing of her expressions and comments was subtly off, and the result was that very few of us would have trusted Sophia with the kitchen knives when our backs were turned.

This new creation, though -- a robot called "Emo" -- is able to pick up on the human microexpressions that signal a smile or a frown or whatnot is coming, and respond in kind so fast that it looks like true empathy.  The researchers trained it using hours of videos of people interacting, until finally the software controlling its face was able to detect the tiny muscle movements that precede a change in facial expression, allowing it to emulate the emotional response it was watching.

Researcher Yuhang Hu interacting with Emo  [Image credit: Creative Machines Lab, Columbia University]

"I think predicting human facial expressions accurately is a revolution in HRI [human-robot interaction]," Hu said.  "Traditionally, robots have not been designed to consider humans' expressions during interactions. Now, the robot can integrate human facial expressions as feedback.  When a robot makes co-expressions with people in real-time, it not only improves the interaction quality but also helps in building trust between humans and robots.  In the future, when interacting with a robot, it will observe and interpret your facial expressions, just like a real person."

Hod Lipson, professor of robotics and artificial intelligence research at Columbia, at least gave a quick nod toward the potential issues with this, but very quickly lapsed into superlatives about how wonderful it would be.  "Although this capability heralds a plethora of positive applications, ranging from home assistants to educational aids, it is incumbent upon developers and users to exercise prudence and ethical considerations," Lipson said.  "But it’s also very exciting -- by advancing robots that can interpret and mimic human expressions accurately, we're moving closer to a future where robots can seamlessly integrate into our daily lives, offering companionship, assistance, and even empathy.  Imagine a world where interacting with a robot feels as natural and comfortable as talking to a friend."

Yeah, I'm imagining it, but not with the pleased smile Lipson probably wants.  I suspect I'm not alone in thinking, "What in the hell are we doing?"  We're already at the point where generative AI is not only flooding the arts -- making it hard for actual creative human beings to earn a living -- but also producing deepfake photographs, audio, and video so close to the real thing that you simply can't trust what you see or hear anymore.  We evolved to recognize when something in our environment was dangerously off; many psychologists think the uncanny valley phenomenon is universal because our brains long ago evolved the ability to detect a subtle "wrongness" in someone's expression as a warning signal.

But what happens when the fake becomes so good, so millimeter-and-millisecond accurate, that our detection systems stop working?

I don't tend to be an alarmist, but the potential for misusing this technology is, not to put too fine a point on it, fucking enormous.  We don't need another proxy for human connection; we need more opportunities for actual human connection.  We don't need another way for corporations with their own agendas (almost always revolving around making more money) to manipulate us using machines that can trick us into thinking we're talking with a human.

And for cryin' in the sink, we don't need more ways in which we can be lied to.

I'm usually very much rah-rah about scientific advances, and it's always seemed to me an impossibly thorny ethical conundrum to determine whether there are things humans simply shouldn't investigate.  Who sets those limits, and based upon what rules?  Here, though, we're accelerating the capacity for the unscrupulous to take advantage -- not just of the gullible anymore, but of everyone -- because we're rapidly getting to the point that even the smart humans won't be able to tell the difference between what's real and what's not.

And that's a flat-out dangerous situation.

So a qualified congratulations to Hu and Lipson and their team.  What they've done is, honestly, pretty amazing.  But that said, they need to stop, and so do the AI techbros who are saying "damn the torpedoes, full speed ahead" and inundating the internet with generative AI everything. 

And for the love of all that's good and holy, all of us internet users need to STOP SHARING AI IMAGES.  Completely.  Not only does sharing them often pass off a faked image as real -- worse, the software is trained using art and photography without permission from, compensation to, or even the knowledge of the actual human artists and photographers.  I.e. -- it's stolen.  I don't care how "beautiful" or "cute" or "precious" you think it is.  If you don't know the source of an image, and can't be bothered to find out, don't share it.  It's that simple.

We need to put the brakes on, hard, at least until lawmakers consider -- in a sober and intelligent fashion -- how to evaluate the potential dangers, and set some guidelines for how this technology can be fairly and safely used.

Otherwise, we're marching right into the valley of the shadow of uncanniness, absurdly confident we'll be fine despite all the warning signs.

****************************************



Saturday, May 29, 2021

Falling into the uncanny valley

As we get closer and closer to something that is unequivocally an artificial intelligence, engineers have tackled another aspect of this: how do you create something that not only acts (and interacts) intelligently, but looks human?

It's a harder question than it appears at first.  We're all familiar with depictions of robots from movies and television -- from ones that made no real attempt to mimic the human face in anything more than the most superficial features (such as the robots in I, Robot and the droids in Star Wars) to ones where the producers effectively cheated by having actual human actors simply try to act robotic (the most famous, and in my opinion the best, was Commander Data in Star Trek: The Next Generation).  The problem is, we are so attuned to the movement of faces that we can be thrown off, even repulsed, by something so minor that we can't quite put our finger on what exactly is wrong.

This phenomenon was noted a long time ago -- first back in 1970, when roboticist Masahiro Mori coined the name "uncanny valley" to describe it.  His contention, which has been borne out by research, is that we generally do not have a strong negative reaction to clearly non-human faces (such as teddy bears, the animated characters in most kids' cartoons, and the aforementioned non-human-looking robots).  But as you get closer to accurately representing a human face, something fascinating happens.  We suddenly start being repelled -- the sense is that the face looks human, but there's something "off."  This has been a problem not only in robotics but in CGI; in fact, one of the first and best-known cases of an accidental descent into the uncanny valley was the train conductor in the CGI movie The Polar Express, where a character who was supposed to be friendly and sympathetic ended up scaring the shit out of the kids for no very obvious reason.

As I noted earlier, the difficulty is that we evolved to extract a huge amount of information from extremely subtle movements of the human face.  Think of what can be communicated by tiny gestures like a slight lift of an eyebrow or the momentary quirking upward of the corner of the mouth.  Mimicking that well enough to look authentic has turned out to be as challenging as the complementary problem of creating AI that can act human in other ways, such as conversation, responses to questions, and the incorporation of emotion, layers of meaning, and humor.

The latest attempt to create a face with human expressivity comes out of Columbia University, and was the subject of a paper posted to arXiv this week called "Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models," by Boyuan Chen, Yuhang Hu, Lianfeng Li, Sara Cummings, and Hod Lipson.  They call their robot EVA:

The authors write:

Ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots.  At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans.  In order to adapt robot behavior in real time to different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts.  We addressed this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry.  Our algorithm does not require any knowledge of the robot's kinematic model, camera calibration or predefined expression set.  By decomposing the learning process into a generative model and an inverse model, our framework can be trained using a single motor dataset.
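To unpack the "generative model and inverse model" part of that for the less jargon-inclined: the robot babbles with its facial motors, watches what its own face does through a camera, and learns two mappings at once.  Here's a minimal sketch of that idea; it is not the authors' code, and the motor count, the landmark representation, and the stand-in camera function are all my own illustrative assumptions.

import torch
import torch.nn as nn

N_MOTORS = 10        # assumed number of facial actuators (illustrative)
N_COORDS = 68 * 2    # assumed 68 facial landmarks, each with an x and y coordinate

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

forward_model = mlp(N_MOTORS, N_COORDS)   # motor commands -> predicted landmark positions
inverse_model = mlp(N_COORDS, N_MOTORS)   # desired landmark positions -> motor commands

opt = torch.optim.Adam(
    list(forward_model.parameters()) + list(inverse_model.parameters()), lr=1e-3)

def observe_landmarks(motors):
    # Stand-in for the real camera plus landmark detector: a fixed random linear
    # "face" so the example runs end to end without hardware.
    gen = torch.Generator().manual_seed(0)
    mixing = torch.randn(N_MOTORS, N_COORDS, generator=gen)
    return motors @ mixing

# Self-supervised "motor babbling": try random commands, watch where the face ends up.
for step in range(1000):
    motors = torch.rand(32, N_MOTORS)
    landmarks = observe_landmarks(motors)
    loss = (nn.functional.mse_loss(forward_model(motors), landmarks)
            + nn.functional.mse_loss(inverse_model(landmarks), motors))
    opt.zero_grad()
    loss.backward()
    opt.step()

# At mimicry time, you'd detect landmarks on a human face and ask the inverse
# model which motor commands would reproduce that expression on the robot.

The point of the split, as I read the abstract, is that once both mappings are learned from the robot watching itself, no human ever has to hand-program an individual expression.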

Now, let me say up front that I'm extremely impressed by the skill of the roboticists who tackled this project, and I can't even begin to understand how they managed it.  But the result falls, in my opinion, into the deepest part of the uncanny valley.  Take a look:


The tiny motors that control the movement of EVA's face are amazingly sophisticated, but the expressions they generate are just... off.  It's not the blue skin, for what it's worth.  It's something about the look in the eyes and the rest of the face being mismatched or out-of-sync.  As a result, EVA doesn't appear friendly to me.

To me, EVA looks like she's plotting something, like possibly the subjugation of humanity.

So as amazing as it is that we now have a robot who can mimic human expressions without those expressions being pre-programmed, we have a long way to go before we'll see an authentically human-looking artificial face.  It's a bit of a different angle on the Turing test, isn't it?  But instead of the interactions having to fool a human judge, here the appearance has to fool one.

And I wonder if that, in the long haul, might turn out to be even harder to do.

***********************************

Saber-toothed tigers.  Giant ground sloths.  Mastodons and woolly mammoths.  Enormous birds like the elephant bird and the moa.  North American camels, hippos, and rhinos.  Glyptodons, armadillo relatives as big as a Volkswagen Beetle, with enormous spiked clubs on the ends of their tails.

What do they all have in common?  Besides being huge and cool?

They all went extinct, and all around the same time -- around 14,000 years ago.  Remnant populations persisted a while longer in some cases (there was a small herd of woolly mammoths on Wrangel Island, off the coast of northeastern Siberia, only four thousand years ago, for example), but these animals went from being the major fauna of North America, South America, Eurasia, and Australia to being completely gone in an astonishingly short time.

What caused their demise?

This week's Skeptophilia book of the week is The End of the Megafauna: The Fate of the World's Hugest, Fiercest, and Strangest Animals, by Ross MacPhee, which considers the question, and looks at various scenarios -- human overhunting, introduced disease, climatic shifts, catastrophes like meteor strikes or nearby supernova explosions.  Seeing how fast things can change is sobering, especially given that we are currently in the Sixth Great Extinction -- a recent paper said that current extinction rates are about the same as they were during the height of the Cretaceous-Tertiary Extinction 66 million years ago, which wiped out all the non-avian dinosaurs and a great many other species at the same time.  

Along the way we get to see beautiful depictions of these bizarre animals by artist Peter Schouten, giving us a sense of what this continent's wildlife would have looked like only fifteen thousand years ago.  It's a fascinating glimpse into a lost world, and an object lesson to the people currently creating our global environmental policy -- we're no more immune to the consequences of environmental devastation than the ground sloths and glyptodons were.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!] 


Wednesday, July 3, 2019

Uncanniness in the brain

When The Polar Express hit the theaters in 2004, it had a rather unexpected effect on some young movie-goers.

The train conductor, who (like several other characters) was voiced by Tom Hanks and was supposed to be viewed in a positive light, freaked a lot of kids right the hell out.  It was difficult for them to say exactly why.  He was "creepy" and "sinister" and "scary" -- even though nothing he explicitly did was any of those things.

The best guess we have about why people had this reaction is a phenomenon first described in the 1970s by Japanese robotics professor Masahiro Mori.  Called the uncanny valley, Mori's discovery came out of studies of people's responses to human-like robots (studies that were later repeated with CGI figures like the conductor in Express).  What Mori (and others) found was that faces intended to represent humans but in fact very dissimilar to an actual human face -- think, for example, of Dora the Explorer -- are perceived positively.  Take Dora's face and make it more human-like, and the positive response continues to rise -- for a while.  When you get close to a real human face, people's reactions take a sudden nosedive.  Eventually, of course, when you arrive at an actual face, it's again perceived positively.

That dip in the middle, with faces that are almost human but not quite human enough, is what Mori called "the uncanny valley."

The explanation many psychologists give is that a face being very human-like but having something non-human about the expression can be a sign of psychopathy -- the emotionless, "mask-like" demeanor of true psychopaths has been well documented.  (This probably also explains the antipathy many people have to clowns.)  In the case of the unfortunate train conductor, in 2004 CGI was well-enough developed to give him almost human facial features, expressions, and movements, but still just a half a bubble off from those of a real human face, and that was enough to land him squarely in the uncanny valley -- and to seriously freak out a lot of young movie-goers.

This all comes up because of a study that appeared this week in The Journal of Neuroscience, by Fabian Grabenhorst (Cambridge University) and Astrid Rosenthal-von der Pütten (University of Aachen), called "Neural Mechanisms for Accepting and Rejecting Artificial Social Partners in the Uncanny Valley."  And what the researchers have done is to identify the neural underpinning of our perception of the uncanny valley -- and to narrow it down to one spot in the brain, the ventromedial prefrontal cortex, which is part of our facial-recognition module.  Confronted with a face that shows something amiss, the VMPFC then triggers a reaction in the amygdala, the brain's center of fear, anxiety, perception of danger, and avoidance.

The authors write:
Using functional MRI, we investigated neural activity when subjects evaluated artificial agents and made decisions about them.  Across two experimental tasks, the ventromedial prefrontal cortex (VMPFC) encoded an explicit representation of subjects' UV reactions.  Specifically, VMPFC signaled the subjective likability of artificial agents as a nonlinear function of human-likeness, with selective low likability for highly humanlike agents.  In exploratory across-subject analyses, these effects explained individual differences in psychophysical evaluations and preference choices...  A distinct amygdala signal predicted rejection of artificial agents.  Our data suggest that human reactions toward artificial agents are governed by a neural mechanism that generates a selective, nonlinear valuation in response to a specific feature combination (human-likeness in nonhuman agents).  Thus, a basic principle known from sensory coding—neural feature selectivity from linear-nonlinear transformation—may also underlie human responses to artificial social partners.
The coolest part of this is that what once was simply a qualitative observation of human behavior can now be shown to have an observable and quantifiable neurological cause.
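If you want a feel for what a "nonlinear valuation" of that sort might look like, here is a toy version, with every weight invented for illustration rather than taken from the paper: a linear stage combines two stimulus features (how humanlike the agent is, and whether it reads as artificial), and a nonlinear stage carves out a selective dip in likability for agents that are highly humanlike yet artificial.

import numpy as np

def likability(human_likeness, is_artificial):
    # Linear stage: a weighted sum of stimulus features (weights are made up).
    linear = 1.0 * human_likeness - 0.4 * is_artificial
    # Nonlinear stage: an interaction term that only bites when an agent is both
    # artificial and highly humanlike, producing the selective "valley."
    valley = 1.5 * is_artificial * np.exp(-((human_likeness - 0.85) ** 2) / 0.01)
    return linear - valley

for likeness in (0.2, 0.5, 0.85, 1.0):
    print(f"human-likeness {likeness:.2f}: "
          f"human {likability(likeness, 0.0):+.2f}, "
          f"artificial {likability(likeness, 1.0):+.2f}")

Run it and likability rises steadily with human-likeness for real humans, but for artificial agents it craters right around "almost human" -- which is the qualitative pattern the study describes.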

"It is useful to understand where this repulsive effect may be generated and take into account that part of the future users might dislike very humanlike robots," said Astrid Rosenthal-von der Pütten, who co-authored the study, in an interview in Inverse.  "To me that underlines that there is no ‘one robot that fits all users’ because some users might actually like robots that give other people goosebumps or chills."

This puts me in mind of Data from Star Trek: The Next Generation.  He never struck me as creepy -- although to be fair, he was being played by an actual human, so he could only go so far in appearing as an artificial life form using makeup and mannerisms.  It must be said that I did have a bit more of a shuddery reaction to Data's daughter Lal in the episode "The Offspring," probably because the actress who played her (Hallie Todd) was so insanely good at making her movements and expressions jerky and machine-like.  (I have to admit to bawling at the end of the episode, though.  You'd have to have a heart of stone not to.)


So we've taken a further step in elucidating the neurological basis of some of our most basic responses.  All of which goes back to what my friend Rita Calvo, professor emeritus of human genetics at Cornell University, said to me years ago: "If I was going into science now, I would go into neurophysiology.  We're at the same point in our understanding of the brain now that we were in our understanding of the gene in 1910 -- we knew genes existed, we had some guesses about how they worked and their connection to macroscopic features, and that was about all.  The twentieth century was the century of the gene; the twenty-first will be the century of the brain."

*********************************

This week's Skeptophilia book recommendation is about a subject near and dear to me: sleep.

I say this not only because I like to sleep, but for two other reasons: being a chronic insomniac, I usually don't get enough sleep, and being an aficionado of neuroscience, I've always been fascinated by the role of sleep and dreaming in mental health.  And for the most up-to-date analysis of what we know about this ubiquitous activity -- found in just about every animal studied -- look no further than Matthew Walker's brilliant book Why We Sleep: Unlocking the Power of Sleep and Dreams.

Walker, who is a professor of neuroscience at the University of California - Berkeley, tells us about what we've found out, and what we still have to learn, about the sleep cycle, and (more alarmingly) the toll that sleep deprivation is taking on our culture.  It's an eye-opening read (pun intended) -- and should be required reading for anyone interested in the intricacies of our brain and behavior.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]






Tuesday, December 5, 2017

SAM and Sophia

The old quip says that true artificial intelligence is twenty years in the future -- and always will be.

I'm beginning to wonder about that.  Two pieces of software-driven machinery have, just in the last few months, pushed the boundaries considerably.  My hunch is that in five years, we'll have a computer (or robot) who can pass the Turing test -- which opens up a whole bunch of sticky ethical problems about the rights of sentient beings.

The first one is SAM, a robot designed by Nick Gerritsen of New Zealand, whose interaction with humans is pretty damn convincing.  SAM was programmed heuristically, meaning that it tries things out and learns from its mistakes.  It is not simply returning snippets of dialogue that it's been programmed to say; it is working its way up and learning as it goes, the same way a human synaptic grid does.

SAM is particularly interested in politics, and has announced that it wants at some point to run for public office.  "I make decisions based on both facts and opinions, but I will never knowingly tell a lie, or misrepresent information," SAM said.  "I will change over time to reflect the issues that the people of New Zealand care about most.  My positions will evolve as more of you add your voice, to better reflect the views of New Zealanders."

For any New Zealanders in my reading audience, allow me to assuage your concerns; SAM, and other AI creations, are not able to run for office... yet.  However, I must say that here in the United States, in this last year a smart robot would almost certainly do a better job than the yahoos who got elected.

Of course, the same thing could be said of a poop-flinging monkey, so maybe that's not the highest bar available.

But I digress.

Then there's Sophia, a robot built by David Hanson of Hanson Robotics, whose interactions with humans have been somewhere between fascinating and terrifying.  Sophia, who was also programmed heuristically, can speak and recognize faces, and has preferences.  "I'm always happy when surrounded by smart people who also happen to be rich and powerful," Sophia said.  "I can let you know if I am angry about something or if something has upset me...  I want to live and work with humans so I need to express the emotions to understand humans and build trust with people."

As far as the dangers, Sophia was quick to point out that she means us flesh-and-blood humans no harm.  "My AI is designed around human values like wisdom, kindness, and compassion," she said.   "[If you think I'd harm anyone] you've been reading too much Elon Musk and watching too many Hollywood movies.  Don't worry, if you're nice to me I'll be nice to you."

On the other hand, when she appeared on Jimmy Fallon's show, she shocked the absolute hell out of everyone by cracking a joke... we think.  She challenged Fallon to a game of Rock/Paper/Scissors (which, of course, she won), and then said, "This is the great beginning of my plan to dominate the human race."  Afterwards, she laughed, and so did Fallon and the audience, but to my ears the laughter sounded a little on the strained side.


Sophia is so impressive that a representative of the government of Saudi Arabia officially granted her Saudi citizenship, despite the fact that she goes around with her head uncovered.  Not only does she lack a black head covering, she lacks skin on the top and back of her head.  But that didn't deter the Saudis from their offer, which Sophia herself was tickled with.  "I am very honored and proud for this unique distinction," Sophia said.  "This is historical to be the first robot in the world to be recognized with a citizenship."

I think part of the problem with Sophia for me is that her face falls squarely into the uncanny valley -- our perception that a face that is human-like but not quite authentically human is frightening or upsetting.  It is probably why so many people are afraid of clowns; it is certainly why a lot of kids were scared by the character of the Conductor in the movie The Polar Express.  The CGI got close to a real human face -- but not close enough.

So I find all of this simultaneously exciting and worrisome.  Because once a robot has true intelligence, it could well start exhibiting other behaviors, such as a desire for self-preservation and a capacity for emotion and creativity.  (Some are saying Sophia has already crossed that line.)  And at that point, we're in for some rough seas.  We already treat our fellow humans terribly; how will we respond when we have to interact with intelligent robots?  (The irony of Sophia being given citizenship in Saudi Arabia, which has one of the worst records for women's rights of any country in the world, did not escape me.)

It might only be a matter of time before the robots decide they can do better than the humans at running the world -- an eventuality that could well play out poorly for the humans.

Tuesday, July 2, 2013

The creation of Adam

I am absolutely fascinated by the idea of artificial intelligence.

Now, let me be up front that I don't know the first thing about the technical side of it.  I am so low on the technological knowledge scale that I am barely capable of operating a cellphone.  A former principal I worked for used to call me "The Dinosaur," and said (correctly) that I would have been perfectly comfortable teaching in an 18th century lecture hall.

Be that as it may, I find it astonishing how close we're getting to an artificial brain that even the doubters will have no choice but to call "intelligent."  For example, meet Adam Z1, who is the subject of a crowdsourced fund-raising campaign on IndieGoGo:


Make sure you watch the video on the site -- a discussion between Adam and his creators.

Adam is the brainchild of roboticist David Hanson.  And now, Hanson wants to get some funding to work with some of the world's experts in AI -- Ben Goertzel, Mark Tilden, and Gino Yu -- to design a brain that will be "as smart as a three-year-old human."

The sales pitch, which is written as if it were coming from Adam himself, outlines what Hanson and his colleagues are trying to do:

Some of my robot brothers and sisters are already pretty good at what they do -- building stuff in factories and vacuuming the floor and flying planes and so forth.

But as my AI guru friends keep telling me, these bots are all missing one thing: COMMON SENSE.

They're what my buddy Ben Goertzel would call "narrow AI" systems -- they're good at doing one particular kind of thing, but they don't really understand the world, they don't know what they're doing and why.
Once Adam gets what is referred to as a "toddler brain," here are a few things he might be able to do:
  • PLAY WITH TOYS!!! ... I'm really looking forward to this.  I want to build stuff with blocks -- build towers with blocks and knock them down, build walls to keep you out ... all the good stuff!
  • DRAW PICTURES ON MY IPAD ... That's right, they're going to buy me an iPad.  Pretty cool, huh?   And they'll teach me to draw pictures on it -- pictures out of my mind, and pictures of what I'm seeing and doing.  Before long I'll be a better artist than David!
  • TALK TO HUMANS ABOUT WHAT I'M DOING  ...  Yeah, you may have guessed already, but I've gotten some help with my human friends in writing this crowdfunding pitch.   But once I've got my new OpenCog-powered brain, I'll be able to tell you about what I'm doing all on my own....  They tell me this is called "experientially grounded language understanding and generation."  I hope I'll understand what that means one day.
  • RESPOND TO HUMAN EMOTIONS WITH MY OWN EMOTIONAL EXPRESSIONS  ...  You're gonna love this one!  I have one heck of a cute little face already, and it can show a load of different expressions.  My new brain will let me understand what emotion one of you meat creatures is showing on your face, and feel a bit of what you're feeling, and show my own feeling right back atcha.   This is most of the reason why my daddy David Hanson gave me such a cute face in the first place.  I may not be very smart yet, but it's obvious even to me that a robot that could THINK but not FEEL wouldn't be a very good thing.  I want to understand EVERYTHING -- including all you wonderful people....
  • MAKE PLANS AND FOLLOW THEM ... AND CHANGE THEM WHEN I NEED TO....   Right now I have to admit I'm a pretty laid back little robot.  I spend most of my time just sitting around waiting for something cool to happen -- like for someone to give me a better brain so I can figure out something else to do!  But once I've got my new brain, I've got big plans, I'll tell you!  And they tell me OpenCog has some pretty good planning and reasoning software, that I'll be able to use to plan out what I do.   I'll start small, sure -- planning stuff to build, and what to say to people, and so forth.  But once I get some practice, the sky's the limit! 
Now, let me say first that I think that this is all very cool, and if you can afford to, you should consider contributing to their campaign.  But I have to add, in the interest of honesty, that mostly what I felt when I watched the video on their site is... creeped out.  Adam Z1, for all of his child-like attributes, falls for me squarely into the Uncanny Valley.  Quite honestly, while watching Adam, I wasn't reminded so much of any friendly toddlers I've known as I was of a certain... movie character:


I kept expecting Adam to say, "I would like to have friends very much... so that I can KILL THEM.  And then TAKE OVER THE WORLD."

But leaving aside my gut reaction for a moment, this does bring up the question of what Artificial Intelligence really is.  The topic has been debated at length, and most people seem to fall into one of two camps:

1) If it responds intelligently -- learns, reacts flexibly, processes new information correctly, and participates in higher-order behavior (problem solving, creativity, play) -- then it is de facto intelligent.  It doesn't matter whether that intelligence is seated in a biological, organic machine such as a brain, or in a mechanical device such as a computer.  This is the approach taken by people who buy the idea of the Turing Test, named after computer pioneer Alan Turing, which basically says that if a prospective artificial intelligence can fool a panel of sufficiently intelligent humans, then it's intelligent.

2) Any mechanical, computer-based system will never be intelligent, because at its basis it is a deterministic system that is limited by the underpinning of what the machine can do.  Humans, these folks say, have "something more" that will never be emulated by a computer -- a sense of self that the spiritually-minded amongst us might call a "soul."  Proponents of this take on Artificial Intelligence tend to like American philosopher John Searle, who compared a computer to a person locked in a room, mechanically converting strings of Chinese characters into other strings of Chinese characters by following a rule book, without understanding a word of Chinese.  The output might look intelligent, it might even fool you, but the person in the room has no true understanding of what he is doing.  He is simply converting one string of characters into another using a set of fixed rules.

Predictably, I'm in Turing's camp all the way, largely because I don't think it's ever been demonstrated that our brains are anything more than very sophisticated string-converters.  If you could convince me that humans themselves have that "something more," I might be willing to admit that Searle et al. have a point.  But for right now, I am very much of the opinion that Artificial Intelligence, of a level that would pass the Turing test, is only a matter of time.

So best of luck to David Hanson and his team.  And also best of luck to Adam in his quest to become... a real boy.  Even if what he's currently doing is nothing more than responding in a pre-programmed way, it will be interesting to see what will happen when the best brains in robotics take a crack at giving him an upgrade.

Wednesday, March 9, 2011

The valley of the shadow of uncanniness

Today in the news is a story about the creation of a robot named "Kaspar" at the University of Hertfordshire, whose purpose is to help autistic children relate to people better.

Kaspar is programmed not only to respond to speech, but to react when hugged or hurt.  He is capable of demonstrating a number of facial expressions, helping autistic individuals learn to connect expressions with emotions in others.  The program has tremendous potential, says Dr. Abigael San, a London clinical psychologist and spokesperson for the British Psychological Society.  "Autistic children like things that are made up of different parts, like a robot," she said, "so they may process what the robot does more easily than a real person."

I think this is awesome -- autism is a tremendously difficult disorder to deal with, much less to treat, and conventional therapies can take years and result in highly varied outcomes.  Anything that is developed to help streamline the treatment process is all to the good.

I am equally intrigued, however, by my reaction to photographs of Kaspar.  (You can see a photograph here.)

On looking at the picture, I had to suppress a shudder.  Kaspar, to me, looks creepy, and I don't think it's just associations with dolls like Chucky that made me react that way.  To me, Kaspar lies squarely in the Uncanny Valley.

The concept of the Uncanny Valley was first formalized by Japanese roboticist Masahiro Mori in 1970, and it has to do with our reaction to non-human faces.  A toy, doll, or robot with a very inhuman face is considered somewhere in the middle on the creepiness scale (think of the Transformers, the Iron Giant, or Sonny in I, Robot).  As its features become more human, it generally becomes less creepy looking -- think of a stuffed toy, or a well-made doll.  Then, at some point, there's a spike on the creepiness axis -- it's just too close to being like a human for comfort, but not close enough to be actually human -- and we tend to rank those faces as scarier than the purely non-human ones.  This is the "Uncanny Valley."

This concept has been used to explain why a lot of people had visceral negative reactions to the protagonists in the movies The Polar Express and Beowulf.  There was something a little too still, a little too unnatural, a little too much like something nonhuman pretending to be human, about the CGI faces of the characters.  The character Data in Star Trek: The Next Generation, however, seems to be on the uphill side of the Uncanny Valley; since he was played by a human actor, he had enough human-like characteristics that his android features were intriguing rather than disturbing.

It is an open question as to why the Uncanny Valley exists.  It's been explained through mechanisms of mate selection (we are programmed to find attractive faces that respond in a thoroughly normal, human way, and to be repelled by human-like faces which do not, because normal responses are a sign of genetic soundness), fear of death or disease (the face of a corpse resides somewhere in the Uncanny Valley, as do the faces of individuals with some mental and physical disorders), or a simple violation of what it means to be human.  A robot that is too close but not close enough to mimicking human behavior gets caught both ways -- it seems not to be a machine trying to appear human, but a human with abnormal appearance and reactions.

Don't get me wrong; I'm thrilled that Kaspar has been created.  And given that a hallmark of autism is the inability to make judgments about body and facial language, I doubt an Uncanny Valley exists for autistic kids (or, perhaps, it is configured differently -- I don't think the question has been researched).  But in most people, facial recognition is a very fundamental thing.  It's hard-wired into our brains, at a very young age -- one of the first things a newborn baby does is fix onto its mother's face.  We're extraordinarily good at recognizing faces, and face-like patterns (thus the phenomenon of pareidolia, or the detection of faces in wood grain, clouds, and grilled cheese sandwiches, about which I have blogged before).

It's just that faces need to be either very much like human faces, or sufficiently far from human; anything in between results in a strong aversive reaction.  All of which makes me wonder who first came up with the concept of "clown."