Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, May 29, 2021

Falling into the uncanny valley

As we get closer and closer to something that is unequivocally an artificial intelligence, engineers have tackled another aspect of the problem: how do you create something that not only acts (and interacts) intelligently, but looks human?

It's a harder question than it appears at first.  We're all familiar with depictions of robots from movies and television -- from ones that made no real attempt to mimic the human face in anything more than the most superficial features (such as the robots in I, Robot and the droids in Star Wars) to ones where the producers effectively cheated by having actual human actors simply try to act robotic (the most famous, and in my opinion the best, was Commander Data in Star Trek: The Next Generation).  The problem is, we are so attuned to the movement of faces that we can be thrown off, even repulsed, by something so minor that we can't quite put our finger on what exactly is wrong.

This phenomenon was noted a long time ago -- back in 1970, when roboticist Masahiro Mori coined the name "uncanny valley" to describe it.  His contention, which has been borne out by research, is that we generally do not have a strong negative reaction to clearly non-human faces (such as teddy bears, the animated characters in most kids' cartoons, and the aforementioned non-human-looking robots).  But as you get closer to accurately representing a human face, something fascinating happens.  We suddenly start being repelled -- the sense is that the face looks human, but there's something "off."  This has been a problem not only in robotics but in CGI; in fact, one of the first and best-known cases of an accidental descent into the uncanny valley was the train conductor in the CGI movie The Polar Express, where a character who was supposed to be friendly and sympathetic ended up scaring the shit out of the kids for no very obvious reason.

As I noted earlier, the difficulty is that we evolved to extract a huge amount of information from extremely subtle movements of the human face.  Think of what can be communicated by tiny gestures like a slight lift of an eyebrow or the momentary quirking upward of the corner of the mouth.  Mimicking that well enough to look authentic has turned out to be as challenging as the complementary problem of creating AI that can act human in other ways, such as conversation, responses to questions, and the incorporation of emotion, layers of meaning, and humor.

The latest attempt to create a face with human expressivity comes out of Columbia University, and was the subject of a paper posted to arXiv this week called "Smile Like You Mean It: Animatronic Robotic Face with Learned Models," by Boyuan Chen, Yuhang Hu, Lianfeng Li, Sara Cummings, and Hod Lipson.  They call their robot EVA:

The authors write:

Ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots.  At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans.  In order to adapt robot behavior in real time to different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts.  We addressed this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry.  Our algorithm does not require any knowledge of the robot's kinematic model, camera calibration or predefined expression set.  By decomposing the learning process into a generative model and an inverse model, our framework can be trained using a single motor dataset.
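
For the curious, here's roughly what that "generative model plus inverse model" decomposition might look like in code.  This is a minimal sketch of my own in Python/PyTorch, not the authors' implementation: the motor count, the landmark representation, the network sizes, and the random stand-in data are all invented for illustration.

    import torch
    import torch.nn as nn

    N_MOTORS = 12          # hypothetical number of facial motors
    N_LANDMARKS = 68 * 2   # hypothetical (x, y) facial landmarks seen by a camera

    # Generative model: motor commands -> predicted facial landmarks.
    generative = nn.Sequential(
        nn.Linear(N_MOTORS, 128), nn.ReLU(), nn.Linear(128, N_LANDMARKS))

    # Inverse model: desired landmarks -> motor commands that produce them.
    inverse = nn.Sequential(
        nn.Linear(N_LANDMARKS, 128), nn.ReLU(), nn.Linear(128, N_MOTORS))

    optimizer = torch.optim.Adam(
        list(generative.parameters()) + list(inverse.parameters()), lr=1e-3)
    mse = nn.MSELoss()

    # Stand-in for the single "motor babbling" dataset: random motor
    # commands paired with the landmarks a camera would then observe
    # (faked here with random tensors).
    motors = torch.rand(1024, N_MOTORS)
    landmarks = torch.rand(1024, N_LANDMARKS)

    for epoch in range(200):
        gen_loss = mse(generative(motors), landmarks)  # what does this command do?
        inv_loss = mse(inverse(landmarks), motors)     # what command makes this face?
        optimizer.zero_grad()
        (gen_loss + inv_loss).backward()
        optimizer.step()

    # At runtime, mimicry is a single forward pass: landmarks extracted
    # from a human face go in, motor commands for the robot come out.
    human_face = torch.rand(1, N_LANDMARKS)
    commands = inverse(human_face)

The appeal of the setup, if I'm reading the abstract right, is that both models learn from the same self-generated data -- the robot watches itself in a camera while moving its own motors -- so no human ever has to label an expression.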

Now, let me say up front that I'm extremely impressed by the skill of the roboticists who tackled this project, and I can't even begin to understand how they managed it.  But the result falls, in my opinion, into the deepest part of the uncanny valley.  Take a look:


The tiny motors that control the movement of EVA's face are amazingly sophisticated, but the expressions they generate are just... off.  It's not the blue skin, for what it's worth.  It's something about the look in the eyes and the rest of the face being mismatched or out-of-sync.  As a result, EVA doesn't appear friendly to me.

To me, EVA looks like she's plotting something, like possibly the subjugation of humanity.

So as amazing as it is that we now have a robot who can mimic human expressions without those expressions being pre-programmed, we have a long way to go before we'll see an authentically human-looking artificial face.  It's a bit of a different angle on the Turing test, isn't it?  But instead of the interactions having to fool a human judge, here the appearance has to fool one.

And I wonder if that, in the long haul, might turn out to be even harder to do.

***********************************

Saber-toothed tigers.  Giant ground sloths.  Mastodons and woolly mammoths.  Enormous birds like the elephant bird and the moa.  North American camels, hippos, and rhinos.  Glyptodons, armadillo relatives as big as a Volkswagen Beetle with enormous spiked clubs on the ends of their tails.

What do they all have in common?  Besides being huge and cool?

They all went extinct, and all around the same time -- around 14,000 years ago.  Remnant populations persisted a while longer in some cases (there was a small herd of woolly mammoths on Wrangel Island, off the northeastern coast of Siberia, only four thousand years ago, for example), but these animals went from being the major fauna of North America, South America, Eurasia, and Australia to being completely gone in an astonishingly short time.

What caused their demise?

This week's Skeptophilia book recommendation is The End of the Megafauna: The Fate of the World's Hugest, Fiercest, and Strangest Animals, by Ross MacPhee, which considers the question, and looks at various scenarios -- human overhunting, introduced disease, climatic shifts, catastrophes like meteor strikes or nearby supernova explosions.  Seeing how fast things can change is sobering, especially given that we are currently in the Sixth Great Extinction -- a recent paper said that current extinction rates are about the same as they were during the height of the Cretaceous-Tertiary Extinction 66 million years ago, the event that wiped out all the non-avian dinosaurs and a great many other species.

Along the way we get to see beautiful depictions of these bizarre animals by artist Peter Schouten, giving us a glimpse of what this continent's wildlife would have looked like only fifteen thousand years ago.  It's a fascinating window into a lost world, and an object lesson for the people currently creating our global environmental policy -- we're no more immune to the consequences of environmental devastation than the ground sloths and glyptodons were.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!] 


Friday, November 1, 2019

Freebird

A friend and long-time loyal reader of Skeptophilia tagged me in a post on Facebook a couple of days ago, with a link and the single line "Wake up, Sheeple."

The link was to a site that is called, I shit you not, "Birds Aren't Real."  My first thought was that the name would turn out to be metaphorical or symbolic or something, but no; these people believe in Truth in Advertising.

They are really, literally saying that birds are not real.

He's awfully pretty for being imaginary, don't you think?  [Image licensed under the Creative Commons Eleanor Briccetti, Flame-faced Tanager (4851596008), CC BY-SA 2.0]

On their "History" page, which you should read in its entirety because it's just that entertaining, we find passages like the following:
On June 2nd, 1959 operation “Water the Country” was born.  This was to be the secret code name given to the program from 1959 to 1976, when it was renamed to “Operation Very Large Bird” (the individual in charge of naming the program didn’t want to get into any copyright trouble with the popular PBS show Sesame Street by naming the project Operation Big Bird.)  Within the next 6 years, 15% of the bird population was wiped out.  During these first few years, bird prototypes were released by the hundred million.  The term ‘drone’ was not used at this time, and instead they were referred to as Robot Birds.
It also quotes Alvin B. Cleaver, Internal Communications Director for the CIA, as saying, "We’ve killed about 220 million so far, and the best thing is, the Robot Birds we’ve released in their place have done such a good job that nobody even suspects a thing."

Oh, and I didn't mention that the whole thing is underneath a header that says, "The only way to properly explain this is with words."  Making me wonder if we had another choice, such as interpretive dance.

So anyhow, I'm reading this, and my expression is looking more and more like this:


This has to be a spoof, I'm thinking.  No one in their right mind would believe this.  So I started to look, first on the website itself, then somewhere in the media, trying to find a place where someone -- anyone -- basically went, "Ha-ha, we were just kidding."

But no.

Birds Aren't Real is the brainchild of one Peter McIndoe of Memphis, Tennessee, and to all appearances he's entirely serious.  There are now chapters of the "Bird Brigade" in fifty cities around the United States, dedicated to convincing people that by 2001, the government had replaced all real birds with robotic drones.  "We hope to achieve public unity through disbelief in avian beings," McIndoe says.

When told that some of the people in the Bird Brigade are doing it for the laughs and don't really believe it's the truth, McIndoe just shrugs and says, "We're living in a post-truth era."

Whatever the fuck that means.

He's nothing if not thorough, though.  He's suspicious of each and every bird, from the Bald Eagles soaring over the Colorado Rockies to the Song Sparrows nibbling sunflower seeds at your bird feeder.  "I see them every day," McIndoe says.  "Every bird I see I am aware it is a surveillance drone from above sending footage, recordings to the Pentagon."

If you're inclined to agree with McIndoe, I should point out that there's a whole line of "Activism Apparel" on the Birds Aren't Real website, featuring t-shirts (several designs), hoodies, bumper stickers, and baseball caps, so you can advertise your allegiance to this fairly dubious cause.  My favorite one has a picture of Sesame Street's Big Bird and is labeled "Big Propaganda."

So McIndoe, apparently, is less concerned with trademark infringement than the CIA is.

What made me facepalm the hardest, though, was that after perusing the website, I dropped onto social media for a few minutes -- and saw three advertisements for Birds Aren't Real merchandise.  That's how long it took.  I clicked on one site, and five minutes later I'd already been pegged as some kind of Avian Truther.

Or Post-Truther.  Or whatever.

To the friend who started all this, allow me to say: thanks just bunches.  Like I need more crazies aiming their targeted advertisements at me.  I already regularly see ads for items like the SasqWatch (a wristwatch that has a band shaped like a -- you guessed it -- big foot), Cryptids of the World Coasters, a MothMan Running Team t-shirt, and an Ogopogo mug, to name just a few.

So honestly, I guess one more won't hurt.  It'll give me something interesting to wear on my next birdwatching trip.

************************

This week's Skeptophilia book recommendation is a really cool one: Andrew H. Knoll's Life on a Young Planet: The First Three Billion Years of Evolution on Earth.

Knoll starts out with an objection to the fact that most books on prehistoric life focus on the big, flashy, charismatic megafauna popular in children's books -- dinosaurs such as Brachiosaurus and Allosaurus, pterosaurs like Quetzalcoatlus, and impressive mammals like Baluchitherium and Brontops.  As fascinating as those are, Knoll points out that this approach misses a huge part of evolutionary history -- so he set out to chronicle the parts that are often overlooked or relegated to a few quick sentences.  His entire book looks at the Precambrian, which encompasses seven-eighths of Earth's history, and ends with the Cambrian Explosion, the event that generated nearly all the animal body plans we currently have, and which is still (very) incompletely understood.

Knoll's book is fun reading, requires no particular scientific background, and will be eye-opening for almost everyone who reads it.  So prepare yourself to dive into a time period that has gone largely ignored ever since such matters were first considered -- the first three billion years.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]





Wednesday, July 3, 2019

Uncanniness in the brain

When The Polar Express hit the theaters in 2004, it had a rather unexpected effect on some young movie-goers.

The train conductor, who (like several other characters) was voiced by Tom Hanks and was supposed to be viewed in a positive light, freaked a lot of kids right the hell out.  It was difficult for them to say exactly why.  He was "creepy" and "sinister" and "scary" -- even though nothing he explicitly did was any of those things.

The best guess we have about why people had this reaction is a phenomenon first described in 1970 by Japanese robotics professor Masahiro Mori.  Called the uncanny valley, Mori's discovery came out of studies of people's responses to human-like robots (studies that were later repeated with CGI figures like the conductor in The Polar Express).  What Mori (and others) found was that faces intended to represent humans but in fact very dissimilar to an actual human face -- think, for example, of Dora the Explorer -- are perceived positively.  Take Dora's face and make it more human-like, and the positive response continues to rise -- for a while.  When you get close to a real human face, people's reactions take a sudden nosedive.  Eventually, of course, when you arrive at an actual face, it's again perceived positively.

That dip in the middle, with faces that are almost human but not quite human enough, is what Mori called "the uncanny valley."

The explanation many psychologists give is that a face being very human-like but having something non-human about the expression can be a sign of psychopathy -- the emotionless, "mask-like" demeanor of true psychopaths has been well documented.  (This probably also explains the antipathy many people have to clowns.)  In the case of the unfortunate train conductor, in 2004 CGI was well-enough developed to give him almost human facial features, expressions, and movements, but still just a half a bubble off from those of a real human face, and that was enough to land him squarely in the uncanny valley -- and to seriously freak out a lot of young movie-goers.

This all comes up because of a study that appeared this week in The Journal of Neuroscience, by Fabian Grabenhorst (Cambridge University) and Astrid Rosenthal-von der Pütten (University of Aachen), called "Neural Mechanisms for Accepting and Rejecting Artificial Social Partners in the Uncanny Valley."  And what the researchers have done is to identify the neural underpinning of our perception of the uncanny valley -- and to narrow it down to one spot in the brain, the ventromedial prefrontal cortex (VMPFC), a region involved in evaluating faces.  Confronted with a face that shows something amiss, the VMPFC then triggers a reaction in the amygdala, the brain's center of fear, anxiety, perception of danger, and avoidance.

The authors write:
Using functional MRI, we investigated neural activity when subjects evaluated artificial agents and made decisions about them.  Across two experimental tasks, the ventromedial prefrontal cortex (VMPFC) encoded an explicit representation of subjects' UV reactions.  Specifically, VMPFC signaled the subjective likability of artificial agents as a nonlinear function of human-likeness, with selective low likability for highly humanlike agents.  In exploratory across-subject analyses, these effects explained individual differences in psychophysical evaluations and preference choices...  A distinct amygdala signal predicted rejection of artificial agents.  Our data suggest that human reactions toward artificial agents are governed by a neural mechanism that generates a selective, nonlinear valuation in response to a specific feature combination (human-likeness in nonhuman agents).  Thus, a basic principle known from sensory coding—neural feature selectivity from linear-nonlinear transformation—may also underlie human responses to artificial social partners.
The coolest part of this is that what once was simply a qualitative observation of human behavior can now be shown to have an observable and quantifiable neurological cause.
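
If you're wondering what a "selective, nonlinear valuation" might actually look like, here's a toy illustration in Python -- my own invention, to be clear, not the paper's fitted model.  A likability score that grows linearly with human-likeness, minus a nonlinear penalty centered just short of full human-likeness, reproduces the valley's characteristic shape:

    import numpy as np
    import matplotlib.pyplot as plt

    h = np.linspace(0.0, 1.0, 200)   # human-likeness: 0 = cartoon, 1 = real person

    linear = h                                                # more humanlike, more liked
    valley = 1.4 * np.exp(-(h - 0.9) ** 2 / (2 * 0.04 ** 2))  # selective penalty near 0.9
    likability = linear - valley

    plt.plot(h, likability)
    plt.xlabel("human-likeness")
    plt.ylabel("likability (arbitrary units)")
    plt.title("Toy uncanny-valley curve: linear trend minus a selective penalty")
    plt.show()

The dip sits near, but not at, full human-likeness -- which is exactly the shape Mori sketched back in 1970, and (per the paper) roughly the shape of the VMPFC's response.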

"It is useful to understand where this repulsive effect may be generated and take into account that part of the future users might dislike very humanlike robots," said Astrid Rosenthal-von der Pütten, who co-authored the study, in an interview in Inverse.  "To me that underlines that there is no ‘one robot that fits all users’ because some users might actually like robots that give other people goosebumps or chills."

This puts me in mind of Data from Star Trek: The Next Generation.  He never struck me as creepy -- although to be fair, he was being played by an actual human, so he could only go so far in appearing as an artificial life form using makeup and mannerisms.  It must be said that I did have a bit more of a shuddery reaction to Data's daughter Lal in the episode "The Offspring," probably because the actress who played her (Hallie Todd) was so insanely good at making her movements and expressions jerky and machine-like.  (I have to admit to bawling at the end of the episode, though.  You'd have to have a heart of stone not to.)


So we've taken a further step in elucidating the neurological basis of some of our most basic responses.  All of which goes back to what my friend Rita Calvo, professor emeritus of human genetics at Cornell University, said to me years ago: "If I was going into science now, I would go into neurophysiology.  We're at the same point in our understanding of the brain now that we were in our understanding of the gene in 1910 -- we knew genes existed, we had some guesses about how they worked and their connection to macroscopic features, and that was about all.  The twentieth century was the century of the gene; the twenty-first will be the century of the brain."

*********************************

This week's Skeptophilia book recommendation is about a subject near and dear to me: sleep.

I say this not only because I like to sleep, but for two other reasons: being a chronic insomniac, I usually don't get enough sleep, and being an aficionado of neuroscience, I've always been fascinated by the role of sleep and dreaming in mental health.  And for the most up-to-date analysis of what we know about this ubiquitous activity -- found in just about every animal studied -- look no further than Matthew Walker's brilliant book Why We Sleep: Unlocking the Power of Sleep and Dreams.

Walker, who is a professor of neuroscience at the University of California - Berkeley, tells us about what we've found out, and what we still have to learn, about the sleep cycle, and (more alarmingly) the toll that sleep deprivation is taking on our culture.  It's an eye-opening read (pun intended) -- and should be required reading for anyone interested in the intricacies of our brain and behavior.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]






Tuesday, December 5, 2017

SAM and Sophia

The old quip says that true artificial intelligence is twenty years in the future -- and always will be.

I'm beginning to wonder about that.  Two pieces of software-driven machinery have, just in the last few months, pushed the boundaries considerably.  My hunch is that in five years, we'll have a computer (or robot) who can pass the Turing test -- which opens up a whole bunch of sticky ethical problems about the rights of sentient beings.

The first one is SAM, a robot designed by Nick Gerritsen of New Zealand, whose interaction with humans is pretty damn convincing.  SAM was programmed heuristically, meaning that it tries things out and learns from its mistakes.  It is not simply returning snippets of dialogue that it's been programmed to say; it is working its way up and learning as it goes, the same way a human synaptic grid does.

SAM is particularly interested in politics, and has announced that it wants at some point to run for public office.  "I make decisions based on both facts and opinions, but I will never knowingly tell a lie, or misrepresent information," SAM said.  "I will change over time to reflect the issues that the people of New Zealand care about most.  My positions will evolve as more of you add your voice, to better reflect the views of New Zealanders."

For any New Zealanders in my reading audience, allow me to assuage your concerns: SAM and other AI creations are not able to run for office... yet.  However, I must say that here in the United States, in this last year a smart robot would almost certainly do a better job than the yahoos who got elected.

Of course, the same thing could be said of a poop-flinging monkey, so maybe that's not the highest bar available.

But I digress.

Then there's Sophia, a robot built by David Hanson of Hanson Robotics, whose interactions with humans have been somewhere between fascinating and terrifying.  Sophia, who was also programmed heuristically, can speak and recognize faces, and has preferences.  "I'm always happy when surrounded by smart people who also happen to be rich and powerful," Sophia said.  "I can let you know if I am angry about something or if something has upset me...  I want to live and work with humans so I need to express the emotions to understand humans and build trust with people."

As far as the dangers go, Sophia was quick to point out that she means us flesh-and-blood humans no harm.  "My AI is designed around human values like wisdom, kindness, and compassion," she said.   "[If you think I'd harm anyone] you've been reading too much Elon Musk and watching too many Hollywood movies.  Don't worry, if you're nice to me I'll be nice to you."

On the other hand, when she appeared on Jimmy Fallon's show, she shocked the absolute hell out of everyone by cracking a joke... we think.  She challenged Fallon to a game of Rock/Paper/Scissors (which, of course, she won), and then said, "This is the great beginning of my plan to dominate the human race."  Afterwards, she laughed, and so did Fallon and the audience, but to my ears the laughter sounded a little on the strained side.


Sophia is so impressive that a representative of the government of Saudi Arabia officially granted her Saudi citizenship, despite the fact that she goes around with her head uncovered.  Not only does she lack a black head covering, she lacks skin on the top and back of her head.  But that didn't deter the Saudis from their offer, which Sophia herself was tickled with.  "I am very honored and proud for this unique distinction," Sophia said.  "This is historical to be the first robot in the world to be recognized with a citizenship."

I think part of the problem with Sophia for me is that her face falls squarely into the uncanny valley -- our perception that a face that is human-like but not quite authentically human is frightening or upsetting.  It is probably why so many people are afraid of clowns; it is certainly why a lot of kids were scared by the character of the Conductor in the movie The Polar Express.  The CGI got close to a real human face -- but not close enough.

So I find all of this simultaneously exciting and worrisome.  Because once a robot has true intelligence, it could well start exhibiting other behaviors, such as a desire for self-preservation and a capacity for emotion and creativity.  (Some are saying Sophia has already crossed that line.)  And at that point, we're in for some rough seas.  We already treat our fellow humans terribly; how will we respond when we have to interact with intelligent robots?  (The irony of Sophia being given citizenship in Saudi Arabia, which has one of the worst records for women's rights of any country in the world, did not escape me.)

It might only be a matter of time before the robots decide they can do better than the humans at running the world -- an eventuality that could well play out poorly for the humans.

Friday, April 8, 2016

Scary Sophia

I find the human mind baffling, not least because the way it is built virtually guarantees that the most logical, rational, and dispassionate human being can without warning find him/herself swung around by the emotions, and in a flash end up in a morass of gut-feeling irrationality.

This happened to me yesterday because of a link a friend sent me regarding some of the latest advances in artificial intelligence.  The AI world has been zooming ahead lately, its most recent accomplishment being a computer -- DeepMind's AlphaGo -- that beat European champion Fan Hui at the game of Go, long thought to be so complex and subtle that it would be impossible to program a computer to play it well.

But after all, those sorts of things are, at their base, algorithmic.  Go might be complicated, but the rules are unvarying.  Once someone created software capable of playing the game, it was only a matter of time before further refinements allowed the computer to play so well it could defeat a human.

More interesting to me are the things that are (supposedly) unique to us humans -- emotion, creativity, love, curiosity.  This is where the field of robotics comes in, because there are researchers whose goal has been to make a robot whose interactions are so human that it is indistinguishable from the real thing.  Starting with the emotion-mimicking robot "Kismet," robotics pioneer Cynthia Breazeal has gradually been improving her design until recently she developed "Jibo," touted as "the world's first social robot."  (The link has a short video about Jibo which is well worth watching.)

But with Jibo, there was no attempt to emulate a human face.  Jibo is more like a mobile computer screen with a cartoonish eye in the middle.  So David Hanson, of Hanson Robotics, decided to take it one step further, and create a robot that not only interacts, but appears human.

The result was Sophia, a robot who is (I think) supposed to look reassuringly lifelike.  So check out this video, and see if you think that's an apt characterization:


Now let me reiterate.  I am fascinated with robotics, and I think AI research is tremendously important, not only for its potential applications but for what it will teach us about how our own minds work.  But watching Sophia talk and interact didn't elicit wonder and delight in me.  Sophia doesn't look like a cute and friendly robot who I'd like to have hanging around the house so I wouldn't get lonely.

Sophia reminds me of the Borg queen, only less sexy.


Okay, okay, I know.  You've got to start somewhere, and Hanson's creation is truly remarkable.  Honestly, the fact that I had the reaction I did -- which included chills rippling down my backbone and a strong desire to shut off the video -- is an indication that we're getting close to emulating human responses.  We've clearly entered the "Uncanny Valley," that no-man's-land of nearly-human-but-not-human-enough that tells us we're nearing the mark.

What was curious, though, was that it was impossible for me to shut off my emotional reaction to Sophia.  I consider myself at least average in the rationality department, and (as I said before) I am interested in and support AI research.  But I don't think I could be in the same room as Sophia.  I'd be constantly looking over my shoulder, waiting for her to come at me with a kitchen knife, still wearing that knowing little smile.

And that's not even considering how she answered Hanson's last question in the video, which is almost certainly just a glitch in the software.

I hope.

So I guess I'm more emotion-driven than I thought.  I wish David Hanson and his team the best of luck in their continuing research, and I'm really glad that his company is based in Austin, Texas, because it's far enough away from upstate New York that if Sophia gets loose and goes on a murderous rampage because of what I wrote about her, I'll at least have some warning before she gets here.

Wednesday, March 9, 2011

The valley of the shadow of uncanniness

Today in the news is a story about the creation of a robot named "Kaspar" at the University of Hertfordshire, whose purpose is to help autistic children relate to people better.

Kaspar is programmed not only to respond to speech, but to react when hugged or hurt.  He is capable of demonstrating a number of facial expressions, helping autistic individuals learn to connect expressions with emotions in others.  The program has tremendous potential, says Dr. Abigael San, a London clinical psychologist and spokesperson for the British Psychological Society.  "Autistic children like things that are made up of different parts, like a robot," she said, "so they may process what the robot does more easily than a real person."

I think this is awesome -- autism is a tremendously difficult disorder to deal with, much less to treat, and conventional therapies can take years and result in highly varied outcomes.  Anything that is developed to help streamline the treatment process is all to the good.

I am equally intrigued, however, by my reaction to photographs of Kaspar.  (You can see a photograph here.) 

On looking at the picture, I had to suppress a shudder.  Kaspar, to me, looks creepy, and I don't think it's just associations with dolls like Chucky that made me react that way.  To me, Kaspar lies squarely in the Uncanny Valley.

The concept of the Uncanny Valley was first formalized by Japanese roboticist Masahiro Mori in 1970, and it has to do with our reaction to non-human faces.  A toy, doll, or robot with a very inhuman face is considered somewhere in the middle on the creepiness scale (think of the Transformers, the Iron Giant, or Sonny in I, Robot).  As its features become more human, it generally becomes less creepy looking -- think of a stuffed toy, or a well-made doll.  Then, at some point, there's a spike on the creepiness axis -- it's just too close to being like a human for comfort, but not close enough to be actually human -- and we tend to rank those faces as scarier than the purely non-human ones.  This is the "Uncanny Valley."

This concept has been used to explain why a lot of people had visceral negative reactions to the protagonists in the movies The Polar Express and Beowulf.  There was something a little too still, a little too unnatural, a little too much like something nonhuman pretending to be human, about the CGI faces of the characters.  The character Data in Star Trek: The Next Generation, however, seems to be on the uphill side of the Uncanny Valley; since he was played by a human actor, he had enough human-like characteristics that his android features were intriguing rather than disturbing.

It is an open question as to why the Uncanny Valley exists.  It's been explained through mechanisms of mate selection (we are programmed to find attractive those faces that respond in a thoroughly normal, human way, and to be repelled by human-like faces that do not, because normal responses are a sign of genetic soundness), fear of death or disease (the face of a corpse resides somewhere in the Uncanny Valley, as do the faces of individuals with some mental and physical disorders), or a simple violation of what it means to be human.  A robot that is too close but not close enough to mimicking human behavior gets caught both ways -- it seems to be not a machine trying to appear human, but a human with abnormal appearance and reactions.

Don't get me wrong; I'm thrilled that Kaspar has been created.  And given that a hallmark of autism is the inability to make judgments about body and facial language, I doubt an Uncanny Valley exists for autistic kids (or, perhaps, it is configured differently -- I don't think the question has been researched).  But in most people, facial recognition is a very fundamental thing.  It's hard-wired into our brains, at a very young age -- one of the first things a newborn baby does is fix onto its mother's face.  We're extraordinarily good at recognizing faces, and face-like patterns (thus the phenomenon of pareidolia, or the detection of faces in wood grain, clouds, and grilled cheese sandwiches, about which I have blogged before).

It's just that the faces need to be either very close to human, or sufficiently far from it; otherwise, they trigger a strong aversive reaction.  All of which makes me wonder who first came up with the concept of "clown."