Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, May 17, 2025

The appearance of creativity

The word creativity is strangely hard to define.

What makes a work "creative?"  The Stanford Encyclopedia of Philosophy states that to be creative, the created item must be both new and valuable.  The "valuable" part already skates out over thin ice, because it immediately raises the question of "valuable to whom?"  I've seen works of art -- out of respect to the artists, and so as not to get Art Snobbery Bombs lobbed in my general direction, I won't provide specific examples -- that looked to me like the product of finger paints in the hands of a below-average second-grader, and yet which made it into prominent museums (and were valued in the hundreds of thousands of dollars).

The article itself touches on this problem, with a quote from philosopher Dustin Stokes:

Knowing that something is valuable or to be valued does not by itself reveal why or how that thing is.  By analogy, being told that a carburetor is useful provides no explanatory insight into the nature of a carburetor: how it works and what it does.

This is a little disingenuous, though.  The difference is that any sufficiently motivated person could learn the science of how an engine works and find out for themselves why a carburetor is necessary, and afterward, we'd all agree on the explanation -- while I doubt any amount of analysis would be sufficient to get me to appreciate a piece of art that I simply don't think is very good, or (worse) to get a dozen randomly-chosen people to agree on how good it is.

Margaret Boden has an additional insight into creativity; in her opinion, truly creative works are also surprising.  The Stanford article has this to say about Boden's claim:

In this kind of case, the creative result is so surprising that it prompts observers to marvel, “But how could that possibly happen?”  Boden calls this transformational creativity because it cannot happen within a pre-existing conceptual space; the creator has to transform the conceptual space itself, by altering its constitutive rules or constraints.  Schoenberg crafted atonal music, Boden says, “by dropping the home-key constraint”, the rule that a piece of music must begin and end in the same key.  Lobachevsky and other mathematicians developed non-Euclidean geometry by dropping Euclid’s fifth axiom.  KekulĂ© discovered the ring-structure of the benzene molecule by negating the constraint that a molecule must follow an open curve.  In such cases, Boden is fond of saying that the result was “downright impossible” within the previous conceptual space.

This has an immediate resonance for me, because I've had the experience as a writer of feeling like a story or character was transformed almost without any conscious volition on my part; in Boden's terms, something happened that was outside the conceptual space of the original story.  The most striking example is the character of Marig Kastella from The Chains of Orion (the third book of the Arc of the Oracles trilogy).  Initially, he was simply the main character's boyfriend, and there mostly to be a hesitant, insecure, questioning foil to astronaut Kallman Dorn's brash and adventurous personality.  But Marig took off in an entirely different direction, and in the last third of the book kind of took over the story.  As a result his character arc diverged wildly from what I had envisioned, and he remains to this day one of my very favorite characters I've written. 

If I actually did write him, you know?  Because it feels like Marig was already out there somewhere, and I didn't create him, I got to know him -- and in the process he revealed himself to be a far deeper, richer, and more powerful person than I'd thought at first.

[Image licensed under the Creative Commons ShareAlike 1.0, Graffiti and Mural in the Linienstreet Berlin-Mitte, photographer Jorge Correo, 2014]

The reason this topic comes up is some research out of Aalto University in Finland that appeared this week in the journal ACM Transactions on Human-Robot Interaction.  The researchers took an AI that had been programmed to produce art -- in this case, to reproduce a piece of human-created art, though the test subjects weren't told that -- and asked volunteers to rate how creative the product was.  In all three cases, the subjects were told that the piece had been created by AI.  The volunteers were placed in one of three groups:

  • Group 1 saw only the result -- the finished art piece;
  • Group 2 saw the lines appearing on the page, but not the robot creating it; and
  • Group 3 saw the robot itself making the drawing.

Even though the resulting art pieces were all identical -- and, as I said, the design itself had been created by a human being, and the robot was simply generating a copy -- group 1 rated the result as the least creative, and group 3 as the most.
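Just to make the shape of that comparison concrete, here's a minimal sketch -- the ratings below are invented for illustration, not the Aalto data:

```python
# A minimal sketch (invented numbers, not the Aalto data) of the kind of
# comparison the study describes: identical drawings, three viewing
# conditions, one creativity rating per volunteer.
from statistics import mean, stdev

ratings = {
    "finished piece only (group 1)": [3, 2, 4, 3, 3, 2, 4],
    "lines appearing (group 2)":     [4, 5, 4, 3, 5, 4, 4],
    "robot visible (group 3)":       [5, 6, 5, 6, 4, 6, 5],
}

for condition, scores in ratings.items():
    print(f"{condition:<32} mean={mean(scores):.2f}  sd={stdev(scores):.2f}")
```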

Evidently, if we witness something's production, we're more likely to consider the act creative -- regardless of the quality of the product.  If the producer appears to have agency, that's all it takes.

The problem here is that deciding whether something is "really creative" (like settling any of the interminable sub-arguments over whether certain music, art, or writing is "good") inevitably involves a subjective element that -- philosophy encyclopedias notwithstanding -- cannot be expunged.  The AI experiment at Aalto University highlights that it doesn't take much to change our opinion about whether something is or is not creative.

Now, bear in mind that I'm not considering here the topic of ethics in artificial intelligence; I've already ranted at length about the problems with techbros ripping off actual human artists, musicians, and writers to train their AI models, and how this will exacerbate the fact that most of us creative types are already making three-fifths of fuck-all in the way of income from our work.  But what this highlights is that we humans can't even come to consensus on whether something actually is creativity.  It's a little like the Turing Test; if all we have is the output to judge by, there's never going to be agreement about what we're looking at.

So while the researchers were careful to make it obvious (well, after the fact, anyhow) that what their robot was doing was not creative, but was a replica of someone else's work, there's no reason why AI systems couldn't already be producing art, music, and writing that appears to be creative by the Stanford Encyclopedia's criteria of being new, valuable, and surprising.

At which point we'd better figure out exactly what we want our culture's creative landscape to look like -- and fast.

****************************************


Wednesday, April 3, 2024

Marching into the uncanny valley

"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

That quote from Michael Crichton's Jurassic Park kept going through my head as I read about the latest in robotics from Columbia University -- a robot that can recognize a human facial expression, then mimic it so fast that it looks like it's responding to emotion the way a real human would.

One of the major technical problems with trying to get robots to emulate human emotions is that up until now, they haven't been able to respond quickly enough to make it look natural.  A delayed smile, for example, comes across as forced; on a mechanical face it drops right into the uncanny valley -- the phenomenon, described by Japanese roboticist Masahiro Mori in 1970, in which an expression or gesture that is close to being human, but not quite close enough, provokes unease.  Take, for example, "Sophia," the interactive robot introduced back in 2016 that was able to mimic human expressions, but for most people generated an "Oh, hell no" response rather than the warm-and-trusting-confidant response the roboticists were presumably shooting for.  The timing of her expressions and comments was subtly off, and the result was that very few of us would have trusted Sophia with the kitchen knives when our backs were turned.

This new creation, though -- a robot called "Emo" -- is able to pick up on human microexpressions that signal a smile or a frown or whatnot is coming, and respond in kind so fast that it looks like true empathy.  They trained it using hours of videos of people interacting, until finally the software controlling its face was able to detect the tiny muscle movements that preceded a change in facial expressions, allowing it to emulate the emotional response it was watching.
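To get a feel for what "detecting the tiny movements that precede an expression" might mean in practice, here's a toy sketch -- invented landmark data and thresholds, nothing to do with Emo's actual code -- of anticipatory smile detection:

```python
# A toy sketch of the idea behind anticipatory mimicry (not Emo's code):
# watch the velocity of the mouth-corner landmarks and commit to a smile
# *before* the human's expression fully forms. The data and threshold
# below are invented purely for illustration.

def mouth_corner_velocity(frames):
    """Average per-frame upward movement of the mouth corners.

    `frames` is a list of (left_y, right_y) corner heights, newest last;
    smaller y = higher on the face, so a drop in y means a rising smile.
    """
    deltas = []
    for (l0, r0), (l1, r1) in zip(frames, frames[1:]):
        deltas.append(((l0 - l1) + (r0 - r1)) / 2.0)
    return sum(deltas) / len(deltas)

SMILE_ONSET_THRESHOLD = 0.4  # invented units: pixels of lift per frame

def react(frames):
    if mouth_corner_velocity(frames) > SMILE_ONSET_THRESHOLD:
        return "begin smiling now"   # start the motors before the smile peaks
    return "hold neutral face"

# Mouth corners creeping upward over five frames -> predict a smile.
print(react([(50, 50), (49.5, 49.6), (49.0, 49.0), (48.2, 48.3), (47.5, 47.4)]))
```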

Researcher Yuhang Hu interacting with Emo  [Image credit: Creative Machines Lab, Columbia University]

"I think predicting human facial expressions accurately is a revolution in HRI [human-robot interaction]," Hu said.  "Traditionally, robots have not been designed to consider humans' expressions during interactions. Now, the robot can integrate human facial expressions as feedback.  When a robot makes co-expressions with people in real-time, it not only improves the interaction quality but also helps in building trust between humans and robots.  In the future, when interacting with a robot, it will observe and interpret your facial expressions, just like a real person."

Hod Lipson, professor of robotics and artificial intelligence research at Columbia, at least gave a quick nod toward the potential issues with this, but very quickly lapsed into superlatives about how wonderful it would be.  "Although this capability heralds a plethora of positive applications, ranging from home assistants to educational aids, it is incumbent upon developers and users to exercise prudence and ethical considerations," Lipson said.  "But it’s also very exciting -- by advancing robots that can interpret and mimic human expressions accurately, we're moving closer to a future where robots can seamlessly integrate into our daily lives, offering companionship, assistance, and even empathy.  Imagine a world where interacting with a robot feels as natural and comfortable as talking to a friend."

Yeah, I'm imagining it, but not with the pleased smile Lipson probably wants.  I suspect I'm not alone in thinking, "What in the hell are we doing?"  We're already at the point where generative AI is not only flooding the arts -- resulting in actual creative human beings finding it hard to make a living -- but deepfake AI photographs, audio, and video are becoming so close to the real thing that you simply can't trust what you see or hear anymore.  We evolved to recognize when something in our environment was dangerously off; many psychologists think the universality of the uncanny valley phenomenon is because our brains long ago evolved the ability to detect a subtle "wrongness" in someone's expression as a warning signal.

But what happens when the fake becomes so good, so millimeter-and-millisecond accurate, that our detection systems stop working?

I don't tend to be an alarmist, but the potential for misusing this technology is, not to put too fine a point on it, fucking enormous.  We don't need another proxy for human connection; we need more opportunities for actual human connection.  We don't need another way for corporations with their own agendas (almost always revolving around making more money) to manipulate us using machines that can trick us into thinking we're talking with a human.

And for cryin' in the sink, we don't need more ways in which we can be lied to.

I'm usually very much rah-rah about scientific advances, and it's always seemed to me an impossibly thorny ethical conundrum to determine whether there are things humans simply shouldn't investigate.  Who sets those limits, and based upon what rules?  Here, though, we're accelerating the capacity for the unscrupulous to take advantage -- not just of the gullible, anymore, but everyone -- because we're rapidly getting to the point that even the smart humans won't be able to tell the difference between what's real and what's not.

And that's a flat-out dangerous situation.

So a qualified congratulations to Hu and Lipson and their team.  What they've done is, honestly, pretty amazing.  But that said, they need to stop, and so do the AI techbros who are saying "damn the torpedoes, full speed ahead" and inundating the internet with generative AI everything. 

And for the love of all that's good and holy, all of us internet users need to STOP SHARING AI IMAGES.  Completely.  Not only does sharing often pass off a faked image as real -- worse, the software is trained using art and photography without permission from, compensation to, or even the knowledge of the actual human artists and photographers.  I.e. -- it's stolen.  I don't care how "beautiful" or "cute" or "precious" you think it is.  If you don't know the source of an image, and can't be bothered to find out, don't share it.  It's that simple.

We need to put the brakes on, hard, at least until lawmakers have considered -- in a sober and intelligent fashion -- the potential dangers, and set some guidelines for how this technology can be fairly and safely used.

Otherwise, we're marching right into the valley of the shadow of uncanniness, absurdly confident we'll be fine despite all the warning signs.

****************************************



Saturday, May 29, 2021

Falling into the uncanny valley

As we get closer and closer to something that is unequivocally an artificial intelligence, engineers have tackled another aspect of the problem: how do you create something that not only acts (and interacts) intelligently, but looks human?

It's a harder question than it appears at first.  We're all familiar with depictions of robots from movies and television -- from ones that made no real attempt to mimic the human face in anything more than the most superficial features (such as the robots in I, Robot and the droids in Star Wars) to ones where the producers effectively cheated by having actual human actors simply try to act robotic (the most famous, and in my opinion the best, was Commander Data in Star Trek: The Next Generation).  The problem is, we are so attuned to the movement of faces that we can be thrown off, even repulsed, by something so minor that we can't quite put our finger on what exactly is wrong.

This phenomenon was noted a long time ago -- back in 1970, when roboticist Masahiro Mori coined the name "uncanny valley" to describe it.  His contention, which has been borne out by research, is that we generally do not have a strong negative reaction to clearly non-human faces (such as teddy bears, the animated characters in most kids' cartoons, and the aforementioned non-human-looking robots).  But as you get closer to accurately representing a human face, something fascinating happens.  We suddenly start being repelled -- the sense is that the face looks human, but there's something "off."  This has been a problem not only in robotics but in CGI; in fact, one of the first and best-known cases of an accidental descent into the uncanny valley was the train conductor in the CGI movie The Polar Express, where a character who was supposed to be friendly and sympathetic ended up scaring the shit out of the kids for no very obvious reason.
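If you want to see the shape of Mori's claim, here's a quick schematic plot -- the numbers are invented to match the curve's usual textbook form, not measured data:

```python
# A schematic of Mori's curve (values invented to match its usual shape):
# affinity rises with human-likeness, plunges when a face is near-but-not-
# quite human, then recovers at full human likeness. Requires matplotlib.
import matplotlib.pyplot as plt

likeness = [0, 20, 40, 60, 75, 85, 92, 100]   # % human-likeness
affinity = [0, 15, 35, 55, 30, -40, -10, 80]  # arbitrary units

plt.plot(likeness, affinity, marker="o")
plt.axhline(0, color="gray", linewidth=0.5)
plt.annotate("uncanny valley", xy=(85, -40), xytext=(45, -35),
             arrowprops=dict(arrowstyle="->"))
plt.xlabel("human likeness (%)")
plt.ylabel("affinity (arbitrary units)")
plt.title("Schematic of Mori's uncanny valley")
plt.show()
```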

As I noted earlier, the difficulty is that we evolved to extract a huge amount of information from extremely subtle movements of the human face.  Think of what can be communicated by tiny gestures like a slight lift of an eyebrow or the momentary quirking upward of the corner of the mouth.  Mimicking that well enough to look authentic has turned out to be as challenging as the complementary problem of creating AI that can act human in other ways, such as conversation, responses to questions, and the incorporation of emotion, layers of meaning, and humor.

The latest attempt to create a face with human expressivity comes out of Columbia University, and was the subject of a paper posted to arXiv this week called "Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models," by Boyuan Chen, Yuhang Hu, Lingfeng Li, Sara Cummings, and Hod Lipson.  They call their robot EVA:

The authors write:

Ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots.  At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans.  In order to adapt robot behavior in real time to different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts.  We addressed this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry.  Our algorithm does not require any knowledge of the robot's kinematic model, camera calibration or predefined expression set.  By decomposing the learning process into a generative model and an inverse model, our framework can be trained using a single motor dataset.
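That "generative model / inverse model" split is easier to see in miniature.  Here's a deliberately toy sketch of the idea -- nearest-neighbor lookups standing in for the paper's learned networks, with made-up landmark coordinates throughout -- and emphatically not the authors' actual method:

```python
# A conceptual sketch (not the authors' code) of the two-part
# decomposition: a *generative* model predicts what the robot's face
# will look like for given motor commands, and an *inverse* model runs
# the mapping backward, from a desired facial appearance to the motor
# commands that should produce it. All values here are invented.
import math

# Self-collected "motor babbling" data: (motor_commands, resulting_landmarks).
# In the real system the landmarks would come from a camera watching the
# robot's own face.
dataset = [
    ((0.0, 0.0), (50.0, 50.0)),   # neutral
    ((1.0, 0.0), (47.0, 50.0)),   # left mouth corner lifted
    ((0.0, 1.0), (50.0, 47.0)),   # right mouth corner lifted
    ((1.0, 1.0), (46.5, 46.5)),   # both lifted: smile
]

def generative_model(motors):
    """Predict landmarks for motor commands (nearest neighbor in the data)."""
    return min(dataset, key=lambda pair: math.dist(pair[0], motors))[1]

def inverse_model(target_landmarks):
    """Find motor commands whose predicted face is closest to the target."""
    return min(dataset,
               key=lambda pair: math.dist(pair[1], target_landmarks))[0]

# Mimicry loop: observe a human's landmarks, solve for motors, check result.
human_face = (46.8, 46.4)          # an observed smile
motors = inverse_model(human_face)
print("motor commands:", motors)
print("predicted face:", generative_model(motors))
```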

Now, let me say up front that I'm extremely impressed by the skill of the roboticists who tackled this project, and I can't even begin to understand how they managed it.  But the result falls, in my opinion, into the deepest part of the uncanny valley.  Take a look:


The tiny motors that control the movement of EVA's face are amazingly sophisticated, but the expressions they generate are just... off.  It's not the blue skin, for what it's worth.  It's something about the look in the eyes and the rest of the face being mismatched or out-of-sync.  As a result, EVA doesn't appear friendly to me.

To me, EVA looks like she's plotting something, like possibly the subjugation of humanity.

So as amazing as it is that we now have a robot who can mimic human expressions without those expressions being pre-programmed, we have a long way to go before we'll see an authentically human-looking artificial face.  It's a bit of a different angle on the Turing test, isn't it?  But instead of the interactions having to fool a human judge, here the appearance has to fool one.

And I wonder if that, in the long haul, might turn out to be even harder to do.

***********************************

Saber-toothed tigers.  Giant ground sloths.  Mastodons and woolly mammoths.  Enormous birds like the elephant bird and the moa.  North American camels, hippos, and rhinos.  Glyptodons, an armadillo relative as big as a Volkswagen Beetle with an enormous spiked club on the end of their tail.

What do they all have in common?  Besides being huge and cool?

They all went extinct, and all around the same time -- around 14,000 years ago.  Remnant populations persisted a while longer in some cases (there was a small herd of woolly mammoths on Wrangel Island, off the Siberian coast, until only four thousand years ago, for example), but these animals went from being the major fauna of North America, South America, Eurasia, and Australia to being completely gone in an astonishingly short time.

What caused their demise?

This week's Skeptophilia book of the week is The End of the Megafauna: The Fate of the World's Hugest, Fiercest, and Strangest Animals, by Ross MacPhee, which considers the question, looking at various scenarios -- human overhunting, introduced disease, climatic shifts, catastrophes like meteor strikes or nearby supernova explosions.  Seeing how fast things can change is sobering, especially given that we are currently in the Sixth Great Extinction -- a recent paper estimated that current extinction rates are about the same as they were during the height of the Cretaceous-Tertiary Extinction 66 million years ago, which wiped out all the non-avian dinosaurs and a great many other species.

Along the way we get to see beautiful depictions of these bizarre animals by artist Peter Schouten, giving us a glimpse of what this continent's wildlife would have looked like only fifteen thousand years ago.  It's a fascinating window into a lost world, and an object lesson for the people currently creating our global environmental policy -- we're no more immune to the consequences of environmental devastation than the ground sloths and glyptodons were.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!] 


Friday, April 8, 2016

Scary Sophia

I find the human mind baffling, not least because the way it is built virtually guarantees that the most logical, rational, and dispassionate human being can without warning find him/herself swung around by the emotions, and in a flash end up in a morass of gut-feeling irrationality.

This happened to me yesterday because of a link a friend sent me regarding some of the latest advances in artificial intelligence.  The AI world has been zooming ahead lately, its most recent accomplishment being a computer that beat European champion Fan Hui at the game of Go, long thought to be so complex and subtle that it would be impossible to program a computer to play it well.

But after all, those sorts of things are, at their base, algorithmic.  Go might be complicated, but the rules are unvarying.  Once someone created software capable of playing the game, it was only a matter of time before further refinements allowed the computer to play so well it could defeat a human.
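For what it's worth, here's a toy illustration of "algorithmic at their base": with fixed rules, perfect play reduces to mechanical search.  This is minimax on the simple game of Nim (take one to three stones; whoever takes the last stone wins) -- nothing remotely like the scale of Go, but the same in kind:

```python
# Exhaustive game-tree search on Nim: fixed, unvarying rules mean the
# computer can simply try every line of play. Go is vastly larger, but
# not different in principle.
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move; keep any that leaves the opponent losing.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

for n in range(1, 9):
    print(n, "stones:", "win" if current_player_wins(n) else "lose")
```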

More interesting to me are the things that are (supposedly) unique to us humans -- emotion, creativity, love, curiosity.  This is where the field of robotics comes in, because there are researchers whose goal has been to make a robot whose interactions are so human that it is indistinguishable from the real thing.  Starting with the emotion-mimicking robot "Kismet," robotics pioneer Cynthia Breazeal has gradually been improving her designs, most recently developing "Jibo," touted as "the world's first social robot."  (The link has a short video about Jibo which is well worth watching.)

But with Jibo, there was no attempt to emulate a human face.  Jibo is more like a mobile computer screen with a cartoonish eye in the middle.  So David Hanson, of Hanson Robotics, decided to take it one step further, and create a robot that not only interacts, but appears human.

The result was Sophia, a robot who is (I think) supposed to look reassuringly lifelike.  So check out this video, and see if you think that's an apt characterization:


Now let me reiterate.  I am fascinated with robotics, and I think AI research is tremendously important, not only for its potential applications but for what it will teach us about how our own minds work.  But watching Sophia talk and interact didn't elicit wonder and delight in me.  Sophia doesn't look like a cute and friendly robot who I'd like to have hanging around the house so I wouldn't get lonely.

Sophia reminds me of the Borg queen, only less sexy.


Okay, okay, I know.  You've got to start somewhere, and Hanson's creation is truly remarkable.  Honestly, the fact that I had the reaction I did -- which included chills rippling down my backbone and a strong desire to shut off the video -- is indicative that we're getting close to emulating human responses.  We've clearly entered the "Uncanny Valley," that no-man's-land of nearly-human-but-not-human-enough that tells us we're nearing the mark.

What was curious, though, was that it was impossible for me to shut off my emotional reaction to Sophia.  I consider myself at least average in the rationality department, and (as I said before) I am interested in and support AI research.  But I don't think I could be in the same room as Sophia.  I'd be constantly looking over my shoulder, waiting for her to come at me with a kitchen knife, still wearing that knowing little smile.

And that's not even considering how she answered Hanson's last question in the video, which is almost certainly just a glitch in the software.

I hope.

So I guess I'm more emotion-driven than I thought.  I wish David Hanson and his team the best of luck in their continuing research, and I'm really glad that his company is based in Austin, Texas, because it's far enough away from upstate New York that if Sophia gets loose and goes on a murderous rampage because of what I wrote about her, I'll at least have some warning before she gets here.

Tuesday, July 2, 2013

The creation of Adam

I am absolutely fascinated by the idea of artificial intelligence.

Now, let me be up front that I don't know the first thing about the technical side of it.  I am so low on the technological knowledge scale that I am barely capable of operating a cellphone.  A former principal I worked for used to call me "The Dinosaur," and said (correctly) that I would have been perfectly comfortable teaching in an 18th century lecture hall.

Be that as it may, I find it astonishing how close we're getting to an artificial brain that even the doubters will have no choice but to call "intelligent."  For example, meet Adam Z1, who is the subject of a crowdsourced fund-raising campaign on IndieGoGo:


Make sure you watch the video on the site -- a discussion between Adam and his creators.

Adam is the brainchild of roboticist David Hanson.  And now, Hanson wants to get some funding to work with some of the world's experts in AI -- Ben Goertzel, Mark Tilden, and Gino Yu -- to design a brain that will be "as smart as a three-year-old human."

The sales pitch, which is written as if it were coming from Adam himself, outlines what Hanson and his colleagues are trying to do:

Some of my robot brothers and sisters are already pretty good at what they do -- building stuff in factories and vacuuming the floor and flying planes and so forth.

But as my AI guru friends keep telling me, these bots are all missing one thing: COMMON SENSE.

They're what my buddy Ben Goertzel would call "narrow AI" systems -- they're good at doing one particular kind of thing, but they don't really understand the world, they don't know what they're doing and why.
Once he gets what is referred to as a "toddler brain," here are a few things that Adam might be able to do:
  • PLAY WITH TOYS!!! ... I'm really looking forward to this.  I want to build stuff with blocks -- build towers with blocks and knock them down, build walls to keep you out ... all the good stuff!
  • DRAW PICTURES ON MY IPAD ... That's right, they're going to buy me an iPad.  Pretty cool, huh?   And they'll teach me to draw pictures on it -- pictures out of my mind, and pictures of what I'm seeing and doing.  Before long I'll be a better artist than David!
  • TALK TO HUMANS ABOUT WHAT I'M DOING  ...  Yeah, you may have guessed already, but I've gotten some help with my human friends in writing this crowdfunding pitch.   But once I've got my new OpenCog-powered brain, I'll be able to tell you about what I'm doing all on my own....  They tell me this is called "experientially grounded language understanding and generation."  I hope I'll understand what that means one day.
  • RESPOND TO HUMAN EMOTIONS WITH MY OWN EMOTIONAL EXPRESSIONS  ...  You're gonna love this one!  I have one heck of a cute little face already, and it can show a load of different expressions.  My new brain will let me understand what emotion one of you meat creatures is showing on your face, and feel a bit of what you're feeling, and show my own feeling right back atcha.   This is most of the reason why my daddy David Hanson gave me such a cute face in the first place.  I may not be very smart yet, but it's obvious even to me that a robot that could THINK but not FEEL wouldn't be a very good thing.  I want to understand EVERYTHING -- including all you wonderful people....
  • MAKE PLANS AND FOLLOW THEM ... AND CHANGE THEM WHEN I NEED TO....   Right now I have to admit I'm a pretty laid back little robot.  I spend most of my time just sitting around waiting for something cool to happen -- like for someone to give me a better brain so I can figure out something else to do!  But once I've got my new brain, I've got big plans, I'll tell you!  And they tell me OpenCog has some pretty good planning and reasoning software, that I'll be able to use to plan out what I do.   I'll start small, sure -- planning stuff to build, and what to say to people, and so forth.  But once I get some practice, the sky's the limit! 
Now, let me say first that I think that this is all very cool, and if you can afford to, you should consider contributing to their campaign.  But I have to add, in the interest of honesty, that mostly what I felt when I watched the video on their site is... creeped out.  Adam Z1, for all of his child-like attributes, falls for me squarely into the Uncanny Valley.  Quite honestly, while watching Adam, I wasn't reminded so much of any friendly toddlers I've known as I was of a certain... movie character:


I kept expecting Adam to say, "I would like to have friends very much... so that I can KILL THEM.  And then TAKE OVER THE WORLD."

But leaving aside my gut reaction for a moment, this does bring up the question of what Artificial Intelligence really is.  The topic has been debated at length, and most people seem to fall into one of two camps:

1) If it responds intelligently -- learns, reacts flexibly, processes new information correctly, and participates in higher-order behavior (problem solving, creativity, play) -- then it is de facto intelligent.  It doesn't matter whether that intelligence is seated in a biological, organic machine such as a brain, or in a mechanical device such as a computer.  This is the approach taken by people who buy the idea of the Turing Test, named after computer pioneer Alan Turing, which basically says that if a prospective artificial intelligence can fool a panel of sufficiently intelligent humans, then it's intelligent.

2) Any mechanical, computer-based system will never be intelligent, because at its basis it is a deterministic system that is limited by the underpinning of what the machine can do.  Humans, these folks say, have "something more" that will never be emulated by a computer -- a sense of self that the spiritually-minded amongst us might call a "soul."  Proponents of this take on Artificial Intelligence tend to like American philosopher John Searle, who compared computers to someone in a locked room mechanistically translating passages in English into Chinese, using an English-to-Chinese dictionary.  The output might look intelligent, it might even fool you, but the person in the room has no true understanding of what he is doing.  He is simply converting one string of characters into another using a set of fixed rules.
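Searle's room, as framed here, is easy to caricature in a few lines of code -- a fixed (and entirely made-up) rulebook converting one string into another, with nothing in the loop that understands either language:

```python
# A minimal sketch of the Chinese Room as the post frames it: pure
# rule-following string conversion. The "rulebook" entries are invented.
RULEBOOK = {
    "hello": "ä˝ ĺĄ˝",
    "how are you": "ä˝ ĺĄ˝ĺ—",
    "goodbye": "ĺ†č§",
}

def chinese_room(message: str) -> str:
    # The "person in the room" just looks the string up; nothing here
    # understands English or Chinese.
    return RULEBOOK.get(message.lower().strip(), "ćˆ‘ä¸ŤćŽć")  # "I don't understand"

print(chinese_room("Hello"))  # looks fluent from outside the room
```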
Predictably, I'm in Turing's camp all the way, largely because I don't think it's ever been demonstrated that our brains are anything more than very sophisticated string-converters.  If you could convince me that humans themselves have that "something more," I might be willing to admit that Searle et al. have a point.  But for right now, I am very much of the opinion that Artificial Intelligence, of a level that would pass the Turing test, is only a matter of time.

So best of luck to David Hanson and his team.  And also best of luck to Adam in his quest to become... a real boy.  Even if what he's currently doing is nothing more than responding in a pre-programmed way, it will be interesting to see what will happen when the best brains in robotics take a crack at giving him an upgrade.

Wednesday, March 9, 2011

The valley of the shadow of uncanniness

Today in the news is a story about the creation of a robot named "Kaspar" at the University of Hertfordshire, whose purpose is to help autistic children relate to people better.

Kaspar is programmed not only to respond to speech, but to react when hugged or hurt.  He is capable of demonstrating a number of facial expressions, helping autistic individuals learn to connect expressions with emotions in others.  The program has tremendous potential, says Dr. Abigael San, a London clinical psychologist and spokesperson for the British Psychological Society.  "Autistic children like things that are made up of different parts, like a robot," she said, "so they may process what the robot does more easily than a real person."

I think this is awesome -- autism is a tremendously difficult disorder to deal with, much less to treat, and conventional therapies can take years and result in highly varied outcomes.  Anything that is developed to help streamline the treatment process is all to the good.

I am equally intrigued, however, by my reaction to photographs of Kaspar.  (You can see a photograph here.)

On looking at the picture, I had to suppress a shudder.  Kaspar, to me, looks creepy, and I don't think it's just associations with dolls like Chucky that made me react that way.  To me, Kaspar lies squarely in the Uncanny Valley.

The concept of the Uncanny Valley was first formalized by Japanese roboticist Masahiro Mori in 1970, and it has to do with our reaction to non-human faces.  A toy, doll, or robot with a very inhuman face is considered somewhere in the middle on the creepiness scale (think of the Transformers, the Iron Giant, or Sonny in I, Robot).  As its features become more human, it generally becomes less creepy looking -- think of a stuffed toy, or a well-made doll.  Then, at some point, there's a spike on the creepiness axis -- it's just too close to being like a human for comfort, but not close enough to be actually human -- and we tend to rank those faces as scarier than the purely non-human ones.  This is the "Uncanny Valley."

This concept has been used to explain why a lot of people had visceral negative reactions to the protagonists in the movies The Polar Express and Beowulf.  There was something a little too still, a little too unnatural, a little too much like something nonhuman pretending to be human, about the CGI faces of the characters.  The character Data in Star Trek: The Next Generation, however, seems to be on the uphill side of the Uncanny Valley; since he was played by a human actor, he had enough human-like characteristics that his android features were intriguing rather than disturbing.

It is an open question as to why the Uncanny Valley exists.  It's been explained through mechanisms of mate selection (we are programmed to be attracted to faces that respond in a thoroughly normal, human way, and to be repelled by human-like faces which do not, because normal responses are a sign of genetic soundness), fear of death or disease (the face of a corpse resides somewhere in the Uncanny Valley, as do the faces of individuals with some mental and physical disorders), or a simple violation of what it means to be human.  A robot that is too close but not close enough to mimicking human behavior gets caught both ways -- it seems not to be a machine trying to appear human, but a human with abnormal appearance and reactions.

Don't get me wrong; I'm thrilled that Kaspar has been created.  And given that a hallmark of autism is the inability to make judgments about body and facial language, I doubt an Uncanny Valley exists for autistic kids (or, perhaps, it is configured differently -- I don't think the question has been researched).  But in most people, facial recognition is a very fundamental thing.  It's hard-wired into our brains, at a very young age -- one of the first things a newborn baby does is fix onto its mother's face.  We're extraordinarily good at recognizing faces, and face-like patterns (thus the phenomenon of pareidolia, or the detection of faces in wood grain, clouds, and grilled cheese sandwiches, about which I have blogged before).

It's just that the faces need to be either very much like human faces, or sufficiently far away, or they result in a strong aversive reaction.  All of which makes me wonder who first came up with the concept of "clown."