Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, May 29, 2021

Falling into the uncanny valley

As we get closer and closer to something that is unequivocally an artificial intelligence, engineers have tackled another aspect of the problem: how do you create something that not only acts (and interacts) intelligently, but looks human?

It's a harder question than it appears at first.  We're all familiar with depictions of robots from movies and television -- from ones that made no real attempt to mimic the human face in anything more than the most superficial features (such as the robots in I, Robot and the droids in Star Wars) to ones where the producers effectively cheated by having actual human actors simply try to act robotic (the most famous, and in my opinion the best, was Commander Data in Star Trek: The Next Generation).  The problem is, we are so attuned to the movement of faces that we can be thrown off, even repulsed, by something so minor that we can't quite put our finger on what exactly is wrong.

This phenomenon was noted a long time ago -- first back in 1970, when roboticist Masahiro Mori coined the term "uncanny valley" to describe it.  His contention, which has been borne out by research, is that we generally do not have a strong negative reaction to clearly non-human faces (such as teddy bears, the animated characters in most kids' cartoons, and the aforementioned non-human-looking robots).  But as you get closer to accurately representing a human face, something fascinating happens.  We suddenly start being repelled -- the sense is that the face looks human, but there's something "off."  This has been a problem not only in robotics but in CGI; in fact, one of the first and best-known cases of an accidental descent into the uncanny valley was the train conductor in the CGI movie The Polar Express, where a character who was supposed to be friendly and sympathetic ended up scaring the shit out of the kids for no very obvious reason.

As I noted earlier, the difficulty is that we evolved to extract a huge amount of information from extremely subtle movements of the human face.  Think of what can be communicated by tiny gestures like a slight lift of an eyebrow or the momentary quirking upward of the corner of the mouth.  Mimicking that well enough to look authentic has turned out to be as challenging as the complementary problem of creating AI that can act human in other ways, such as conversation, responses to questions, and the incorporation of emotion, layers of meaning, and humor.

The latest attempt to create a face with human expressivity comes out of Columbia University, and was the subject of a paper posted to arXiv this week called "Smile Like You Mean It: Animatronic Robotic Face with Learned Models," by Boyuan Chen, Yuhang Hu, Lianfeng Li, Sara Cummings, and Hod Lipson.  They call their robot EVA.

The authors write:

Ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots.  At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans.  In order to adapt robot behavior in real time to different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts.  We addressed this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry.  Our algorithm does not require any knowledge of the robot's kinematic model, camera calibration or predefined expression set.  By decomposing the learning process into a generative model and an inverse model, our framework can be trained using a single motor dataset.
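If you're curious what that "generative model plus inverse model" decomposition might look like in practice, here's a minimal sketch in PyTorch.  To be clear, this is my own schematic illustration of the idea, not the authors' code: the names and sizes (GenerativeNet, InverseNet, N_MOTORS, N_LANDMARKS) are all assumptions, and the real EVA system learns from camera images of its own face rather than from a tidy precomputed landmark vector.

```python
# Illustrative sketch only -- all names and dimensions here are assumptions,
# not the architecture from the Chen et al. paper.

import torch
import torch.nn as nn

N_MOTORS = 12          # hypothetical number of facial actuators
N_LANDMARKS = 68 * 2   # hypothetical flattened (x, y) facial landmarks

class GenerativeNet(nn.Module):
    """Predicts the facial landmarks the robot's face will show for a
    given vector of motor commands ("what will I look like if I do this?")."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MOTORS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_LANDMARKS),
        )

    def forward(self, motors):
        return self.net(motors)

class InverseNet(nn.Module):
    """Predicts the motor commands needed to produce a target set of
    landmarks ("what should I do to look like that?") -- the direction
    used at run time to mimic a human expression."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_LANDMARKS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_MOTORS),
        )

    def forward(self, landmarks):
        return self.net(landmarks)

def train(dataset, epochs=10):
    """Self-supervised training from "motor babbling": the robot makes
    random faces in front of a camera, and every (motor command,
    observed landmarks) pair is a free training example -- no human
    labeling required.  `dataset` is assumed to yield such tensor pairs."""
    gen, inv = GenerativeNet(), InverseNet()
    opt = torch.optim.Adam(
        list(gen.parameters()) + list(inv.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for motors, landmarks in dataset:
            pred_landmarks = gen(motors)   # forward (generative) direction
            pred_motors = inv(landmarks)   # inverse direction
            loss = (loss_fn(pred_landmarks, landmarks)
                    + loss_fn(pred_motors, motors))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return gen, inv
```

The clever part, as the abstract says, is that both directions can be learned from the same self-generated dataset: at run time you'd extract landmarks from a video of a human face and run them through the inverse model to get motor commands, without anyone ever having to hand-program "smile" or "frown."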

Now, let me say up front that I'm extremely impressed by the skill of the roboticists who tackled this project, and I can't even begin to understand how they managed it.  But the result falls, in my opinion, into the deepest part of the uncanny valley.  Take a look:


The tiny motors that control the movement of EVA's face are amazingly sophisticated, but the expressions they generate are just... off.  It's not the blue skin, for what it's worth.  It's something about the look in the eyes being mismatched with, or out of sync with, the movements of the rest of the face.  As a result, EVA doesn't appear friendly to me.

To me, EVA looks like she's plotting something, like possibly the subjugation of humanity.

So as amazing as it is that we now have a robot who can mimic human expressions without those expressions being pre-programmed, we have a long way to go before we'll see an authentically human-looking artificial face.  It's a bit of a different angle on the Turing test, isn't it?  But instead of the interactions having to fool a human judge, here the appearance has to fool one.

And I wonder if that, in the long haul, might turn out to be even harder to do.

***********************************

Saber-toothed tigers.  Giant ground sloths.  Mastodons and woolly mammoths.  Enormous birds like the elephant bird and the moa.  North American camels, hippos, and rhinos.  Glyptodons, armadillo relatives as big as a Volkswagen Beetle, with enormous spiked clubs on the ends of their tails.

What do they all have in common?  Besides being huge and cool?

They all went extinct, and all around the same time -- around 14,000 years ago.  Remnant populations persisted a while longer in some cases (there was a small herd of woolly mammoths on Wrangel Island, in the Arctic Ocean off the coast of Siberia, only four thousand years ago, for example), but these animals went from being the major fauna of North America, South America, Eurasia, and Australia to being completely gone in an astonishingly short time.

What caused their demise?

This week's Skeptophilia book of the week is The End of the Megafauna: The Fate of the World's Hugest, Fiercest, and Strangest Animals, by Ross MacPhee, which considers the question and looks at the various scenarios -- human overhunting, introduced disease, climatic shifts, catastrophes like meteor strikes or nearby supernova explosions.  Seeing how fast things can change is sobering, especially given that we are currently in the Sixth Great Extinction -- a recent paper found that current extinction rates are about the same as they were during the height of the Cretaceous-Tertiary Extinction 66 million years ago, the event that wiped out all the non-avian dinosaurs along with a great many other species.

Along the way we get to see beautiful depictions of these bizarre animals by artist Peter Schouten, giving us a glimpse of what this continent's wildlife would have looked like only fifteen thousand years ago.  It's a fascinating look into a lost world, and an object lesson for the people currently creating our global environmental policy -- we're no more immune to the consequences of environmental devastation than the ground sloths and glyptodons were.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!] 

