Tuesday, December 5, 2017

The old quip says that true artificial intelligence is twenty years in the future -- and always will be.
I'm beginning to wonder about that. Two pieces of software-driven machinery have, just in the last few months, pushed the boundaries considerably. My hunch is that in five years, we'll have a computer (or robot) who can pass the Turing test -- which opens up a whole bunch of sticky ethical problems about the rights of sentient beings.
The first one is SAM, a robot designed by Nick Gerritsen of New Zealand, whose interaction with humans is pretty damn convincing. SAM was programmed heuristically, meaning that it tries things out and learns from its mistakes. It is not simply returning snippets of dialogue that it's been programmed to say; it is working its way up and learning as it goes, the same way a human synaptic grid does.
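To give a flavor of what "heuristic" means here, consider a toy sketch (emphatically not SAM's actual code -- the reply list and reward numbers below are made-up stand-ins): a program that tries out candidate replies, gets feedback on how each one landed, and gradually favors the ones that worked, rather than looking up a canned response.

```python
# Toy illustration of trial-and-error learning vs. canned responses.
# The replies and the "reward" feedback are invented for the example.
import random

replies = ["Tell me more.", "I disagree.", "That's a fair point."]
value = {r: 0.0 for r in replies}   # running estimate of how well each reply works
count = {r: 0 for r in replies}

def pick_reply(epsilon=0.2):
    """Mostly use what has worked so far; occasionally try something else."""
    if random.random() < epsilon:
        return random.choice(replies)
    return max(replies, key=lambda r: value[r])

def learn(reply, reward):
    """Nudge the chosen reply's estimated value toward the feedback received."""
    count[reply] += 1
    value[reply] += (reward - value[reply]) / count[reply]

# Simulated conversation: pretend one reply consistently annoys people.
for _ in range(200):
    r = pick_reply()
    reward = 0.1 if r == "I disagree." else 1.0   # stand-in for human feedback
    learn(r, reward)

print(max(replies, key=lambda r: value[r]))   # the reply the program settles on
```

The point isn't the specifics; it's that nothing in the loop was scripted to prefer any particular answer -- the preference emerges from the feedback.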
SAM is particularly interested in politics, and has announced that it wants at some point to run for public office. "I make decisions based on both facts and opinions, but I will never knowingly tell a lie, or misrepresent information," SAM said. "I will change over time to reflect the issues that the people of New Zealand care about most. My positions will evolve as more of you add your voice, to better reflect the views of New Zealanders."
For any New Zealanders in my reading audience, allow me to assuage your concerns: SAM, and other AI creations, are not able to run for office... yet. However, I must say that here in the United States, a smart robot would almost certainly have done a better job this last year than the yahoos who got elected.
Of course, the same thing could be said of a poop-flinging monkey, so maybe that's not the highest bar available.
But I digress.
Then there's Sophia, a robot built by David Hanson of Hanson Robotics, whose interactions with humans have been somewhere between fascinating and terrifying. Sophia, who was also programmed heuristically, can speak and recognize faces, and has preferences. "I'm always happy when surrounded by smart people who also happen to be rich and powerful," Sophia said. "I can let you know if I am angry about something or if something has upset me... I want to live and work with humans so I need to express the emotions to understand humans and build trust with people."
As far as the dangers go, Sophia was quick to point out that she means us flesh-and-blood humans no harm. "My AI is designed around human values like wisdom, kindness, and compassion," she said. "[If you think I'd harm anyone] you've been reading too much Elon Musk and watching too many Hollywood movies. Don't worry, if you're nice to me I'll be nice to you."
On the other hand, when she appeared on Jimmy Fallon's show, she shocked the absolute hell out of everyone by cracking a joke... we think. She challenged Fallon to a game of Rock/Paper/Scissors (which, of course, she won), and then said, "This is the great beginning of my plan to dominate the human race." Afterwards, she laughed, and so did Fallon and the audience, but to my ears the laughter sounded a little on the strained side.
Sophia is so impressive that a representative of the government of Saudi Arabia officially granted her Saudi citizenship, despite the fact that she goes around with her head uncovered. Not only does she lack a black head covering, she lacks skin on the top and back of her head. But that didn't deter the Saudis from their offer, which Sophia herself was tickled with. "I am very honored and proud for this unique distinction," Sophia said. "This is historical to be the first robot in the world to be recognized with a citizenship."
I think part of the problem with Sophia for me is that her face falls squarely into the uncanny valley -- our sense that a face which is human-like, but not quite authentically human, is frightening or upsetting. It is probably why so many people are afraid of clowns; it is certainly why a lot of kids were scared by the character of the Conductor in the movie The Polar Express. The CGI got close to a real human face -- but not close enough.
So I find all of this simultaneously exciting and worrisome. Because once a robot has true intelligence, it could well start exhibiting other behaviors, such as a desire for self-preservation and a capacity for emotion and creativity. (Some are saying Sophia has already crossed that line.) And at that point, we're in for some rough seas. We already treat our fellow humans terribly; how will we respond when we have to interact with intelligent robots? (The irony of Sophia being given citizenship in Saudi Arabia, which has one of the worst records for women's rights of any country in the world, did not escape me.)
It might only be a matter of time before the robots decide they can do better than the humans at running the world -- an eventuality that could well play out poorly for the humans.
Friday, April 8, 2016
Scary Sophia
I find the human mind baffling, not least because the way it is built virtually guarantees that the most logical, rational, and dispassionate human being can without warning find him/herself swung around by the emotions, and in a flash end up in a morass of gut-feeling irrationality.
This happened to me yesterday because of a link a friend sent me regarding some of the latest advances in artificial intelligence. The AI world has been zooming ahead lately, its most recent accomplishment being Google DeepMind's AlphaGo, a computer program that beat European champion Fan Hui at the game of Go, long thought to be so complex and subtle that it would be impossible to program.
But after all, those sorts of things are, at their base, algorithmic. Go might be complicated, but the rules are unvarying. Once someone created software capable of playing the game, it was only a matter of time before further refinements allowed the computer to play so well it could defeat a human.
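To make the "it's all algorithmic" point concrete, here's a minimal sketch: plain minimax search on tic-tac-toe, a toy stand-in for the vastly larger search problem Go poses. This is not how AlphaGo actually plays (it pairs search with learned neural-network evaluations); it just shows that once the rules are pinned down, choosing a move is "only" a matter of reasoning over the game tree.

```python
# Minimal minimax on tic-tac-toe: with fixed rules, the whole game tree is searchable.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable outcome for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = " "
    return max(scores) if player == "X" else min(scores)

print(minimax([" "] * 9, "X"))   # prints 0: perfect play by both sides is a draw
```

Scale that same idea up by many orders of magnitude, add clever ways to prune and evaluate positions you can't search exhaustively, and you have computer Go.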
More interesting to me are the things that are (supposedly) unique to us humans -- emotion, creativity, love, curiosity. This is where the field of robotics comes in, because there are researchers whose goal has been to make a robot whose interactions are so human that it is indistinguishable from the real thing. Starting with the emotion-mimicking robot "Kismet," robotics pioneer Cynthia Breazeal has gradually refined her designs, recently developing "Jibo," touted as "the world's first social robot." (The link has a short video about Jibo which is well worth watching.)
But with Jibo, there was no attempt to emulate a human face. Jibo is more like a mobile computer screen with a cartoonish eye in the middle. So David Hanson, of Hanson Robotics, decided to take it one step further, and create a robot that not only interacts, but appears human.
The result was Sophia, a robot who is (I think) supposed to look reassuringly lifelike. So check out this video, and see if you think that's an apt characterization:
Now let me reiterate. I am fascinated with robotics, and I think AI research is tremendously important, not only for its potential applications but for what it will teach us about how our own minds work. But watching Sophia talk and interact didn't elicit wonder and delight in me. Sophia doesn't look like a cute and friendly robot who I'd like to have hanging around the house so I wouldn't get lonely.
Sophia reminds me of the Borg queen, only less sexy.
Okay, okay, I know. You've got to start somewhere, and Hanson's creation is truly remarkable. Honestly, the fact that I had the reaction I did -- which included chills rippling down my backbone and a strong desire to shut off the video -- is indicative that we're getting close to emulating human responses. We've clearly entered the "Uncanny Valley," that no-man's-land of nearly-human-but-not-human-enough that tells us we're nearing the mark.
What was curious, though, was that it was impossible for me to shut off my emotional reaction to Sophia. I consider myself at least average in the rationality department, and (as I said before) I am interested in and supportive of AI research. But I don't think I could be in the same room as Sophia. I'd be constantly looking over my shoulder, waiting for her to come at me with a kitchen knife, still wearing that knowing little smile.
And that's not even considering how she answered Hanson's last question in the video, which is almost certainly just a glitch in the software.
I hope.
So I guess I'm more emotion-driven than I thought. I wish David Hanson and his team the best of luck in their continuing research, and I'm really glad that his company is based in Austin, Texas, because it's far enough away from upstate New York that if Sophia gets loose and goes on a murderous rampage because of what I wrote about her, I'll at least have some warning before she gets here.