My friend and fellow author Gil Miller, who has suggested many a topic for me here at Skeptophilia, threw a real doozy my way a couple of days ago. He shares my interest in all things scientific, and is especially curious about where technology is leading. (It must be said that he knows way more about tech than I do; if you look up the word "Luddite" in the dictionary you'll find a little pic of me to illustrate the concept.) The topic he suggested was, on its surface, flat-out hilarious, but beyond the amusement value it raises some deep and fascinating questions about the nature of intelligence.
He sent me a link to an article that appeared at the site PC Gamer, about an artificial intelligence system that was being tested by the military. The idea was to beef up a defensive AI's ability to detect someone approaching -- something that would have obvious military applications, and could also potentially be useful in security systems. So an AI that had been specifically developed to recognize humans and sense their proximity was placed in the center of a traffic circle, and eight Marines were given the task of trying to reach it undetected; whoever got there without being seen won the game.
The completely unexpected outcome was that all eight Marines handily defeated the AI.
A spokesperson for the project described what happened as follows:
Eight marines: not a single one got detected. They defeated the AI system not with traditional camouflage but with clever tricks that were outside of the AI system's testing regime. Two somersaulted for three hundred meters; never got detected. Two hid under a cardboard box. You could hear them giggling the whole time. Like Bugs Bunny in a Looney Tunes cartoon, sneaking up on Elmer Fudd in a cardboard box. One guy, my favorite, he field-stripped a fir tree and walked like a fir tree. You can see his smile, and that's about all you see. The AI system had been trained to detect humans walking, not humans somersaulting, hiding in a cardboard box, or disguised as a tree.
This brings up some really interesting questions about our own intelligence, because I think any reasonably intelligent four-year-old would have caught the Marines at their game -- and thus outperformed the AI. In a lot of ways we're exquisitely sensitive to our surroundings (although I'll qualify that in a moment); as proto-hominids on the African savanna, we had to be really good at detecting anything anomalous in order to survive, because sometimes those anomalies were the swishing tails of hungry lions. For myself, I have an instinctive sense of spaces with which I'm familiar. I recall distinctly walking into my classroom one morning and immediately thinking, Someone's been in here since I locked up last night. There was nothing hugely different -- a couple of things moved a little -- but having taught in the same classroom for twenty years, I knew it so well that I immediately recognized something was amiss. It turned out to be nothing of concern; I asked the principal, and she said the usual room the school board met in was being used, so they'd held their session in my room the previous evening.
But even the small shifts they'd made stood out to me instantly.
It seems as if the only way you could get an AI to key in on what humans do more or less automatically is to program it explicitly to keep track of where everything is -- or to recognize humans somersaulting, hiding under cardboard boxes, and disguised as fir trees. Which kind of runs counter to the bottom-up approach that most AI designers are shooting for.
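Just to make concrete how clunky that top-down "keep track of where everything is" approach would be, here's a minimal sketch. Everything in it is invented for illustration -- the object names, the coordinates, and the movement tolerance -- but it captures the idea: remember a snapshot of the room, then flag anything that moved, appeared, or vanished.

```python
# Illustrative sketch of explicit, top-down anomaly detection: compare a
# remembered snapshot of a room against a new observation. All names,
# positions, and the tolerance value are made up for the example.

def find_anomalies(remembered, observed, tolerance=0.1):
    """Return a list of objects that moved, appeared, or disappeared.

    Both arguments map object names to (x, y) positions in meters.
    """
    anomalies = []
    for name, (x, y) in remembered.items():
        if name not in observed:
            anomalies.append(f"{name} is missing")
        else:
            ox, oy = observed[name]
            # Anything displaced by more than the tolerance counts as moved.
            if abs(ox - x) > tolerance or abs(oy - y) > tolerance:
                anomalies.append(f"{name} moved")
    for name in observed:
        if name not in remembered:
            anomalies.append(f"{name} is new")
    return anomalies

# A classroom as it was locked up, and as it looks the next morning.
last_night = {"desk": (1.0, 2.0), "chair": (1.5, 2.0), "lectern": (4.0, 0.5)}
this_morning = {
    "desk": (1.0, 2.0),
    "chair": (2.3, 2.6),        # moved
    "lectern": (4.0, 0.5),
    "coffee cup": (1.1, 2.1),   # newly appeared
}

print(find_anomalies(last_night, this_morning))
```

Of course, this only works if someone has already told the system exactly which objects exist and where they belong -- which is precisely the kind of hand-coded, rule-by-rule design that modern machine learning is supposed to let us avoid, and it says nothing about how to recognize a Marine disguised as a fir tree in the first place.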
What's most fascinating, though, is that the "exquisite sensitivity" I referred to earlier has some gaping holes. We're programmed (as it were) to pay attention to certain things, and as a result are completely oblivious to others, usually based upon what our brains think is important to pay attention to at the time. Regular readers of Skeptophilia may recall my posting the following mindblowing short video, called "Whodunnit?" If you haven't seen it, take a couple of minutes and watch it before reading further:
Awareness is complex; trying to emulate our sensory processing systems in an AI would mean understanding first how ours actually work, and we're very far from that. Obviously, no one would want to build inattentional blindness into a security system, but I have to wonder how you would program an AI to recognize what was trivial and what was crucial to notice -- like the fact that it was being snuck up on by a Marine underneath a cardboard box. The fact that an AI that was good enough to undergo military testing failed so spectacularly, tricked by a ruse that wouldn't fool any normally-abled human being, indicates we have a very long way to go.