Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label inattentional blindness.

Wednesday, February 8, 2023

The cardboard box ruse

My friend and fellow author Gil Miller, who has suggested many a topic for me here at Skeptophilia, threw a real doozy my way a couple of days ago.  He shares my interest in all things scientific, and is especially curious about where technology is leading.  (It must be said that he knows way more about tech than I do; if you look up the word "Luddite" in the dictionary you'll find a little pic of me to illustrate the concept.)  The topic he suggested was, on its surface, flat-out hilarious, but beyond the amusement value it raises some deep and fascinating questions about the nature of intelligence.

He sent me a link to an article that appeared at the site PC Gamer, about an artificial intelligence system that was being tested by the military.  The idea was to beef up a defensive AI's ability to detect someone approaching -- something that would have obvious military applications, and could also potentially be useful in security systems.  So an AI that had been specifically developed to recognize humans and sense their proximity was placed in the center of a traffic circle, and eight Marines were given the task of trying to reach it undetected; whoever got there without being seen won the game.

The completely unexpected outcome was that all eight Marines handily defeated the AI.

A spokesperson for the project described what happened as follows:

Eight marines: not a single one got detected.  They defeated the AI system not with traditional camouflage but with clever tricks that were outside of the AI system's testing regime.  Two somersaulted for three hundred meters; never got detected.  Two hid under a cardboard box.  You could hear them giggling the whole time.  Like Bugs Bunny in a Looney Tunes cartoon, sneaking up on Elmer Fudd in a cardboard box.  One guy, my favorite, he field-stripped a fir tree and walked like a fir tree.  You can see his smile, and that's about all you see.  The AI system had been trained to detect humans walking, not humans somersaulting, hiding in a cardboard box, or disguised as a tree.
Remember Ralph the Wolf disguising himself as a bush?  Good thing the sheep had Sam the Sheepdog looking after them, and not some stupid AI.

This brings up some really interesting questions about our own intelligence, because I think any reasonably intelligent four-year-old would have caught the Marines at their game -- and thus outperformed the AI.  In a lot of ways we're exquisitely sensitive to our surroundings (although I'll qualify that in a moment); as proto-hominids on the African savanna, we had to be really good at detecting anything anomalous in order to survive, because sometimes those anomalies were the swishing tails of hungry lions.  For myself, I have an instinctive sense of spaces with which I'm familiar.  I recall distinctly walking into my classroom one morning and immediately thinking, Someone's been in here since I locked up last night.  There was nothing hugely different -- a couple of things had been moved a little -- but having taught in the same classroom for twenty years, I knew it so well that I immediately recognized something was amiss.  It turned out to be nothing of concern; when I asked the principal, she said the room the school board usually met in was in use, so they'd held their session in my room the previous evening.

But even the small shifts they'd made stood out to me instantly.

It seems as if the only way you could get an AI to key in on what humans do more or less automatically would be to program it explicitly to keep track of where everything is -- or to recognize humans somersaulting, hiding under cardboard boxes, and disguised as fir trees.  Which kind of runs counter to the bottom-up approach most AI designers are shooting for.
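For the curious, here's a toy sketch of what that top-down, "keep track of where everything is" strategy might look like -- my own illustration, not anything from the article; the function name, block size, and threshold are all invented for the example.  Instead of trying to recognize a human, it just compares the current view of the scene against a remembered baseline and flags any patch that has changed, which is roughly what my brain did in that classroom (and it would catch a cardboard box that wasn't there yesterday):

```python
import numpy as np

def flag_changes(baseline, current, block=16, thresh=0.05):
    """Compare the current frame against a stored baseline, block by block,
    and report the blocks that differ.  There's no concept of 'human' here,
    so a somersaulting Marine or a creeping cardboard box trips it just as
    readily as someone walking upright."""
    h, w = baseline.shape
    changed = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            diff = np.mean(np.abs(current[y:y + block, x:x + block] -
                                  baseline[y:y + block, x:x + block]))
            if diff > thresh:
                changed.append((y, x))  # top-left corner of a changed block
    return changed
```

The obvious catch -- and the reason nobody builds security systems quite this naively -- is that a detector like this goes off for every swaying branch and shifting shadow, which is exactly the trivial-versus-crucial problem I come back to below.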

What's most fascinating, though, is that the "exquisite sensitivity" I referred to earlier has some gaping holes.  We're programmed (as it were) to pay attention to certain things, and as a result are completely oblivious to others, usually based upon what our brains think is important to pay attention to at the time.  Regular readers of Skeptophilia may recall my posting the following mind-blowing short video, called "Whodunnit?"  If you haven't seen it, take a couple of minutes and watch it before reading further:


This phenomenon, called inattentional blindness, results in our focusing so deeply on a few things that we effectively miss everything else.  (And honesty demands I admit that despite my earlier flex about my attention to detail in my classroom, I did terribly watching "Whodunnit?".)

Awareness is complex; trying to emulate our sensory processing systems in an AI would mean understanding first how ours actually work, and we're very far from that.  Obviously, no one would want to build inattentional blindness into a security system, but I have to wonder how you would program an AI to recognize what was trivial and what was crucial to notice -- like the fact that it was being snuck up on by a Marine underneath a cardboard box.  The fact that an AI that was good enough to undergo military testing failed so spectacularly, tricked by a ruse that wouldn't fool any normally-abled human being, indicates we have a very long way to go.

****************************************


Thursday, April 9, 2020

The attentional window

One of the critical functions of our brain, and one we don't often think about, is its ability to determine quickly which stimuli are important to pay attention to and which can safely be ignored.

Which is not to say that it always gets things right.  There have been a number of fascinating experiments run on inattentional blindness -- our complete failure to notice something we were looking right at, presumably because the brain thought other stuff it was witnessing was more important.  You've probably heard about the most famous inattentional blindness experiment -- the video clip with a half-dozen people tossing around balls, where the instructions were (for example) to count the number of times a person in a black shirt caught a red ball -- and test subjects literally did not see the person in the gorilla suit who walked out into the middle of the scene, pounded his chest a few times, then walked off.


Even more curious is a lesser-known experiment in which a table was set up in a hotel lobby, with one of the researchers sitting behind it (and a tablecloth over the table and down the front, obscuring what was happening underneath).  The researcher asked passersby if they'd mind taking a survey, and when he got a "yes" he handed them a clipboard, then "accidentally" dropped the pencil.  He ducked down to pick it up -- then slipped under the table, and a completely different person came back up with the pencil.  No facial similarity at all.

Virtually no one hesitated when the pencil was handed back to them -- no reaction whatsoever -- and when questioned afterward, a number of the test subjects insisted the researchers were lying about the switcheroo, even after seeing for themselves that the two people behind the table looked nothing alike.

By far my favorite, though, is the short video called "Whodunnit?" that was put together to increase public awareness of how inattentive and distractible we are (in the context of driving safely).  I won't clue you in about what's going on, but if you haven't seen it, take a look.  If you're anything like me, you'll spend the second half of the video with your mouth hanging open in astonishment.


So our brains aren't perceiving everything around us.  Far from it.  There's a filter applied to everything we sense, and the brain is the ultimate arbiter of what it deems important enough to notice and/or remember.  This is at least partly responsible for an experience I suspect we've all had: you and a friend describe the same event, and it turns out each of you recalls completely different parts of it.

This all comes up because of some research done at the National Eye Institute, published this week in the Journal of Neuroscience, which shows -- at least if human sensory and perceptual systems work like those of mice -- that there's a tenth-of-a-second window during which your brain has to decide something's important; if that window is missed, the stimulus is simply ignored.

A team made up of Lupeng Wang, Kerry McAlonan, Sheridan Goldstein, Charles R. Gerfen, and Richard J. Krauzlis took mice that had been genetically engineered to have cells that were switchable using a laser, and turned off some neurons in a region of the brain called the superior colliculus that is known to have a role in mammalian visual processing.  The switching mechanism was extremely fast and precise, allowing researchers to time the activity of the cells with astonishing accuracy.  They found that if the cells in the superior colliculus were turned off for a tenth of a second following a visual stimulus, the mouse acted as if it hadn't seen the stimulus at all.

So it looks like (again, if we can generalize a mouse model to a human brain) we may have an explanation for the invisible gorilla and the survey-switcheroo; our brains have a vanishingly short window in which to say "hey, this is important, pay attention!"  If that window passes, we're likely not to notice what's right in front of us.  Obviously, the mechanism works well enough.  It enabled our ancestors to notice their environment sufficiently well to avoid danger and respond quickly to threats.  But what it means is that once again, we're left with the rather unsettling conclusion that what we experience (and remember) is incomplete and inaccurate no matter how much we try to pay attention.  Even if you're concentrating, there are going to be some stimuli about which your superior colliculus says, "Meh, that's not important," and you just have to hope that most of the time, it makes the right call.

Me, I'm still wondering how I missed all that stuff in Lord Smythe's living room.  I guess my superior colliculus was really out to lunch on that one.

********************************

This week's Skeptophilia book recommendation is brand new -- only published three weeks ago.  Neil Shubin, who became famous for his wonderful book on human evolution, Your Inner Fish, has a fantastic new book out -- Some Assembly Required: Decoding Four Billion Years of Life, from Ancient Fossils to DNA.

Shubin's lucid prose makes for fascinating reading as he takes you down the four-billion-year path from the first simple cells to the biodiversity of the modern Earth, weaving in not only what we've discovered from the fossil record but also the most recent innovations in DNA analysis that demonstrate our common ancestry with every other life form on the planet.  It's a wonderful survey of the current state of evolutionary science, and it will engage scientist and layperson alike.  Get Shubin's latest -- and fasten your seatbelts for a wild ride through time.




Friday, October 4, 2019

Ignoring the unimportant

Before I get into the subject of today's post, I want all of you to watch this two-minute video, entitled "Whodunnit?"

*****

How many of you were successful?  I know I wasn't.  I've since watched it about a dozen times, usually in the context of my neuroscience class when we were studying perception, and even knowing what was going on I still didn't see it.  (Yes, I'm being deliberately oblique, because there are probably some of you who haven't watched the video.  *stern glare*)

This comes up because of some recent research that appeared in Nature Communications about why we get tricked so easily -- or (which amounts to the same thing) miss something happening right in front of our eyes.  In "Spatial Suppression Promotes Rapid Figure-Ground Segmentation of Moving Objects," a team made up of Duje Tadin, Woon Ju Park, Kevin C. Dieter, and Michael D. Melnick (of the University of Rochester) and Joseph S. Lappin and Randolph Blake (of Vanderbilt University) describe a fascinating experiment showing that when we look at something, our brains actively suppress the parts of it we've (subconsciously) decided are unimportant.

The authors write:
Segregation of objects from their backgrounds is one of vision’s most important tasks.  This essential step in visual processing, termed figure-ground segmentation, has fascinated neuroscientists and psychologists since the early days of Gestalt psychology.  Visual motion is an especially rich source of information for rapid, effective object segregation.  A stealthy animal cloaked by camouflage immediately loses its invisibility once it begins moving, just as does a friend you’re trying to spot, waving her arms amongst a bustling crowd at the arrival terminal of an airport.  While seemingly effortless, visual segregation of moving objects invokes a challenging problem that is ubiquitous across sensory and cognitive domains: balancing competing demands between processes that discriminate and those that integrate and generalize.  Figure-ground segmentation of moving objects, by definition, requires highlighting of local variations in velocity signals.  This, however, is in conflict with integrative processes necessitated by local motion signals that are often noisy and/or ambiguous.  Achieving an appropriate and adaptive balance between these two competing demands is a key requirement for efficient segregation of moving objects.
The most fascinating part of the research was that they found you can get better at doing this -- but only at the expense of getting worse at perceiving other things.  They tested people's ability to detect a small moving object against a moving background, and found most people were lousy at it.  After five weeks of training, though, they got better...

... but not because they'd gotten better at seeing the small moving object.  Tested by itself, that didn't change.  What changed was they got worse at seeing when the background was moving.  Their brains had decided the background's movement was unimportant, so they simply ignored it.

"In some sense, their brain discarded information it was able to process only five weeks ago," lead author Duje Tadin said in an interview in Quanta.  "Before attention gets to do its job, there’s already a lot of pruning of information.  For motion perception, that pruning has to happen automatically because it needs to be done very quickly."

The last thing a wildebeest ever ignores.  [Image licensed under the Creative Commons Nevit Dilmen, Lion Panthera leo in Tanzania 0670 Nevit, CC BY-SA 3.0]
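To make the authors' "competing demands" a little more concrete, here's a minimal numerical sketch -- my own illustration, not the study's actual method; the function names and the threshold are invented.  The integrative step estimates the motion the whole background shares; the discriminative step suppresses it and keeps only whatever still moves differently.  Crank up the suppression and the figure pops out faster, but you also stop registering the background's own motion -- essentially what happened to the trained subjects.

```python
import numpy as np

def estimate_global_shift(prev, curr, max_shift=5):
    """Integrative step: brute-force the single (dy, dx) translation that
    best aligns two frames -- i.e., what the background as a whole is doing."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def segment_moving_figure(prev, curr, thresh=0.2):
    """Discriminative step: suppress the globally consistent background
    motion, then keep only the pixels whose residual motion stands out."""
    dy, dx = estimate_global_shift(prev, curr)
    residual = np.abs(curr - np.roll(prev, (dy, dx), axis=(0, 1)))
    return residual > thresh  # boolean mask marking the locally moving 'figure'
```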

All of this reinforces once again how generally inaccurate our sensory-integrative systems are.  Oh, they work well enough; they had to, in order to be selected for evolutionarily.  But a gain in efficiency, and the consequent gain in selective fitness, means ignoring as much as (or more than) you're actually observing.  Which is why we so often find ourselves in situations where we and our friends relate completely different versions of events we both participated in -- and why, in fact, there are probably times we're both right, at least partly.  We're just remembering different pieces of what we saw and heard -- and misremembering other pieces in different ways.

So "I know it happened that way, I saw it" is a big overstatement.  Think about that next time you hear about a court case where a defendant's fate depends on eyewitness testimony.  It may be the highest standard in a court of law -- but from a biological perspective, it's on pretty thin ice.

********************************

This week's Skeptophilia book recommendation is by the team of Mark Carwardine and the brilliant author of The Hitchhiker's Guide to the Galaxy, the late Douglas Adams.  Called Last Chance to See, it's about a round-the-world trip the two took to see the last populations of some of the world's most severely endangered animals, including the Rodrigues Fruit Bat, the Mountain Gorilla, the Aye-Aye, and the Komodo Dragon.  It's fascinating, entertaining, and sad, as Adams and Carwardine take an unflinching look at the devastation being wrought on the world's ecosystems by humans.

But it should be required reading for anyone interested in ecology, the environment, and the animal kingdom. Lucid, often funny, always eye-opening, Last Chance to See will give you a lens into the plight of some of the world's rarest species -- before they're gone forever.






Monday, February 11, 2013

Invisible lung gorillas

In recent posts, I've made the point more than once that eyewitness testimony is inherently flawed because of built-in inaccuracies in our perceptual apparatus.  Put simply, we are poor observers.  Not only do our brains sometimes make stuff up, we also remember events inaccurately and, given appropriate priming, interpret things based on what we thought was happening.

None of this is meant to malign our brains, honestly.  They are extraordinarily good at a great many things, and evolution has crafted them into data-processing devices orders of magnitude more complex than the best computers in the world.  The fact that they fail sometimes is only to be expected.

You can't be good at everything, after all.

However, a recent experiment, done by Trafton Drew, Melissa Vo, and Jeremy Wolfe of Brigham and Women's Hospital of Boston, has delivered yet another blow to our opinion of the brain's accuracy.  And this one is not just humbling, it's downright scary -- especially to anyone who has had to rely on the skills of medical professionals.  [Source]

The trio recruited a group of 24 trained radiologists as volunteers, and an equal number of average, non-medical types.  The volunteers were given a set of lung CT scans from five different patients to look at on a computer, and were instructed to click on any anomalous nodules they saw.  (The untrained group were given a brief description of what they were looking for.)  The nodules were small, and there were only ten of them in the hundreds of scans analyzed.

What they didn't tell any of the volunteers, however, was that hidden in the slides of the final patient was an image of a gorilla.  (The gorilla was chosen because of the seminal study of inattention, by Simons and Chabris -- see their famous video here.)  The gorilla image was huge by comparison with the nodules -- an estimated 48 times larger than the typical nodule size.

Twenty of the 24 radiologists, and all of the untrained volunteers, didn't see the gorilla.

And it wasn't hard to see.  Every single one of the people who missed the gorilla was shown the slide in question afterwards and asked, "What is that?" -- and every one of them answered, "That's a gorilla."  Nevertheless, the vast majority of people who had analyzed the image closely didn't see what was right in front of their faces.  (The phenomenon has been named "inattentional blindness.")

Now, to their credit, the radiologists, who presumably would know that a gorilla in your lungs is abnormal, were better at spotting the anomaly than the average guy.  They were also (reassuringly) way better at finding the nodules.  But this once again punches a hole in our certainty that what we notice (and remember) is what is actually there.

I'm often asked -- usually apropos of UFO sightings, and less commonly about phenomena such as hauntings -- why I am so skeptical, when eyewitnesses report thousands of encounters every year.  It's not, honestly, that I think it's impossible that there is something weird out there; especially in the case of UFOs, I think that the likelihood of life elsewhere in the universe is near 100%, and I'd be mighty surprised if some of it didn't turn out to be intelligent.  (Why they'd want to come here, though, is a bit of a mystery.)  So, my beef isn't that I think the claim is impossible.  My problem is that eyewitness testimony is so inherently flawed that I need more than just your claim of having seen a UFO in order to believe it myself.  (In fact, I need more than just "I saw it," as well.  I don't trust my own brain any more than I trust yours.)  Our perceptual systems are simply too easy to fool, and too poor at remembering details, to be reliable recorders of data.

So, anyway, that's the latest from neuroscience.  More evidence of the inaccuracy of the human brain.  Makes me wonder what I'm missing, as I wander through my day -- all the stuff I'm not noticing.  Probably most of it is trivial, and it's just as well that my brain dismisses it -- but you have to wonder how many times something truly marvelous crosses your path -- the equivalent of an invisible lung gorilla -- and you don't see it.