Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, June 16, 2018

Illuminating a prison

In the unit on ethics in my Critical Thinking class, we always discuss a variety of experiments that have been done to elucidate the origins, characteristics, and extent of human morality.  Among the ones we look at are:
  • Philippa Foot's famous "Trolley Problem" thought experiment (1967), where a person is presented with two scenarios, both of which result in one death to save five people -- but in one, the death is caused by an action with a mechanical intermediary (flipping a switch), while in the second, the death is caused by the person shoving someone off a bridge with their own hands.  The interesting result is that humans don't view these as equivalent -- having a mechanical intermediary greatly reduces the emotional charge of the situation, and makes people far more likely to choose that option, even though the outcomes are identical.
  • The "Milgram experiment," conducted in 1963 by Stanley Milgram, which looked at the likelihood of someone hurting another person if commanded to do so by an authority figure.  Turns out, most of us will...
  • The Zurich tribalism experiment, done in Switzerland in 2015, which found that test subjects were willing to inflict painful shocks on others without activating their own empathy centers -- as long as the person being shocked was wearing the soccer jersey of a team the test subject disliked.
  • Karen Wynn's "baby lab" experiment (2014), which found that even very young babies have an innate perception of fairness and morality, and want helpful individuals rewarded and unhelpful individuals punished.
The last time I taught the class, I included a fifth experiment -- the notorious "Stanford prison experiment," done by Philip Zimbardo in 1971.  You've probably heard about this one; it involved 24 Stanford students who had all undergone personality screening to weed out anyone with a tendency toward sociopathy.  The 24 were split into two groups -- the "prisoners" and the "guards."  As Zimbardo recounted the outcome, the guards very quickly banded together and acted with cruelty and disdain toward the prisoners, and the prisoners responded by sabotaging whatever they could.  Several of the prisoners broke down completely, and the experiment had to be called off because some of the prisoners were obviously in such mental distress that it would have been inhumane to continue.

Sing Sing Prison, 1915 [Image is in the Public Domain]

Zimbardo became famous instantly, and his results were used to explain everything from collaborators during the Holocaust to William Calley and his men perpetrating the My Lai Massacre.  When banding together against a perceived common enemy, Zimbardo said, we'll be much more likely to behave immorally -- especially when (as the Milgram experiment suggests) we're being ordered to behave that way by an authority.

There are two problems with this.

First, in 2001, psychologists Alex Haslam and Stephen Reicher tried to replicate Zimbardo's results, and couldn't.  What they suggested was that the outcome of the Stanford prison experiment wasn't because the "guards" saw the "prisoners" as enemies, but because the guards were identifying with the experimenters -- in other words, their activities were being directed by an authority figure.  So the experiment boils down to a rehash of what Milgram had done eight years earlier.

But there's a darker side to this, which I just found out about in an article in Medium by Ben Blum called "The Lifespan of a Lie."  In it, Blum makes a disturbing claim: Zimbardo hadn't done what he said he did -- split the students into groups randomly and give them no instructions beyond "guards control prisoners, prisoners obey guards."  Instead, he had actually coached the guards to behave cruelly, and may even have encouraged one of the prisoners to go into hysterics.

The most famous breakdown, that of "prisoner" Doug Korpi, was dramatic -- he was locked in a closet by a guard, and proceeded to have a complete meltdown, screaming and crying and kicking the door.  The problem, Korpi says, is that it was all an act, and both he and Zimbardo knew it.  "Anybody who is a clinician would know that I was faking," Korpi told Blum.  "If you listen to the tape, it's not subtle.  I'm not that good at acting.  I mean, I think I do a fairly good job, but I'm more hysterical than psychotic."

At least some of the guards were acting as well.  Dave Eshelman, one of the guards who (according to Zimbardo) had exhibited true cruelty toward the prisoners, said his whole persona was a put-on.  "I took it as a kind of an improv exercise," Eshelman told Blum.  "I believed that I was doing what the researchers wanted me to do, and I thought I'd do it better than anybody else by creating this despicable guard persona.  I'd never been to the South, but I used a southern accent, which I got from Cool Hand Luke."

Zimbardo, of course, denies all of this, and spoke to Blum briefly -- mostly to say that the experiment was fine, and the claims of fraud all nonsense.  He went further, calling Haslam and Reicher's failed attempt at replication "fraudulent," and the experiment itself valid.  "It's the most famous study in the history of psychology at this point," Zimbardo told Blum.  "There's no study that people talk about fifty years later.  Ordinary people know about it.  They say, 'What do you do?'  'I'm a psychologist.'  It could be a cab driver in Budapest.  It could be a restaurant owner in Poland.  I mention I'm a psychologist, and they say, 'Did you hear about the study?'  It's got a life of its own now.  If he wants to say it was all a hoax, that's up to him.  I'm not going to defend it anymore.  The defense is its longevity."

Which, of course, is not much of a defense.  Some really stupid ideas (I'm lookin' at you, homeopathy) have been around for ages.  I do find it rather upsetting, though, and not just because I've been teaching an experiment for years that turns out not to have gone down the way the researchers claimed.  It's a stain on science as a whole -- that we accepted the results of an experiment that failed replication, mostly because its outcome seemed so comforting.  People aren't inherently immoral, the experiment implied; they act immorally when they're placed in situations where such behavior is expected.  Alter the situation, and people will rise to higher motives.

Well, maybe.  There are still a lot of questions about morality, and the other four experiments I teach have held up to scrutiny.  We do harm more easily when we're one step removed from the person being harmed, when an authority figure tells us to, when the harmed person doesn't belong to our "tribe," and when the recipient of punishment is perceived to have deserved it.  But simply banding together, Lord of the Flies-style, to visit harm upon the helpless -- the evidence for that is far slimmer.

And I suppose the Zimbardo experiment will have to be transferred to a different lecture next year -- the one I do on examples of scientific fraud and researcher malfeasance.

******************************

This week's Skeptophilia book recommendation is a classic: the late Oliver Sacks's The Man Who Mistook His Wife for a Hat.  It's required reading for anyone who is interested in the inner workings of the human mind, and highlights how fragile our perceptual apparatus is -- and how even minor changes in our nervous systems can result in our interacting with the world in what appear from the outside to be completely bizarre ways.  Broken up into short vignettes about actual patients Sacks worked with, it's a quick and completely fascinating read.




