Tuesday, March 1, 2016

The origins of moral outrage

Here in the United States, we're in the middle of an increasingly nasty presidential race, which means that besides political posturing, we're seeing a lot of another facet of human behavior:

Moral outrage.

We all tend to feel some level of disbelief that there are people who don't believe in the same standards of morality and ethics that we do.  As Kathryn Schulz points out in her wonderful TED talk "On Being Wrong," "We walk around in a little bubble of feeling right about everything...  We all accept that we can be wrong in the abstract.  Of course we could be wrong.  But when we try to think of one single thing we're wrong about, here and now, we can't do it."

So what this does is drive us to some really ugly assumptions about our fellow humans.  If they disagree with us, they must be (check all that apply): deluded, misguided, uninformed, ignorant, immoral, or plain old stupid.

[image courtesy of photographer Joost J. Bakker and the Wikimedia Commons]

But a recent paper in Nature shows that we have another, and darker, driver for moral outrage than our inability to conceive of the existence of people who disagree with us.  Jillian J. Jordan, Moshe Hoffman, Paul Bloom, and David G. Rand, in a collaboration between the Departments of Psychology at Harvard and Yale, released the results of a fairly grim study, "Third-Party Punishment as a Costly Signal of Trustworthiness," in which we find out that those who call out (or otherwise punish) bad behavior do so in part because they are afterwards perceived as more trustworthy themselves.

In the words of the researchers:
Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection.  But why should individuals choose to bear the costs of punishing?  We present a game theoretic model of TPP as a costly signal of trustworthiness.  Our model is based on individual differences in the costs and/or benefits of being trustworthy.  We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP).  We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves... 
We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers.  Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects.  When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness.  Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible.  Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.
Calling out people who transgress not only makes the transgression less likely to happen again; it also strengthens the position of the one who called out the transgressor.  It's unlikely that people do this consciously, but Jordan et al. have shown that punishing selfishness isn't necessarily selfless itself.
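If you want to see the signalling logic in action, here is a minimal toy simulation (in Python), not the authors' actual model: every number in it (the costs, the stake, the observers' beliefs) is invented purely for illustration.  The only assumption that matters is the one the paper makes: punishing selfishness is, on average, cheaper for people for whom being trustworthy already pays.

    import random

    # A toy sketch of the signalling logic described above -- NOT the authors'
    # actual game-theoretic model.  All numbers (costs, stakes, beliefs) are
    # invented purely for illustration.

    random.seed(1)
    N = 100_000
    STAKE = 10.0                       # amount an observer can entrust to an actor
    TRUST_IF_PUNISHED = 0.8            # observer's (assumed) willingness to trust punishers
    TRUST_IF_NOT = 0.3                 # ...and non-punishers
    RETURN_RATE = {"trustworthy": 0.6, "exploitative": 0.1}  # share of the stake each type gives back

    def punish_cost(actor_type):
        # Key assumption: punishing selfishness is, on average, cheaper for
        # people for whom being trustworthy already pays off.
        return random.uniform(0, 8) if actor_type == "trustworthy" else random.uniform(2, 12)

    punishers, non_punishers = [], []
    for _ in range(N):
        actor = random.choice(["trustworthy", "exploitative"])
        # Reputational gain from punishing = extra stake observers will entrust to you.
        gain = STAKE * (TRUST_IF_PUNISHED - TRUST_IF_NOT)
        (punishers if punish_cost(actor) < gain else non_punishers).append(actor)

    def summarize(group, label):
        frac_trustworthy = sum(a == "trustworthy" for a in group) / len(group)
        avg_return = sum(RETURN_RATE[a] for a in group) / len(group)
        print(f"{label}: {frac_trustworthy:.2f} trustworthy, "
              f"average return rate {avg_return:.2f}")

    summarize(punishers, "punishers    ")
    summarize(non_punishers, "non-punishers")
    # Punishers come out both more likely to be trustworthy (so trusting them
    # more is justified) and more generous in the trust game -- the two
    # patterns the paper reports.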

All of which makes the whole group dynamics thing a little scary.  As social primates, we have a strong innate vested interest in remaining part of the in-group, and this sometimes casts a veneer of high morality over actions that are actually far more complex.  As Philip Zimbardo showed in his infamous "Stanford Prison Experiment," we will do a great deal both to conform to the expectations of the group we belong to, and to exclude and vilify those in an opposing group.  And now the study by Jordan et al. has shown that we do this not only to eradicate behaviors we consider immoral, but also to appear more moral to our fellow group members.

Which leaves me wondering how we can tease apart morality from the sketchier side of human behavior.  Probably we can't.  It will, however, make me a great deal more careful to be sure I'm on solid ground before I call someone else out on matters of belief.  I'm nowhere near sure enough of the purity of my own motives most of the time to be at all confident, much less self-righteous, about proclaiming to the world what I think is right and wrong.

2 comments:

  1. Kind of reinforces the bs of morality and shows how grey it really is.

  2. One unfortunate but persistent feature of morality is the tendency of individuals to consider themselves to have finally reached the moral high ground. They are willing to consider changes they have made along the way but unwilling to consider changes they may make in the future.
