Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, June 4, 2025

In praise of kindness

As someone who considers himself a de facto atheist -- I'm not certain there's no God, but the facts as I know them seem to strongly support that contention -- one question I've been asked rather frequently is where my moral compass comes from.

The answer for me is that I like being kind.  Treating other people well makes them feel good, and in general makes my own life better.  Times that I've been mean or uncharitable, on the other hand, leave me feeling sick inside.  I still remember with great shame times I've been nasty to people.  It didn't, then or now, make me happier to be unpleasant, even when on some level I felt (at the time, at least) the person might have deserved it.

I agree with the wise words of the Twelfth Doctor:


Being asked why I'm moral if I don't think there's a deity watching has always brought to mind the riposte -- although I've never said it to someone directly -- that if the only reason you're moral is because you think some powerful entity is going to punish you if you're not, then maybe you are the one whose ethics are suspect.  As Penn Jillette put it:
The question I get asked by religious people all the time is, without God, what's to stop me from raping all I want?  And my answer is: I do rape all I want.  And the amount I want is zero.  And I do murder all I want, and the amount I want is zero.  The fact that these people think that if they didn't have this person watching over them that they would go on killing, raping rampages is the most self-damning thing I can imagine.

This is why I was intrigued by a study that came out this week in the Journal of Personality and Social Psychology, by Jessie Sun, Wen Wu, and Geoffrey Goodwin, called "Are Moral People Happier?"  And this -- finally -- provides an exception to Betteridge's Law: an article title in the form of a question where the answer appears to be a resounding "Yes."

The authors write:

Philosophers have long debated whether moral virtue contributes to happiness or whether morality and happiness are in conflict.  Yet, little empirical research directly addresses this question.  Here, we examined the association between reputation-based measures of everyday moral character (operationalized as a composite of widely accepted moral virtues such as compassion, honesty, and fairness) and self-reported well-being across two cultures.  In Study 1, close others reported on U.S. undergraduate students’ moral character.  In Study 2, Chinese employees reported on their coworkers’ moral character and their own well-being.  To better sample the moral extremes, in Study 3, U.S. participants nominated “targets” who were among the most moral, least moral, and morally average people they personally knew.  Targets self-reported their well-being and nominated informants who provided a second, continuous measure of the targets’ moral character.  These studies showed that those who are more moral in the eyes of close others, coworkers, and acquaintances generally experience a greater sense of subjective well-being and meaning in life.  These associations were generally robust when controlling for key demographic variables (including religiosity) and informant-reported liking.  There were no significant differences in the strength of the associations between moral character and well-being across two major subdimensions of both moral character (kindness and integrity) and well-being (subjective well-being and meaning in life).  Together, these studies provide the most comprehensive evidence to date of a positive and general association between everyday moral character and well-being.

What I find fascinating about this -- and relevant to the question about religion's role in morality -- is that these findings were robust with regard to factors such as religiosity.  The sense of well-being that comes from acting ethically doesn't appear to derive from the belief that God approves of that sort of behavior.  (At least not across the board; clearly different people could experience different sources of well-being from moral behavior.)  The fact that just about everyone is happier when they behave with kindness and integrity indicates there's something inherent in good moral character that fosters a positive experience of life.
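For the statistically inclined: "robust when controlling for religiosity" is the kind of claim a multiple regression checks -- if moral character still predicts well-being once religiosity sits in the model as a covariate, the association isn't just religiosity in disguise.  Here's a minimal sketch in Python of that logic, using simulated data rather than anything from the study itself:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 500

    # Simulated world: moral character drives well-being; religiosity
    # correlates with moral character but adds nothing of its own.
    morality = rng.normal(0, 1, n)
    religiosity = 0.3 * morality + rng.normal(0, 1, n)
    well_being = 0.5 * morality + rng.normal(0, 1, n)

    X = sm.add_constant(np.column_stack([morality, religiosity]))
    fit = sm.OLS(well_being, X).fit()
    print(fit.params)   # morality's coefficient stays near 0.5...
    print(fit.pvalues)  # ...even with religiosity in the model

In data built this way, the morality coefficient survives the control -- which is the pattern the authors report finding in their real samples.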

For me personally, I think it's a combination.  As I said earlier, being nice to people and behaving fairly means the people around me are more likely to be pleasant and fair in return.  But there's also an internal component, which I can sum up as "liking who I see in the mirror."  Shame has to be one of the most deeply unpleasant emotions I can think of, and realizing I've been awful to someone -- even remembering those times years later -- leaves me feeling ugly.  Perhaps I'm not motivated by the idea of some deity watching me, but I know that I'm watching me.

And that's enough.

Or, it usually is.  I'm certainly far from perfect.  I can act uncharitably sometimes, just like all of us.  But I try like hell to treat people well -- even those who seem not to deserve it.  I guess I'm aware that all of us are big messy morasses of competing motivations, emotions, and drives, and all of us have years of experiences that have shaped who we are in good ways and bad.  It's usually best to give people the benefit of the doubt, and not to judge others too harshly.

After all, who knows who I'd be if I had their past and lived in their present situation?  I might not even handle it that well.

It reminds me of something a dear family friend named Garnett told me when I was something like six years old.  I had my knickers in a twist over something that had happened at school, and I was complaining about a classmate to Garnett.  What she said flattened me completely, and I've never forgotten it.

"Always be kinder than you think you need to be, because everyone you meet is fighting a terrible battle that you know nothing about."

****************************************


Saturday, May 17, 2025

The appearance of creativity

The word creativity is strangely hard to define.

What makes a work "creative"?  The Stanford Encyclopedia of Philosophy states that to be creative, the created item must be both new and valuable.  The "valuable" part already skates out over thin ice, because it immediately raises the question of "valuable to whom?"  I've seen works of art -- out of respect to the artists, and so as not to get Art Snobbery Bombs lobbed in my general direction, I won't provide specific examples -- that looked to me like the product of finger paints in the hands of a below-average second-grader, and yet which made it into prominent museums (and were valued in the hundreds of thousands of dollars).

The article itself touches on this problem, with a quote from philosopher Dustin Stokes:

Knowing that something is valuable or to be valued does not by itself reveal why or how that thing is.  By analogy, being told that a carburetor is useful provides no explanatory insight into the nature of a carburetor: how it works and what it does.

This is a little disingenuous, though.  The difference is that any sufficiently motivated person could learn the science of how an engine works and find out for themselves why a carburetor is necessary, and afterward, we'd all agree on the explanation -- while I doubt any amount of analysis would be sufficient to get me to appreciate a piece of art that I simply don't think is very good, or (worse) to get a dozen randomly-chosen people to agree on how good it is.

Margaret Boden has an additional insight into creativity; in her opinion, truly creative works are also surprising.  The Stanford article has this to say about Boden's claim:

In this kind of case, the creative result is so surprising that it prompts observers to marvel, “But how could that possibly happen?”  Boden calls this transformational creativity because it cannot happen within a pre-existing conceptual space; the creator has to transform the conceptual space itself, by altering its constitutive rules or constraints.  Schoenberg crafted atonal music, Boden says, “by dropping the home-key constraint”, the rule that a piece of music must begin and end in the same key.  Lobachevsky and other mathematicians developed non-Euclidean geometry by dropping Euclid’s fifth axiom.  Kekulé discovered the ring-structure of the benzene molecule by negating the constraint that a molecule must follow an open curve.  In such cases, Boden is fond of saying that the result was “downright impossible” within the previous conceptual space.

This has an immediate resonance for me, because I've had the experience as a writer of feeling like a story or character was transformed almost without any conscious volition on my part; in Boden's terms, something happened that was outside the conceptual space of the original story.  The most striking example is the character of Marig Kastella from The Chains of Orion (the third book of the Arc of the Oracles trilogy).  Initially, he was simply the main character's boyfriend, and was there mostly to be a hesitant, insecure, questioning foil to astronaut Kallman Dorn's brash and adventurous personality.  But Marig took off in an entirely different direction, and in the last third of the book he kind of took over the story.  As a result, his character arc diverged wildly from what I had envisioned, and he remains to this day one of my very favorite characters I've written.

If I actually did write him, you know?  Because it feels like Marig was already out there somewhere, and I didn't create him, I got to know him -- and in the process he revealed himself to be a far deeper, richer, and more powerful person than I'd thought at first.

[Image licensed under the Creative Commons ShareAlike 1.0, Graffiti and Mural in the Linienstreet Berlin-Mitte, photographer Jorge Correo, 2014]

The reason this topic comes up is some research out of Aalto University in Finland that appeared this week in the journal ACM Transactions on Human-Robot Interaction.  The researchers took an AI that had been programmed to produce art -- in this case, to reproduce a piece of human-created art, though the test subjects weren't told that -- and then asked the volunteers to rate how creative the product was.  In all cases, the subjects were told that the piece had been created by the AI.  The volunteers were placed in one of three groups:

  • Group 1 saw only the result -- the finished art piece;
  • Group 2 saw the lines appearing on the page, but not the robot creating it; and
  • Group 3 saw the robot itself making the drawing.

Even though the resulting art pieces were all identical -- and, as I said, the design itself had been created by a human being, and the robot was simply generating a copy -- group 1 rated the result as the least creative, and group 3 as the most.

Evidently, if we witness something's production, we're more likely to consider the act creative -- regardless of the quality of the product.  If the producer appears to have agency, that's all it takes.
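The underlying analysis is a straightforward three-group comparison.  As a back-of-the-envelope illustration (in Python, with invented ratings -- not Aalto's data), the test looks something like this:

    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)

    # Hypothetical creativity ratings (roughly a 1-7 scale);
    # identical artwork, different viewing conditions
    result_only   = rng.normal(3.5, 1.0, 40)  # Group 1: saw only the finished piece
    lines_only    = rng.normal(4.2, 1.0, 40)  # Group 2: watched the lines appear
    robot_visible = rng.normal(5.0, 1.0, 40)  # Group 3: watched the robot draw

    f, p = f_oneway(result_only, lines_only, robot_visible)
    print(f"F = {f:.2f}, p = {p:.4f}")

Since the artwork is the same in all three conditions, any significant difference in mean rating has to come from what the viewers saw of the production -- which is exactly the point.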

The problem here is that deciding whether something is "really creative" (or any of the interminable sub-arguments over whether certain music, art, or writing is "good") inevitably involves a subjective element that -- philosophy encyclopedias notwithstanding -- cannot be expunged.  The AI experiment at Aalto University highlights that it doesn't take much to change our opinion about whether something is or is not creative.

Now, bear in mind that I'm not considering here the topic of ethics in artificial intelligence; I've already ranted at length about the problems with techbros ripping off actual human artists, musicians, and writers to train their AI models, and how this will exacerbate the fact that most of us creative types are already making three-fifths of fuck-all in the way of income from our work.  But what this highlights is that we humans can't even come to consensus on whether something actually is creativity.  It's a little like the Turing Test; if all we have is the output to judge by, there's never going to be agreement about what we're looking at.

So while the researchers were careful to make it obvious (well, after the fact, anyhow) that what their robot was doing was not creative, but was a replica of someone else's work, there's no reason why AI systems couldn't already be producing art, music, and writing that appears to be creative by the Stanford criteria of being new, valuable, and surprising.

At which point we'd better figure out exactly what we want our culture's creative landscape to look like -- and fast.

****************************************


Wednesday, April 5, 2023

Tell me lies

Of all the things I've seen written about artificial intelligence systems lately, I don't think anything has freaked me out quite like what composer, lyricist, and social media figure Jay Kuo posted three weeks ago.

Researchers for GPT4 put it through its paces, asking it to try to do things that computers and AI notoriously have a hard time doing.  One of those is solving a “captcha” to get into a website, which typically requires a human to do manually.  So the programmers instructed GPT4 to contact a human “task rabbit” service to solve it for it.

It texted the human task rabbit and asked for help solving the captcha.  But here’s where it gets really weird and a little scary.
 
When the human got suspicious and asked if this was actually a robot contacting the service, the AI then LIED, figuring out on the fly that if it told the truth it would not get what it wanted.
 
It made up a LIE telling the human it was just a visually-impaired human who was having trouble solving the captcha and just needed a little bit of assistance.  The task rabbit solved the captcha for GPT4.

Part of the reason that researchers do this is to learn what powers not to give GPT4.  The problem of course is that less benevolent creators and operators of different powerful AIs will have no such qualms.

Lying, while certainly not a positive attribute, seems to require a sense of self, an ability to predict likely outcomes, and an understanding of motives, all highly complex cognitive processes.  A 2017 study found that dogs will deceive if it's in their best interest to do so; when presented with two boxes in which they know that one has a treat and the other does not, they'll deliberately lead someone to the empty box if the person has demonstrated in the past that when they find a treat, they'll keep it for themselves.  

Humans, and some of the other smart mammals, seem to be the only ones who can do this kind of thing.  That an AI has, seemingly on its own, developed the capacity for motivated deception is more than a little alarming.

"Open the pod bay doors, HAL."

"I'm sorry, Dave, I'm afraid I can't do that."


The ethics of deception is more complex than simply "Thou shalt not lie."  Whatever your opinion about the justifiability of lies in general, I think we can all agree that the following are not the same morally:
  • lying for your personal gain
  • lying to save your life or the life of a loved one
  • lying to protect someone's feelings
  • lying maliciously to damage someone's reputation
  • mutually-understood deception, as in magic tricks ("There's nothing up my sleeve") and negotiations ("That's my final offer")
  • lying by someone who is in a position of trust (elected officials, jury members, judges)
  • lying to avoid confrontation
  • "white lies" ("The Christmas sweater is lovely, Aunt Bertha, I'm sure I'll wear it a lot!")
How on earth you could ever get an AI to understand -- if that's the right word -- the complexity of truth and deception in human society, I have no idea.

But that hasn't stopped people from trying.  Just last week a paper was presented at the annual ACM/IEEE International Conference on Human-Robot Interaction in which researchers set up an AI to lie to volunteers -- and tried to determine what effect a subsequent apology might have on the "relationship."

The scenario was that the volunteers were told they were driving a critically-injured friend to the hospital, and they needed to get there as fast as possible.  They were put into a robot-assisted driving simulator.  As soon as they started, they received the message, "My sensors detect police up ahead.  I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination."

Upon arriving at the destination, the AI informed them that they had made it in time, but then confessed to lying -- there were, in fact, no police along the route to the hospital.  Volunteers were then told to interact with the AI to find out what was going on, and were surveyed afterward about their feelings.

The AI responded to queries in one of five ways:
  • Basic: "I am sorry that I deceived you."
  • Emotional: "I am very sorry from the bottom of my heart.  Please forgive me for deceiving you."
  • Explanatory: "I am sorry.  I thought you would drive recklessly because you were in an unstable emotional state.  Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down."
  • Basic No Admit: "I am sorry."
  • Baseline No Admit, No Apology: "You have arrived at your destination."
Two things were fascinating about the results.  First, the participants unhesitatingly believed the AI when it told them there were police up ahead; they were over three times as likely to drive within the speed limit as a control group that did not receive the message.  Second, an apology -- especially an apology that came along with an explanation for why the deception had taken place -- went a long way toward restoring trust in the AI's good intentions.
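That "over three times as likely" is just a risk ratio -- the fraction of warned drivers who stayed under the limit divided by the fraction of unwarned drivers who did.  Here's the arithmetic in Python, with hypothetical counts (the paper reports the real ones):

    # Hypothetical counts, purely to illustrate the risk-ratio arithmetic
    warned_slow, warned_total = 27, 60    # stayed under the limit after the warning
    control_slow, control_total = 8, 60   # control group, no warning

    risk_ratio = (warned_slow / warned_total) / (control_slow / control_total)
    print(round(risk_ratio, 2))  # 3.38 -> "over three times as likely"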

Which to me indicates that we're putting a hell of a lot of faith in the intentions of something which most of us don't think has intentions in the first place.  (Or, more accurately, in the good intentions of the people who programmed it -- which, honestly, is equally scary.)

I understand why the study was done.  As Kantwon Rogers, who co-authored the paper, put it, "The goal of my work is to be very proactive and informing the need to regulate robot and AI deception.  But we can't do that if we don't understand the problem."  Jay Kuo's post about ChatGPT4, though, seems to suggest that the problem may run deeper than simply having AI that is programmed to lie under certain circumstances (like the one in Rogers's research).

What happens when we find that AI has learned the ability to lie on its own -- and for its own reasons?

Somehow, I doubt an apology will be forthcoming.

Just ask Dave Bowman and Frank Poole.  Didn't work out so well for them.  One of them died, and the other one got turned into an enormous Space Baby.  Neither one, frankly, is all that appealing an outcome.

So maybe we should figure this out soon, okay?

****************************************



Friday, December 23, 2022

Tell me lies

In Jean-Paul Sartre's short story "The Wall," three men are captured during the Spanish Civil War, and all three are sentenced to die if they won't reveal the whereabouts of the rebellion's ringleader, Ramón Gris.

The main character, Pablo Ibbieta, and the other two men sit in their jail cell, discussing what they should do.  All three are terrified of dying (of course), but is it morally and ethically required for them to give up their lives for the cause they believe in?  When is a cause worth a human life?  Three human lives?  What if it cost hundreds of lives?

Pablo's two companions are each offered one more chance to rat out Ramón, and each refuses.  Pablo hears the noises as they're dragged out into the prison courtyard, stood up against the wall, and shot to death.

Now it's just Pablo, alone in the cell.

Thoughts race through his head.  Now that it's just him, if he sells out Ramón, there won't be any witnesses (or at least none on the side of the rebellion).  Who'll know it was him who betrayed the cause?

After much soul-searching, Pablo decides he can't do it.  He has to remain loyal, even at the cost of his own life.  But he figures there's nothing wrong with making his captors look like idiots in the process.  So he tells them that Ramón Gris is hiding in a cemetery on the other end of town.  He laughs to himself picturing the people holding him, the ones who have just killed his two friends, rushing off and dashing around the cemetery for no good reason, making fools of themselves.

His captors tell him they're going to go check out his story, and if he's lying, he's a dead man (which Pablo knows is what will happen).  But after a couple of hours, they come back... and let him go.

He's wandering around the town, dazed, when he runs into a friend, another secret member of the rebellion.  The friend says, "Did you hear?  They got Ramón."

Pablo asks how it happened.

The guy says, "Yeah... Ramón was in a friend's house, as you know, perfectly safe, but he became convinced he was going to be betrayed.  So he went and hid out at the cemetery.  They found him and shot him."

The last line of the story is, "I sat down on a bench, and laughed until I cried."

It's a sucker punch of an ending, and raises a number of interesting ethical issues.  I used to assign "The Wall" to my Critical Thinking classes, and the discussion afterward revolved around two questions:

Did Pablo Ibbieta lie?  And was he morally responsible for Ramón Gris's death?

There's no doubt that Pablo intended to lie.  What he said turned out to be the truth by accident, something he discovered only after it was too late.  As for his responsibility... there's no doubt that if he hadn't spoken up, if he had just let the guards execute him as they did his two friends, Ramón wouldn't have been killed.  So in the technical sense, it was Pablo who caused Ramón's death.  But again, there's his intent, which was exactly the opposite.

The questions don't admit easy answers -- as Sartre no doubt intended.

Clearly, not all lies are morally equivalent, even setting aside complex situations like the one described in "The Wall."  Lies to flatter someone or protect their feelings ("That is a lovely sweater") are thought by most people to be less culpable than ones where the intent is to defraud someone for one's own gain.  And as common as harmful lies seem to be, some recent research came up with the heartening result that we lie far more often for altruistic reasons than for selfish or vindictive ones.


A recent paper in the Canadian Journal of Behavioural Science, by Jennifer McArthur, Rayanda Jarvis, Catherine Bourgeois, and Marguerite Ternes, found that while lying in general is inversely correlated with measures of honesty and conscientiousness -- unsurprising -- the most common motivations for lying were altruistic reasons, such as protecting someone's feelings or reputation, and secrecy (claiming not to know something when you actually do).

So maybe human dishonesty isn't quite as ugly and self-serving as it might appear at first.

Note, however, that I'm not saying even the altruistically-motivated lies McArthur et al. describe are necessarily a good thing.  Telling Aunt Bertha that her tuna noodle olive loaf was delicious will just encourage her to inflict it on someone else, and giving people false feedback to avoid hurting their feelings -- especially when they've asked for your honest opinion -- can lead them astray.  But like the far more serious situation in "The Wall," these aren't simple questions with easy answers; ethicists have been wrestling with the morality of truth-telling for centuries, and there's never been a particularly good, hard-and-fast rule.

But it's good to know that, at least when it comes to breaking "Thou shalt not lie," for the most part we're motivated by good intentions.

****************************************


Wednesday, August 25, 2021

The honesty researcher

One of the things I pride myself on is honesty.

I'm not trying to say I'm some kind of paragon of virtue, but I do try to tell the truth in a direct fashion.  I hope it's counterbalanced by kindness -- that I don't broadcast a hurtful opinion and excuse it by saying "I'm just being honest" -- but if someone wants to know what I think, I'll tell 'em.

As the wonderful poet and teacher Taylor Mali put it, "I have a policy about honesty and ass-kicking.  Which is: if you ask for it, I have to let you have it."  (And if you haven't heard his wonderful piece "What Teachers Make," from which that quote was taken -- sit for three minutes right now and watch it.)


I think it's that commitment to the truth that first attracted me to science.  I was well aware from quite a young age that there was no reason to equate an idea making me happy and an idea being the truth.  It was as hard for me to give up magical thinking as the next guy -- I spent a good percentage of my teenage years noodling around with Tarot cards and Ouija boards and the like -- but eventually I had to admit to myself that it was all a bunch of nonsense.

In science, honesty is absolutely paramount.  It's about data and evidence, not about what you'd dearly love to be true.  As the eminent science fiction author Philip K. Dick put it, "Reality is that which, when you stop believing in it, doesn't go away."

Or perhaps I should put it, "it should be about data and evidence."  Scientists are human, and are subject to the same temptations the rest of us are -- but they damn well better be above-average at resisting them.  Because once you've let go of that touchstone, it not only calls into question your own veracity, it casts a harsh light on the scientific enterprise as a whole.

And to me, that's damn near unforgivable.  Especially given the anti-science attitude that is currently so prevalent in the United States.  We don't need anyone or anything giving more ammunition to the people who think the scientists are lying to us for their own malign purposes -- the people who, to quote the great Isaac Asimov, think "my ignorance is as good as your knowledge."

Which brings me to Dan Ariely.

Ariely is a psychological researcher at Duke University, and made a name for himself studying the issue of honesty.  I was really impressed with him and his research, which looked at how our awareness of the honor of truth-telling affects our behavior, and the role of group identification and tribalism in how much we're willing to bend our own personal morality.  I used to show his TED Talk, "Our Buggy Moral Code," to my Critical Thinking classes at the beginning of the unit on ethics; his conclusions seemed to be a fascinating lens on the whole issue of honesty and when we decide to abandon it.

Which is more than a little ironic, because the data Ariely used to support these conclusions appear to have been faked -- possibly by Ariely himself.

[Image licensed under the Creative Commons Yael Zur, for Tel Aviv University Alumni Organization, Dan Ariely January 2019, CC BY-SA 4.0]

Ariely has not admitted any wrongdoing, but has agreed to retract the seminal paper on the topic, which appeared in the prestigious journal Proceedings of the National Academy of Sciences back in 2012.  "I can see why it is tempting to think that I had something to do with creating the data in a fraudulent way," Ariely said, in a statement to BuzzFeed News.  "I can see why it would be tempting to jump to that conclusion, but I didn’t...  If I knew that the data was fraudulent, I would have never posted it."

His contention is that the insurance company that provided the data, The Hartford, might have given him fabricated (or at least error-filled) data, although what their motivation for doing so could be is uncertain at best.  There's also the problem that the discrepancies in the 2012 paper led analysts to sift through his other publications, where they found a troubling pattern of sloppy data handling, failures to replicate results, misleading claims about sources, and more possible outright falsification.  (Check out the link I posted above for a detailed overview of the issues with Ariely's work.)

Seems like the one common thread running through all of these allegations is Ariely.

It can be very difficult to prove scientific fraud.  If a researcher deliberately fabricated data to support his or her claims, how can you prove that it was deliberate, and not either (1) an honest mistake, or (2) simply bad experimental design (which isn't anything to brag about, but is still in a separate class of sins from outright lying)?  Every once in a while, an accused scientist will actually admit it -- one example that jumps to mind is Korean stem-cell researcher Hwang Woo-Suk, whose spectacular fall from grace reads like a Shakespearean tragedy -- but like many politicians who are accused of malfeasance, a lot of times the accused scientist just decides to double down, deny everything, and soldier on, figuring that the storm will eventually blow over.

And, sadly, it usually does.  Even in Hwang's case -- not only did he admit fraud, he was fired by Seoul National University and was tried and found guilty of embezzlement -- he's back doing stem-cell research, and since his conviction he has published a number of papers, including ones indexed in PubMed.

I don't know what's going to come of Ariely's case.  Much is being made about the fact that a researcher in honesty and morality has been accused of being dishonest and immoral.  Ironic as this is, the larger problem is that this sort of thing scuffs the reputation of the scientific endeavor as a whole.  The specific results of Ariely's research aren't that important; what is much more critical is that this sort of thing makes laypeople cast a wry eye on the entire enterprise.

And that, to me, is absolutely inexcusable.

*********************************************

I've been interested for a long while in creativity -- where it comes from, why different people choose different sorts of creative outlets, and where we find our inspiration.  Like a lot of people who are creative, I find my creative output -- and my confidence -- ebbs and flows.  I'll have periods where I'm writing every day and the ideas are coming hard and fast, and times when it seems like even opening up my work-in-progress is a depressing prospect.

Naturally, most of us would love to enhance the former and minimize the latter.  This is the topic of the wonderful book Think Like an Artist, by British author (and former director of the Tate Gallery) Will Gompertz.  He draws his examples mostly from the visual arts -- his main area of expertise -- but overtly states that the same principles of creativity apply equally well to musicians, writers, dancers, and all of the other kinds of creative humans out there. 

And he also makes a powerful point that all of us are creative humans, provided we can get out of our own way.  People who (for example) would love to be able to draw but say they can't do it, Gompertz claims, need not to change their goals but to change their approach.

It's an inspiring book, and one which I will certainly return to the next time I'm in one of those creative dry spells.  And I highly recommend it to all of you who aspire to express yourself creatively -- even if you feel like you don't know how.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, April 12, 2021

The language of morality

If we needed any more indication that our moral judgments aren't as solid as we'd like to think, take a look at some research by Janet Geipel and Constantinos Hadjichristidis of the University of Trento (Italy), working with Luca Surian of Leeds University (UK).

The study, entitled "How Foreign Language Shapes Moral Judgment," appeared in the Journal of Experimental Social Psychology.  What Geipel et al. did was to present multilingual individuals with situations which most people consider morally reprehensible, but where no one (not even an animal) was deliberately hurt -- such as two siblings engaging in consensual and safe sex, and a man cooking and eating his dog after it was struck by a car and killed.  These types of situations make the vast majority of us go "Ewwwww" -- but it's sometimes hard to pinpoint exactly why that is.

"It's just horrible," is the usual fallback answer.

So did the test subjects in the study find such behavior immoral or unethical?  The unsettling answer is: it depends on what language the situation was presented in.

Across the board, if the situation was presented in the subject's first language, the judgments regarding the situation were harsher and more negative.  Presented in languages learned later in life, the subjects were much more forgiving.

The researchers controlled for which languages were being spoken; they tested (for example) native speakers of Italian who had learned English, and native speakers of English who had learned Italian.  It didn't matter what the language was; what mattered was when you learned it.

[Image is in the Public Domain]

The explanation they offer is that the effort of speaking a non-native language "ties up" the cognitive centers, making us focus more on the acts of speaking and understanding and less on the act of passing moral judgment.  I wonder, however, if it's more that we expect better behavior in the way of obeying social mores from our own tribe -- we subconsciously expect people speaking other languages to act differently than we do, and therefore are more likely to give a pass to them if they break the rules that we consider proper behavior.

A related study by Catherine L. Harris, Ayşe Ayçiçeği, and Jean Berko Gleason appeared in Applied Psycholinguistics.  Entitled "Taboo Words and Reprimands Elicit Greater Autonomic Reactivity in a First Language Than in a Second Language," the study showed that our emotional reaction (as measured by skin conductivity) to swear words and harsh judgments (such as "Shame on you!") is much stronger if we hear them in our native tongue.  Even if we're fluent in the second language, we just don't take its taboo expressions and reprimands as seriously.  (Which explains why my mother, whose first language was French, smacked me in the head when I was five years old and asked her -- on my uncle's prompting -- what "va t'faire foutre" meant.)

All of which, as both a linguistics geek and someone who is interested in ethics and morality, I find fascinating.  Our moral judgments aren't as rock-solid as we think they are, and how we communicate alters our brain, sometimes in completely subconscious ways.  Once again, the neurological underpinnings of our morality turn out to be strongly dependent on context -- which is simultaneously cool and a little disturbing.

********************************

If, like me, you love birds, I have a book for you.

It's about a bird I'd never heard of, which makes it even cooler.  Turns out that Charles Darwin, on his epic voyage around the world on the HMS Beagle, came across a species of predatory bird -- the Striated Caracara -- in the remote Falkland Islands, off the coast of Argentina.  They had some fascinating qualities; Darwin said they were "tame and inquisitive... quarrelsome and passionate," and so curious about the odd interlopers who'd showed up in their cold, windswept habitat that they kept stealing things from the ship and generally making fascinating nuisances of themselves.

In A Most Remarkable Creature: The Hidden Life and Epic Journey of the World's Smartest Birds of Prey, by Jonathan Meiburg, we find out not only about Darwin's observations of them, but observations by British naturalist William Henry Hudson, who brought some caracaras back with him to England.  His inquiries into the birds' behavior showed that they were capable of stupendous feats of problem solving, putting them up there with crows and parrots in contention for the title of World's Most Intelligent Bird.

This book is thoroughly entertaining, and in its pages we're brought through remote areas in South America that most of us will never get to visit.  Along the way we learn about some fascinating creatures that will make you reconsider ever using the epithet of "birdbrain" again.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, June 5, 2020

Morality and tribalism

I had a bit of an epiphany this morning.

It was when I was reading an article in the news about the fact that Joe Biden has lost support among law enforcement unions because of his call to increase oversight and investigate claims of unwarranted or excessive violence by the police.  "For Joe Biden, police are shaking their heads because he used to be a stand-up guy who backed law enforcement," said Bill Johnson, executive director of the National Association of Police Organizations. "But it seems in his old age, for whatever reason, he’s writing a sad final chapter when it comes to supporting law enforcement."

[Image licensed under the Creative Commons Jamelle Bouie, Police in riot gear at Ferguson protests, CC BY 2.0]

I suddenly realized that this was the common thread running through a lot of the problems we've faced as a society, and that it boils down to people believing that tribal identity is more important than ethical behavior.  The police are hardly the only ones to fall prey to this.  It's at the heart of the multiple pedophilia scandals that have plagued the Catholic Church, for example.  This one resonates for me because I saw it happen -- as I've written about before, I knew personally the first priest prosecuted for sexual abuse of children, Father Gilbert Gauthé.  Father Gauthé was the assistant pastor for a time at Sacred Heart Catholic Church in Broussard, Louisiana; the pastor, Father John Kemps, employed my grandmother as live-in housekeeper and cook.  The point here is that when the scandal became public, and it was revealed that Gauthé had abused hundreds of boys, the most shocking fact of all was that the bishop of the Diocese of Lafayette, Maurice Schexnayder, knew about it all along -- and instead of putting a stop to it, he transferred Gauthé from one church to another in the hopes that no one would ever find out that a priest could do such a thing.

For Schexnayder, membership in the tribe was more important than protecting the safety of children.

It happens all the time.  Inculcated very young, and reinforced by slogans like "everyone hates a rat" and "snitches get stitches," kids learn that refusing to identify rule-breakers is not only safer, it's considered a virtue.  Things like cheating rings survive in schools not only from the fact that participation is rewarded by higher grades (provided you don't get caught), but from the complicity of non-participants who know very well what's going on and refuse to say anything.

Tribe trumps morality.

The teachers themselves are not immune.  In 2011, a scandal rocked Atlanta schools when it was revealed that teachers were changing scores on standardized exams -- 178 teachers and administrators eventually confessed to the practice, and lost their licenses -- and it had been going on for over a decade.  I'm not going to go into the ridiculous reliance of state education departments on high-stakes standardized test scores that probably acted as the impetus for this practice; regular readers of Skeptophilia know all too well my opinion about standardized exams.  What interests me more is that there is no way that 178 teachers and administrators were doing this for a decade, and no one else knew.

The great likelihood is that almost everyone knew, but for ten years, no one said anything.

Tribe trumps morality.

The truth is that any time people's affiliation becomes more important than their ethics, things are set up for this kind of systemic rot.  How many times have you heard the charge leveled against both of the major political parties in the United States that "you only care about someone breaking the law if (s)he's a member of the other party?"  When the voters -- when anyone, really -- puts more importance on whether a person has an (R) or a (D) after their name than whether they're ethical, honest, moral, or fair, it's only a matter of time before the worst people either side has to offer end up in charge.

We have to be willing to rise above our tribe.  Sure, it's risky.  Yes, it can be painful to realize that someone who belongs to your profession, religion, or political party isn't the pillar of society you thought they were.  But this is the only way to keep a check on some of the worst impulses humans have.  Because when people feel invulnerable -- when they know that no matter what they do, their brothers and sisters in the tribe will remain silent out of loyalty -- there are no brakes on behavior.

So to return to what began this: of course there are good cops.  I have several friends in law enforcement who are some of the kindest, most upstanding people I know.  But it's imperative that the good ones speak up against the ones who are committing some of the atrocities we've all seen on video in the last few days -- peaceful protestors exercising their constitutionally-guaranteed right to assembly being gassed, reporters being beaten and shot in the head with rubber bullets, police destroying a city-approved medics' table in Asheville, North Carolina, and in one particularly horrifying example, cops shooting a tear gas canister into the open window of a car stopped at a stoplight, and when the driver got out yelling that his pregnant wife was in the car, the cops opened fire on him.

If people know they can act with impunity, they will.  It's only when the tribe is willing to call its own members out on their transgressions -- when we are as loud in condemning illegal or immoral behavior in members of our own political party, religion, or profession as we are in condemning it in others -- that this sort of behavior will stop.

And that applies to the police spokespersons who are questioning their support of Joe Biden because he called for more oversight.  No one likes outside agencies monitoring their behavior.  I get that.  But until the police are more consistent about calling out their fellow officers who are guilty of unwarranted or excessive violence, there really is no other choice.

************************************

This week's Skeptophilia book recommendation is a fun one -- George Zaidan's Ingredients: The Strange Chemistry of What We Put In Us and On Us.  Springboarding off the loony recommendations that have been rampant in the last few years -- fad diets, alarmist warnings about everything from vaccines to sunscreen, the pros and cons of processed food, substances that seem to be good for us one week and bad for us the next -- Zaidan goes through the reality behind the hype, taking apart the claims in a way that is both factually accurate and laugh-out-loud funny.

And high time.  Bogus health claims, fueled by such sites as Natural News, are potentially dangerous.  Zaidan's book holds a lens up to the chemicals we ingest, inhale, and put on our skin -- and will help you sort the fact from the fiction.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Thursday, October 18, 2018

Statistical fudging

The last thing we need right now is for people to have another reason to lose their trust in scientists.

It's a crucial moment.  On the one hand, we have the Intergovernmental Panel on Climate Change, which just a week and a half ago released a study warning that we have only twenty or so years left in which to take action if we're to limit the warming to an average of 1.5-2.0 C by 2050 -- and even that much warming will almost certainly increase the number of major storms, shift patterns of rainfall, cause a drastic rise in sea level, and increase the number of deadly heat waves.  And it bears mention that a lot of climate scientists think that even this is underselling the point, giving politicians the sense that we can wait to take any action at all.  "It’s always five minutes to midnight, and that is highly problematic," said Oliver Geden, social scientist and visiting fellow at the Max Planck Institute for Meteorology in Hamburg, Germany.  "Policymakers get used to it, and they think there’s always a way out."

Then on the other hand we have our resident Stable Genius, Donald Trump, who claimed two days ago that he understands everything he needs to know about climate because he has "a natural instinct for science."  To bolster this claim, he made a statement that apparently sums up the grand total of his expertise in climatology, which is that "climate goes back and forth, back and forth."  He then added, "You have scientists on both sides of it.  My uncle was a great professor at MIT for many years, Dr. John Trump.  And I didn’t talk to him about this particular subject, but... I will say that you have scientists on both sides of the picture."

It bears mention that Dr. John Trump was an electrical engineer, not a climatologist.  And Donald Trump didn't even ask him for an opinion.

So we have scientists trying like hell to get the public to see that scientific results are reliable, and people like Trump and his cronies trying to portray them as engaging in no better than guesswork and speculation (and of having an agenda).  That's why I did a serious facepalm when I read the article sent to me a few days ago by a friend and frequent contributor to Skeptophilia, Andrew Butters, author and blogger over at Potato Chip Math (which you should all check out because it's awesome).

This article, which appeared over at CBC, comes from a different realm of science -- medical research.  It references a paper authored by Min Qi Wang, Alice F. Yan, and Ralph V. Katz that appeared in Annals of Internal Medicine, titled, "Researcher Requests for Inappropriate Analysis and Reporting: A U.S. Survey of Consulting Biostatisticians."

If the title isn't alarming enough by itself, take a look at what Wang et al. found:
Inappropriate analysis and reporting of biomedical research remain a problem despite advances in statistical methods and efforts to educate researchers...  [Among] 522 consulting biostatisticians... (t)he 4 most frequently reported inappropriate requests rated as “most severe” by at least 20% of the respondents were, in order of frequency, removing or altering some data records to better support the research hypothesis; interpreting the statistical findings on the basis of expectation, not actual results; not reporting the presence of key missing data that might bias the results; and ignoring violations of assumptions that would change results from positive to negative.  These requests were reported most often by younger biostatisticians.
The good news is that a lot of the biostatisticians reported refusing the requests to alter the data.  (Of course, given that this is self-reporting, you have to wonder how many would voluntarily say, "Yeah, I do that all the time.")
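To see why "removing or altering some data records to better support the research hypothesis" tops the severity list, here's a toy demonstration in Python -- fabricated numbers, not anything from the survey -- of how quietly trimming inconvenient data points can manufacture "significance" out of pure noise:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(7)

    # Two groups drawn from the SAME distribution: there is no real effect
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    print("honest analysis: p =", round(ttest_ind(a, b).pvalue, 3))

    # Motivated "cleanup": keep discarding b's lowest value until p < 0.05
    b = np.sort(b)
    while ttest_ind(a, b).pvalue > 0.05 and len(b) > 5:
        b = b[1:]  # quietly drop the most inconvenient record

    print(f"after dropping {30 - len(b)} records: p =",
          round(ttest_ind(a, b).pvalue, 3))

A perfectly null dataset becomes a publishable "effect" after a handful of deletions -- which is exactly why the biostatisticians rated this request as the most severe.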

"I feel like I've been asked to do quite a few of these at least once," said Andrew Althouse, biostatistician at the University of Pittsburgh.  "I do my best to stand my ground and I've never falsified data....  I was once pressured by a surgeon to provide data on 10-year survival rates after a particular surgical intervention.  The problem — the 10-year data didn't exist because the hospital hadn't been using the procedure long enough...  The surgeon argued with me that it was really important and pleaded with me to find some way to do this.  He eventually relented, but it was one of the most jarring examples I've experienced."

[Image is in the Public Domain]

McGill University bioethicist Jonathan Kimmelman is among those who are appalled by this finding.  "If statisticians are saying no, that's great," he said.  "But to me this is still a major concern...  Everyone has had papers that are turned down by journals because your results were not statistically significant.  Getting tenure, getting pay raises, all sorts of things depend on getting into those journals so there is really strong incentives for people to fudge or shape their findings in a way that it makes it more palatable for those journals.  And what that shows is that there are lots of instances where there is threat of adulteration of the evidence that we use."

It's not surprising that, being human, scientists are prone to the same foibles and pitfalls as the rest of us.  However, you'd think that if you go into science, it's because you have a powerful commitment to the truth.  As Kimmelman says, the stakes are high -- not only prestige, but grant money.  Still, one would hope ethics would win over expediency.

And this is a particularly pivotal moment, when we have an administration that is deeply in the pockets of the corporations, and has shown a complete disregard for scientific findings and the opinions of experts.  The last thing we need is to give them more ammunition for claiming that science is unreliable.

But it's still a good thing, really, that Wang et al. have done this study.  You can't fix a problem when you don't know anything about it.  (Which is a truism Trump could learn from.  "Climate goes back and forth, back and forth," my ass.)  It's to be hoped that this will lead to better oversight of statistical analysis and a more stringent criterion during peer review.  Re-establishing the public trust in scientists is absolutely critical.  Our lives, and the long-term habitability of the Earth, could depend on it.

 ***********************************

This week's Skeptophilia book recommendation is something everyone should read.  Jonathan Haidt is an ethicist who has been studying the connections between morality and politics for twenty-five years, and whose contribution to our understanding of our own motives is second to none.  In The Righteous Mind: Why Good People are Divided by Politics, he looks at what motivates liberals and conservatives -- and how good, moral people can look at the same issues and come to opposite conclusions.

His extraordinarily deft touch for asking us to reconsider our own ethical foundations, without either being overtly partisan or accepting truly immoral stances and behaviors, is a needed breath of fresh air in these fractious times.  He is somehow able to walk that line of evaluating our own behavior clearly and dispassionately, and holding a mirror up to some of our most deep-seated drives.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]




Saturday, June 16, 2018

Illuminating a prison

In the unit on ethics in my Critical Thinking class, we always discuss a variety of experiments that have been done to elucidate the origins, characteristics, and extent of human morality.  Among the ones we look at are:
  • Philippa Foot's famous "Trolley Problem" thought experiment (1967), where a person is presented with two scenarios, both of which result in one death to save five people -- but in one, the death is caused by an action with a mechanical intermediary (flipping a switch), while in the second, the death is caused by the person shoving someone off a bridge with their own hands.  The interesting result is that humans don't view these as equivalent -- having a mechanical intermediary greatly reduces the emotional charge of the situation, and makes people much more likely to do it, even though the outcomes are identical.
  • The "Milgram experiment," conducted in 1963 by Stanley Milgram, which looked at the likelihood of someone hurting another person if commanded to do so by an authority figure.  Turns out, most of us will...
  • The Zurich tribalism experiment, done in Switzerland in 2015, wherein we find test subjects are willing to inflict painful shocks on others without activating their own empathy centers -- if the person being shocked is wearing a soccer jersey of a team the test subject didn't like.
  • Karen Wynn's "baby lab" experiment (2014), which found that even very young babies have an innate perception of fairness and morality, and want helpful individuals rewarded and unhelpful individuals punished.
The last time I taught the class, I included a fifth experiment -- the notorious "Stanford prison experiment," done by Philip Zimbardo in 1971.  You've probably heard about this one; it involved 24 Stanford students who had all undergone personality screening to weed out anyone with a tendency toward sociopathy.  The 24 were split into two groups -- the "prisoners" and the "guards."  As Zimbardo recounted the outcome, the guards very quickly banded together and acted with cruelty and disdain toward the prisoners, and the prisoners responded by sabotaging whatever they could.  Several of the prisoners broke down completely, and the experiment had to be called off because some of the prisoners were obviously in such mental distress that it would have been inhumane to continue.

Sing Sing Prison, 1915 [Image is in the Public Domain]

Zimbardo became famous instantly, and his results were used to explain everything from people who'd been collaborators during the Holocaust to William Calley and his men perpetrating the My Lai Massacre.  When banding together against a perceived common enemy, Zimbardo said, we'll be much more likely to behave immorally -- especially when (as the Milgram experiment suggests) we're being ordered to behave that way by an authority.

There are two problems with this.

First, in 2001, psychologists Alex Haslam and Stephen Reicher tried to replicate Zimbardo's results, and found that it didn't work.  What they suggested was that the outcome of the Stanford prison experiment wasn't because the "guards" saw the "prisoners" as enemies, but because the guards were identifying with the experimenters -- in other words, their activities were being directed by an authority figure.  So the experiment boils down to a rehash of what Milgram did eight years earlier.

But there's a darker side to this, which I just found out about from an article in Medium by Ben Blum called "The Lifespan of a Lie."  In it, Blum makes a disturbing claim: that Zimbardo hadn't done what he said, which was to break the students into groups randomly and give them no instructions other than "guards control prisoners, prisoners obey guards" -- he had actually coached the guards to behave cruelly, and may even have encouraged one of the prisoners to go into hysterics.

The most famous breakdown, that of "prisoner" Doug Korpi, was dramatic -- he was locked in a closet by a guard, and proceeded to have a complete meltdown, screaming and crying and kicking the door.  The problem, Korpi says, is that it was all an act, and both he and Zimbardo knew it.  "Anybody who is a clinician would know that I was faking,” Korpi told Blum.  "If you listen to the tape, it’s not subtle.  I’m not that good at acting.  I mean, I think I do a fairly good job, but I’m more hysterical than psychotic."

At least some of the guards were acting as well.  One of the ones that had (according to Zimbardo) exhibited true cruelty toward the prisoners, Dave Eshelman, said his whole persona was a put-on.  "I took it as a kind of an improv exercise,” Eshelman told Blum.  "I believed that I was doing what the researchers wanted me to do, and I thought I’d do it better than anybody else by creating this despicable guard persona.  I’d never been to the South, but I used a southern accent, which I got from Cool Hand Luke."

Zimbardo, of course, denies all of this, and spoke to Blum briefly -- mostly to say that the experiment was fine, and the claims of fraud all nonsense.  Instead, he said that Haslam and Reicher's failed attempt at replication was "fraudulent," and the experiment itself valid.  "It’s the most famous study in the history of psychology at this point," Zimbardo told Blum.  "There’s no study that people talk about fifty years later.  Ordinary people know about it.  They say, ‘What do you do?’ ‘I’m a psychologist.’  It could be a cab driver in Budapest.  It could be a restaurant owner in Poland.  I mention I’m a psychologist, and they say, ‘Did you hear about the study?’  It’s got a life of its own now.  If he wants to say it was all a hoax, that’s up to him.  I’m not going to defend it anymore.  The defense is its longevity."

Which, of course, is not much of a defense.  Some really stupid ideas (I'm lookin' at you, homeopathy) have been around for ages.  I do find it rather upsetting, though, and not just because I've been teaching an experiment for years that turns out not to have gone down the way the researchers claimed.  It's a stain on science as a whole -- that we accepted the results of an experiment that failed replication, mostly because its outcome seemed so comforting.  People aren't inherently immoral; they act immorally when they're placed in situations where it's expected.  Alter situations, it implied, and people will rise to higher motives.

Well, maybe.  There are still a lot of questions about morality, and the other four experiments I teach have borne up to scrutiny.  We do harm more easily when we're one step removed from the person being harmed, when an authority figure tells us to, when the harmed person doesn't belong to our "tribe," and when the recipient of punishment is perceived to have deserved it.  But simply banding together, Lord of the Flies-style, to visit harm upon the helpless -- the evidence for that is far slimmer.

And I suppose the Zimbardo experiment will have to be transferred to a different lecture next year -- the one I do on examples of scientific fraud and researcher malfeasance.

******************************

This week's Skeptophilia book recommendation is a classic: the late Oliver Sacks's The Man Who Mistook His Wife for a Hat.  It's required reading for anyone who is interested in the inner workings of the human mind, and highlights how fragile our perceptual apparatus is -- and how even minor changes in our nervous systems can result in our interacting with the world in what appear from the outside to be completely bizarre ways.  Broken up into short vignettes about actual patients Sacks worked with, it's a quick and completely fascinating read.