Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, August 17, 2021

Reinforcing outrage

I got onto social media some years ago for two main reasons: to stay in touch with people I don't get to see frequently (which since the pandemic has been pretty much everyone), and to have a platform for marketing my books.

I'm the first to admit that I'm kind of awful at the latter.  I hate marketing myself, and even though I know I won't be successful as an author if no one ever hears about my work, it goes against the years of childhood training in such winning strategies as "don't talk about yourself" and "don't brag" and (my favorite) "no one wants to hear about that" (usually applied to whatever my current main interest was).

I'm still on Facebook, Twitter, and Instagram, although for me the last-mentioned seems to mostly involve pics of my dog being cute.  It strikes me on a daily basis, though, how quickly non-dog-pic social media can devolve into a morass of hatefulness -- Twitter seems especially bad in that regard -- and also that I have no clue how the algorithms work that decide for you what you should and should not look at.  It's baffling to me that someone will post a fascinating link or trenchant commentary and get two "likes" and one retweet, and then someone else will post a pic of their lunch and it'll get shared far and wide.

So I haven't learned how to game the system, either to promote my books or to get a thousand retweets of a pic of my own lunch.  Maybe my posts aren't angry enough.  At least that seems to be the recommendation of a study at Yale University that was published last week in Science Advances, which found that expressions of moral outrage on Twitter are more often rewarded by likes and retweets than emotionally neutral ones.

[Image licensed under the Creative Commons "Today Testing" (For derivative), Social Media Marketing Strategy, CC BY-SA 4.0]

Apparently, getting likes and retweets is the human equivalent of the bell ringing for Pavlov's dog.  When our posts are shared, it gives us incentive to post others like them.  And since political outrage gets responses, we tend to move in that direction over time.  Worse still, the effect is strongest for people who are political moderates, meaning the suspicion a lot of us have had for a while -- that social media feeds polarization -- looks like it's spot-on.

"Our studies find that people with politically moderate friends and followers are more sensitive to social feedback that reinforces their outrage expressions,” said Yale professor of psychology Molly Crockett, who co-authored the study.  "This suggests a mechanism for how moderate groups can become politically radicalized over time — the rewards of social media create positive feedback loops that exacerbate outrage...  Amplification of moral outrage is a clear consequence of social media’s business model, which optimizes for user engagement.  Given that moral outrage plays a crucial role in social and political change, we should be aware that tech companies, through the design of their platforms, have the ability to influence the success or failure of collective movements.  Our data show that social media platforms do not merely reflect what is happening in society.  Platforms create incentives that change how users react to political events over time."

Which is troubling, if not unexpected.  Social media companies may not just be passively encouraging polarization but deliberately exploiting our desire for approval; in doing so, they're not just recording the trends, they're actively influencing political outcomes.

It's scary how easily manipulated we are.  The catch-22 is that any attempt to rein in politically incendiary material on social media runs immediately afoul of the right to free speech; it took Facebook and Twitter ages to put the brakes on posts about the alleged danger of the COVID vaccines and the "Big Lie" claims of Donald Trump and his cronies that Joe Biden stole the election last November.  (A lot of those posts are still sneaking through, unfortunately.)  So if social media is feeding polarization with malice aforethought, the only reasonable response is to think twice about liking and sharing sketchy stuff -- and when in doubt, err on the side of not sharing it.

Either that, or exit social media entirely, something that several friends of mine have elected to do.  I'm reluctant -- there are people, especially on Facebook, who I'd probably lose touch with entirely without it -- but I don't spend much time on it, and (except for posting links to Skeptophilia every morning) hardly post at all.  What I do post is mostly intended for humor's sake; I avoid political stuff pretty much entirely.

So that's our discouraging, if unsurprising, research of the day.  It further reinforces my determination to spend as little time doomscrolling on Twitter as I can.  Not only do I not want to contribute to the nastiness, I don't need the reward of retweets pushing me any further into outrage.  I'm outraged enough as it is.

************************************

I was an undergraduate when the original Cosmos, with Carl Sagan, was launched, and being a physics major and an astronomy buff, I was absolutely transfixed.  My co-nerd buddies and I looked forward to each new episode and eagerly discussed it the following day between classes.  And one of the most famous lines from the show -- ask any Sagan devotee -- is, "If you want to make an apple pie from scratch, first you must invent the universe."

Sagan used this quip as a launching point into discussing the makeup of the universe on the atomic level, and where those atoms had come from -- some primordial, dating all the way back to the Big Bang (hydrogen and helium), and the rest forged in the interiors of stars.  (Giving rise to two of his other famous quotes: "We are made of star-stuff," and "We are a way for the universe to know itself.")

Since Sagan's tragic death in 1996 at the age of 62 from a rare blood cancer, astrophysics has continued to extend what we know about where everything comes from.  And now, experimental physicist Harry Cliff has put together that knowledge in a package accessible to the non-scientist, and titled it How to Make an Apple Pie from Scratch: In Search of the Recipe for our Universe, From the Origin of Atoms to the Big Bang.  It's a brilliant exposition of our latest understanding of the stuff that makes up apple pies, you, me, the planet, and the stars.  If you want to know where the atoms that form the universe originated, or just want to have your mind blown, this is the book for you.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Thursday, August 27, 2020

Rewarding the daredevil

There were three magic words that used to be able to induce me to do almost anything, regardless of how catastrophically stupid it was: "I dare you."

It's how I ended up walking the ridgeline of a friend's house when I was in eighth grade:
Friend: My house has such a steep roof.  I don't know how anyone could keep his balance up there.
Me:  I bet I could. 
Friend (dubiously):  You think? 
Me:  Yeah. 
Friend:  I dare you. 
Me:  Get me a ladder.
That I didn't break my neck was as much due to luck as skill, although it must be said that back then I did have a hell of a sense of balance, even if I didn't have much of any other kind of sense.

[Image licensed under the Creative Commons Øyvind Holmstad, A yellow house with a sheltering roof, CC BY-SA 3.0]

Research by neuroscientists Lei Zhang (University Medical Center Hamburg-Eppendorf) and Jan Gläscher (University of Vienna) has given us some insight into why I was prone to doing that sort of thing (beyond my parents' explanation, which boiled down to "you sure are an idiot").  Apparently the whole thing has to do with something called "reward prediction error" -- and they've identified the part of the brain where it occurs.

Reward prediction error occurs when there is a mismatch between the expected reward and the actual reward.  If the expected reward occurs, the prediction error is low, and you get some reinforcement via neurochemical release in the putamen and right temporoparietal junction, which form an important part of the brain's reward circuit.  A prediction error can go two ways: (1) the reward can be lower than the expectation, in which case you learn by adjusting your expectations downward; or (2) the reward can be higher than the expectation, in which case you get treated to a flood of endorphins.

Which explains my stupid roof-climbing behavior, and loads of other activities that begin with the words "hold my beer."  I wasn't nearly as fearless as I was acting; I fully expected to lose my balance and go tumbling down the roof.  When that didn't happen, and I came ambling back down the ladder afterward to the awed appreciation of my friend, I got a neurochemical bonus that nearly guaranteed that next time I heard "I dare you," I'd do the same thing again.
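
If you like seeing the mechanism spelled out, reward prediction error is usually modeled with a simple delta rule: the error is the actual reward minus the expected reward, and the expectation gets nudged toward the outcome by a learning rate.  Here's a minimal sketch in Python -- the numbers and the learning rate are my own illustrative choices, not values from the study:

# A toy delta-rule model of reward prediction error.  The learning rate and
# reward values are invented for illustration; they're not from the study.

def update_expectation(expected, actual, learning_rate=0.1):
    """Return the prediction error and the updated expectation."""
    prediction_error = actual - expected              # positive = better than expected
    new_expectation = expected + learning_rate * prediction_error
    return prediction_error, new_expectation

expected = 0.2   # I fully expected to tumble off the roof (low expected payoff)
actual = 1.0     # ...but I made it, to my friend's awe (big actual payoff)

error, expected = update_expectation(expected, actual)
print(f"prediction error = {error:+.2f}, new expectation = {expected:.2f}")
# A big positive prediction error is the endorphin-flood scenario: next time
# someone says "I dare you," the expectation (and the temptation) is higher.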

The structure of the researchers' experiment was interesting.  Here's how it was described in a press release in EurekAlert:
[The] researchers... placed groups of five volunteers in the same computer-based decision-making experiment, where each of them was presented with two abstract symbols.  Their objective was to find out which symbol would lead to more monetary rewards in the long run.  In each round of the experiment, every person first made a choice between the two symbols, and then they observed which symbols the other four people had selected; next, every person could decide to stick with their initial choice or switch to the alternative symbol.  Finally, a monetary outcome, either a win or a loss, was delivered to every one according to their second decision...  In fact, which symbol was related to more reward was always changing.  At the beginning of the experiment, one of the two symbols returned monetary rewards 70% of the time, and after a few rounds, it provided rewards only 30% of the time.  These changes took place multiple times throughout the experiment...  Expectedly, the volunteers switched more often when they were confronted with opposing choices from the others, but interestingly, the second choice (after considering social information) reflected the reward structure better than the first choice.
So social learning -- making your decisions according to your friends' behaviors and expectations -- is actually not a bad strategy.  "Direct learning is efficient in stable situations," said study co-author Jan Gläscher, "and when situations are changing and uncertain, social learning may play an important role together with direct learning to adapt to novel situations, such as deciding on the lunch menu at a new company."

Or deciding whether or not it's worth it to climb the roof of a friend's house.
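
To make the structure of that task a little more concrete, here's a toy simulation of a two-option reversal bandit in which the reward probabilities flip every so often, just as in the experiment.  This is my own simplified sketch, not the researchers' task code, and it leaves out the social-information step entirely (the simulated player learns only from its own outcomes):

# A toy reversal-learning bandit: two options whose reward probabilities
# (70% vs. 30%) swap periodically.  My own simplified sketch, not the
# researchers' code; the social-feedback stage is omitted.

import random

def run_task(n_rounds=100, swap_every=20, learning_rate=0.2, seed=1):
    random.seed(seed)
    p_reward = [0.7, 0.3]      # option 0 starts out as the "good" symbol
    value = [0.5, 0.5]         # the learner's running estimate of each option
    rewarded = 0
    for t in range(n_rounds):
        if t > 0 and t % swap_every == 0:
            p_reward.reverse()                        # the good symbol silently changes
        choice = 0 if value[0] >= value[1] else 1     # pick the currently better-looking option
        reward = 1 if random.random() < p_reward[choice] else 0
        value[choice] += learning_rate * (reward - value[choice])   # same delta rule as above
        rewarded += reward
    return rewarded / n_rounds

print(f"fraction of rewarded rounds: {run_task():.2f}")

A solitary learner like this does fine while things are stable but stumbles for a few rounds after each reversal -- which is exactly the uncertain situation where, according to the study, paying attention to what everyone else is choosing starts to pay off.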

We're social primates, so it's no surprise we rely a great deal on the members of our tribe for information about what we should and should not do.  This works well when we're looking to older and wiser individuals, and not so well when the other members of our tribe are just as dumb as we are.  (This latter bit explains a lot of the behavior we're currently seeing in the United States Senate.)  But our brains are built that way, for better or for worse.

Although for what it's worth, I no longer do ridiculous stunts when someone says "I dare you."  So if you were planning on trying it, don't get your hopes up.

*********************************

This week's Skeptophilia book recommendation is a brilliant retrospective of how we've come to our understanding of one of the fastest-moving scientific fields: genetics.

In Siddhartha Mukherjee's wonderful book The Gene: An Intimate History, we're taken from the first bit of research that suggested how inheritance took place -- Gregor Mendel's famous study of pea plants, which established a "unit of heredity" (he called them "factors" rather than "genes" or "alleles," but he got the basic idea spot on).  From there, he looks at how our understanding of heredity was refined: from how DNA was identified as the chemical that houses genetic information, to how that information is encoded and translated, to cutting-edge research in gene-modification techniques like CRISPR-Cas9.  At each step, he paints a very human picture of researchers striving to understand, many of them with inadequate tools and resources, finally leading up to today's fine-grained picture of how heredity works.

It's wonderful reading for anyone interested in genetics and the history of science.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Wednesday, July 1, 2020

Siesta time

I'm a morning person.

I know this is pretty unusual.  I also know from first-hand experience that night owls tend to hate us morning people, who are up with the sun and at least reasonably coherent by six a.m., if not always showered and fully dressed.  (Hell, I'm retired.  Fully dressed sometimes doesn't happen at all, especially when the weather is warm.)

The result, though, is that I fade out pretty early in the evening.  I'm one of those people who, when invited to a party, seriously consider saying no if the start time is after seven in the evening.  By eight I want to be reading a book, and the times I'm still awake at ten are few and far between.

But the lowest time for me, energy-wise, is right after lunch.  Even when I get adequate sleep, I go through a serious slump in the early afternoon, even if I was chipper beforehand.  (Okay, given my personality, I'm never really chipper.  I also don't do "perky" or "bubbly."  So think about it as "chipper as compared to my baseline demeanor.")

Turns out, I'm not alone in finding the early afternoon a tough time to be productive, or even to stay awake.  As I learned from a paper in The Journal of Neuroscience, the problem is a fluctuation in the brain's reward circuit -- which, like so much else in human physiology, runs on a circadian rhythm that affects its function in a regular and predictable fashion.

The problem is a misalignment between the putamen (part of the brain's reward circuit) and the suprachiasmatic nucleus, which acts as a biological clock.  The putamen is most active when you receive a reward you weren't expecting, and least active when you expect a reward and don't get one.  The cycling of the suprachiasmatic nucleus primes the putamen to expect a reward after lunch -- and then the reward never comes, because one in the afternoon is nowhere near quitting time or happy hour, and most people's schedules don't accommodate an early afternoon nap.

The result: sad putamen.  Drop in motivation levels.

"The data suggest that the brain’s reward centres might be primed to expect rewards in the early afternoon, and be ‘surprised’ when they appear at the start and end of the day," said neuroscientist Jamie Byrne of Swinburne University.  "[The] brain is ‘expecting’ rewards at some times of day more than others, because it is adaptively primed by the body clock."

Me, I wonder why this priming happens at all.  What sort of reward did we receive in the early afternoon in our evolutionary history that led to this response becoming so common?  Honestly, I wonder if it was napping; an afternoon nap has been found not only to improve cognitive function but also (contrary to popular opinion) generally not to interfere with sleeping at night.  Since we evolved on the African savanna, where the early afternoon can be miserably hot, it could be that we're built to snooze in the shade after lunch, and now that most of us are on an eight-to-five work schedule, we can't get away with it any more.  But the circadian rhythm we evolved is still there, and our energy levels plummet after lunch.

[Image licensed under the Creative Commons Jamain, Sleeping man J1, CC BY-SA 3.0]

It reminds me of the three weeks I spent in Spain and Portugal a few years ago.  I was astonished at first by the fact that no one ate dinner -- even considered eating dinner -- until nine in the evening.  (On one of our first days there, we went to a restaurant at about eight, and asked the waiter if we could be seated at a table.  His response was, "Why?"  I think he was genuinely puzzled as to why anyone might want dinner at such a ridiculously early hour.)  But once we got the hang of it -- a big lunch with a bottle of fine red wine, then a three-hour siesta during the hottest part of the day, when businesses close their doors so there's nothing much to do but sleep anyhow -- even I was able to stay up late with no problem.

All in all, a very pleasant lifestyle, I thought.

So we now know there is a neurological reason for the early-afternoon energy slump.  Kind of a fascinating thing how much we're at the mercy of our biological clock.  But anyhow, I better get busy and get some chores done.  Time's a-wasting, and I'm guessing by lunchtime I won't be feeling like doing much but hitting the hammock and conking out for a while.

************************

This week's Skeptophilia book recommendation is pure fun, and a great gift for any of your friends who are cryptid fanciers: Graham Roumieu's hilarious Me Write Book: It Bigfoot Memoir.

In this short but hysterically funny book, we find out from the Big Guy's own mouth how hard it is to have the reputation for being huge, hairy, and bad-smelling.  Okay, even he admits he doesn't smell great, but it's not his fault, as showers aren't common out in the wilderness.  And think about the effect this has on his self-image, not to mention his success rate of advertising in the "Personals" section of the newspaper.

So read this first-person account of the struggles of this hirsute Everyman, and maybe even next time you're out hiking, bring along a little something for our distant australopithecine cousin.

He's very fond of peach schnapps.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Friday, April 3, 2020

The risk of knowing

One of the hallmarks of the human condition is curiosity.  We spend a lot of our early years learning by exploring, by trial-and-error, so it makes sense that curiosity should be built into our brains.

Still, it comes at a cost.  "Curiosity killed the cat" isn't a cliché for nothing.  The number of deaths in horror movies alone from someone saying, "I hear a noise in that abandoned house, I think I'll go investigate" is staggering.  People will take amazing risks out of nothing but sheer inquisitiveness -- so the gain in knowledge must be worth the cost.

[Image is in the Public Domain]

The funny thing is that we'll pay the cost even when what we gain isn't worth anything.  This was demonstrated by a clever experiment described in a paper by Johnny King Lau and Kou Murayama (both of the University of Reading, U.K.), Hiroko Ozono (of Kagoshima University), and Asuka Komiya (of Hiroshima University) that came out two days ago.  The paper, entitled "Shared Striatal Activity in Decisions to Satisfy Curiosity and Hunger at the Risk of Electric Shocks," describes a set of experiments showing that humans will risk a painful shock to find out entirely useless information (in this case, how a card trick was performed).  The cleverest part of the experiments, though, is that the researchers told test subjects ahead of time how much of a chance there was of being shocked -- so the subjects had a chance to decide, "how much is this information worth?"

What they found was that even when told there was a higher than 50% chance of being shocked, most subjects were still curious enough to take the risk.  The authors write:
Curiosity is often portrayed as a desirable feature of human faculty.  However, curiosity may come at a cost that sometimes puts people in harmful situations.  Here, using a set of behavioural and neuroimaging experiments with stimuli that strongly trigger curiosity (for example, magic tricks), we examine the psychological and neural mechanisms underlying the motivational effect of curiosity.  We consistently demonstrate that across different samples, people are indeed willing to gamble, subjecting themselves to electric shocks to satisfy their curiosity for trivial knowledge that carries no apparent instrumental value.
The researchers added another neat twist -- they used neuroimaging techniques to see what was going on in the curiosity-driven brain, and they found a fascinating overlap with another major driver of human behavior:
[T]his influence of curiosity shares common neural mechanisms with that of hunger for food.  In particular, we show that acceptance (compared to rejection) of curiosity-driven or incentive-driven gambles is accompanied by enhanced activity in the ventral striatum when curiosity or hunger was elicited, which extends into the dorsal striatum when participants made a decision.
So curiosity, then, is -- in nearly a literal sense -- a hunger.  The satisfaction we feel at taking a big bite of our favorite food when we're really hungry causes the same reaction in the brain as having our curiosity satisfied.  And like hunger, we're willing to take significant risks to satisfy our curiosity.  Even if -- to reiterate -- the person in question knows ahead of time that the information they're curious about is technically useless.
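
If you want to put the gamble in back-of-the-envelope terms, you can frame it as a simple expected-value comparison: accept the shock gamble whenever the subjective value of knowing outweighs the probability of a shock times how much you mind it.  This framing, and every number in it, is my own illustration, not the model used in the paper:

# A back-of-the-envelope expected-value framing of the "is finding out how
# the card trick works worth a possible shock?" decision.  Entirely my own
# illustration; the values below are invented, not estimates from the paper.

def take_gamble(curiosity_value, p_shock, shock_cost):
    """Accept the gamble if the value of knowing beats the expected pain."""
    return curiosity_value > p_shock * shock_cost

# With numbers like these, even a 60% chance of a shock doesn't deter a
# sufficiently curious subject:
print(take_gamble(curiosity_value=0.8, p_shock=0.6, shock_cost=1.0))   # True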

I can definitely relate to this.  In me, it mostly takes the form of wasting inordinate amounts of time going down a rabbit hole online because some weird question came my way.  The result is that my brain is completely cluttered up with worthless trivia.  For example, I can tell you the scientific name of the bird you're looking at, or why microbursts are common in the American Midwest, or the etymology of the word "juggernaut" -- but I went to the grocery store yesterday to buy three things and came back with only two of them.  (And I didn't realize I'd forgotten a third of the grocery order until I walked into the kitchen and started putting away what I'd bought.)

Our curiosity is definitely a double-edged sword.  I'm honestly fine with it, because often, knowing something is all the reward I need.  As physicist Richard Feynman put it, "The chief prize (of science) is the pleasure of finding things out."

So I suspect I'd have been one of the folks taking a high risk of getting shocked to see how the card trick was performed.  Don't forget that the corollary to the quote we started with -- "Curiosity killed the cat" -- is "...but satisfaction brought him back."

*******************************

In the midst of a pandemic, it's easy to fall into one of two errors -- to lose focus on the other problems we're facing, and to decide it's all hopeless and give up.  Both are dangerous mistakes.  We have a great many issues to deal with besides stemming the spread and impact of COVID-19, but humanity will weather this and the other hurdles we have ahead.  This is no time for pessimism, much less nihilism.

That's one of the main messages of Yuval Noah Harari's recent book 21 Lessons for the 21st Century.  He takes a good hard look at some of our biggest concerns -- terrorism, climate change, privacy, homelessness and poverty, even the development of artificial intelligence and how that might impact our lives -- and while he's not such a Pollyanna that he proposes instant solutions for any of them, he looks at how each might be managed, both in terms of combatting the problem itself and changing our own posture toward it.

It's a fascinating book, and worth reading to brace us up against the naysayers who would have you believe it's all hopeless.  While I don't think anyone would call Harari's book a panacea, at least it's the start of a discussion we should be having at all levels, not only in our personal lives, but in the highest offices of government.