Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, August 17, 2021

Reinforcing outrage

I got onto social media some years ago for two main reasons: to stay in touch with people I don't get to see frequently (which since the pandemic has been pretty much everyone), and to have a platform for marketing my books.

I'm the first to admit that I'm kind of awful at the latter.  I hate marketing myself, and even though I know I won't be successful as an author if no one ever hears about my work, it goes against the years of childhood training in such winning strategies as "don't talk about yourself" and "don't brag" and (my favorite) "no one wants to hear about that" (usually applied to whatever my current main interest was).

I'm still on Facebook, Twitter, and Instagram, although for me the last-mentioned seems to mostly involve pics of my dog being cute.  It strikes me on a daily basis, though, how quickly non-dog-pic social media can devolve into a morass of hatefulness -- Twitter seems especially bad in that regard -- and also that I have no clue how the algorithms work that decide for you what you should and should not look at.  It's baffling to me that someone will post a fascinating link or trenchant commentary and get two "likes" and one retweet, and then someone else will post a pic of their lunch and it'll get shared far and wide.

So I haven't learned how to game the system, either to promote my books or to get a thousand retweets of a pic of my own lunch.  Maybe my posts aren't angry enough.  At least that seems to be the recommendation of a study at Yale University that was published last week in Science Advances, which found that expressions of moral outrage on Twitter are more often rewarded by likes and retweets than emotionally neutral ones.

[Image licensed under the Creative Commons "Today Testing" (For derivative), Social Media Marketing Strategy, CC BY-SA 4.0]

Apparently, getting likes and retweets is the human equivalent of the bell ringing for Pavlov's dog.  When our posts are shared, it gives us incentive to post others like them.  And since political outrage gets responses, we tend to move in that direction over time.  Worse still, the effect is strongest for people who are political moderates, meaning the suspicion a lot of us have had for a while -- that social media feeds polarization -- looks like it's spot-on.

"Our studies find that people with politically moderate friends and followers are more sensitive to social feedback that reinforces their outrage expressions,” said Yale professor of psychology Molly Crockett, who co-authored the study.  "This suggests a mechanism for how moderate groups can become politically radicalized over time — the rewards of social media create positive feedback loops that exacerbate outrage...  Amplification of moral outrage is a clear consequence of social media’s business model, which optimizes for user engagement.  Given that moral outrage plays a crucial role in social and political change, we should be aware that tech companies, through the design of their platforms, have the ability to influence the success or failure of collective movements.  Our data show that social media platforms do not merely reflect what is happening in society.  Platforms create incentives that change how users react to political events over time."

Which is troubling, if not unexpected.  Social media may not just be passively encouraging polarization, but deliberately exploiting our desire for approval.  In doing so, they are not just recording the trends, but actively influencing political outcomes.
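To get a feel for how fast that kind of reward loop can compound, here's a toy simulation -- emphatically not from the study, with every number invented purely for illustration -- of a poster whose propensity for outraged posts creeps upward each time outrage is rewarded with likes:

import random

# Toy model of the feedback loop described above (all parameters invented).
# Outraged posts draw more "likes" on average, and every like nudges the
# poster's future propensity for outrage a little higher.

random.seed(42)

p_outrage = 0.10      # starting probability that a given post is outraged (assumed)
reward_step = 0.01    # how strongly each burst of likes shifts behavior (assumed)

for day in range(365):
    outraged = random.random() < p_outrage
    mean_likes = 40 if outraged else 10    # outrage gets more engagement (assumed)
    likes = max(0, random.gauss(mean_likes, 5))
    if outraged:
        # positive feedback: reward for outrage makes future outrage more likely
        p_outrage = min(1.0, p_outrage + reward_step * likes / mean_likes)

print(f"Propensity for outraged posts after a year: {p_outrage:.2f}")

Run it a few times and the propensity climbs toward saturation well before the year is out, even though each individual nudge is tiny -- which is the whole point of a positive feedback loop.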

It's scary how easily manipulated we are.  The catch-22 is that any attempt to rein in politically incendiary material on social media runs immediately afoul of the right to free speech; it took Facebook and Twitter ages to put the brakes on posts about the alleged danger of the COVID vaccines and the "Big Lie" claims of Donald Trump and his cronies that Joe Biden stole the election last November.  (A lot of those posts are still sneaking through, unfortunately.)  So if social media is feeding polarization with malice aforethought, the only reasonable response is to think twice about liking and sharing sketchy stuff -- and when in doubt, err on the side of not sharing it.

Either that, or exit social media entirely, something that several friends of mine have elected to do.  I'm reluctant -- there are people, especially on Facebook, who I'd probably lose touch with entirely without it -- but I don't spend much time on it, and (except for posting links to Skeptophilia every morning) hardly post at all.  What I do post is mostly intended for humor's sake; I avoid political stuff pretty much entirely.

So that's our discouraging, if unsurprising, research of the day.  It further reinforces my determination to spend as little time doomscrolling on Twitter as I can.  Not only do I not want to contribute to the nastiness, I don't need the reward of retweets pushing me any further into outrage.  I'm outraged enough as it is.

************************************

I was an undergraduate when the original Cosmos, with Carl Sagan, was launched, and being a physics major and an astronomy buff, I was absolutely transfixed.  My co-nerd buddies and I looked forward to the new episode each week and eagerly discussed it the following day between classes.  And one of the most famous lines from the show -- ask any Sagan devotee -- is, "If you want to make an apple pie from scratch, first you must invent the universe."

Sagan used this quip as a launching point into discussing the makeup of the universe on the atomic level, and where those atoms had come from -- some primordial, all the way to the Big Bang (hydrogen and helium), and the rest formed in the interiors of stars.  (Giving rise to two of his other famous quotes: "We are made of star-stuff," and "We are a way for the universe to know itself.")

Since Sagan's tragic death in 1996 at the age of 62 from a rare blood cancer, astrophysics has continued to extend what we know about where everything comes from.  And now, experimental physicist Harry Cliff has put together that knowledge in a package accessible to the non-scientist, and titled it How to Make an Apple Pie from Scratch: In Search of the Recipe for our Universe, From the Origin of Atoms to the Big Bang.  It's a brilliant exposition of our latest understanding of the stuff that makes up apple pies, you, me, the planet, and the stars.  If you want to know where the atoms that form the universe originated, or just want to have your mind blown, this is the book for you.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Tuesday, January 31, 2017

Tell me what you like

I always wince a little when I see those silly things pop up on Facebook that say things like, "Can you see the number in the pattern?  Only geniuses can!  Click 'like' if you see it, then share."  And, "Are you one of the 5% of people who can think of a city starting with the letter E?  Reply with your answers!"

I'm certainly no expert in online data analysis, but those seem to me to be obvious attempts to get people to click or respond for some purpose other than the (stupid) stated one.  People still share these things all over the place, much to my perplexity.

What I didn't realize is how deep this particular rabbit hole can go -- at least, not until I read an article that came out last week in Motherboard called "The Data That Turned the World Upside Down," by Hannes Grassegger and Mikael Krogerus, which illustrates a far darker reason for worry regarding where we place our online clicks.

The article describes the science of psychometrics -- using patterns of responses to predict personalities, behaviors, even things like religious affiliation and membership in political parties.  Psychometric analysis used to rely on test subjects filling out lengthy questionnaires, and even then it wasn't very accurate.

But a psychologist named Michal Kosinski found a better way to do it, using data we didn't even know we were providing -- using patterns of "likes" and "shares" on Facebook.


Kosinski had discovered something groundbreaking -- that although one person's "likes" on Facebook don't tell you very much, when you look at aggregate data from millions of people, you can use what people click "like" on to make startlingly accurate predictions about who they are and what they do.  Grassegger and Krogerus write:
Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan.  Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts.  While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
By 2012, Kosinski and his team had refined their model so well that it could predict race (95% accuracy), sexual orientation (88% accuracy), political party (85% accuracy), and hundreds of other metrics, up to and including whether or not your parents were divorced.  (I wrote about some of Kosinski's early results in a post back in 2013.)

The precision was frightening, and the more data they had access to, the better it got.  A study of Kosinski's algorithm showed that ten "likes" were sufficient to allow the model to know a person better than an average work colleague; seventy, and it exceeded what a person's friends knew; 150, what their parents knew; and 300, what their partner knew.  Studies showed that targeting advertisements on Facebook based on psychometric data resulted in 63% more clicks than did non-targeted ads.
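The arithmetic behind that kind of aggregation is less mysterious than it sounds.  Here's a minimal sketch -- not Kosinski's actual model, and the per-like signal strength is an invented number -- of how hundreds of individually weak clues combine into a confident prediction once you add up their effect on the odds:

import math

# Hypothetical illustration: each "like" contributes a small, assumed amount
# to the log-odds of some trait.  No single like is informative on its own,
# but the combined prediction sharpens rapidly as likes accumulate.

def probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

weak_signal = 0.15   # assumed log-odds contribution of one "like"

for n_likes in (1, 10, 70, 150, 300):
    p = probability(n_likes * weak_signal)
    print(f"{n_likes:4d} weakly informative likes -> predicted probability {p:.3f}")

With these made-up numbers, a single like barely moves the prediction past a coin flip, while a few hundred push it to near certainty -- the same general pattern as the ten / seventy / 150 / 300 thresholds in the study.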

So it was only a matter of time before the politicians got wind of this.  Because not only can your data be used to predict your personality, the overall data can be used to identify people with a particular set of traits -- such as undecided voters.

Enter Alexander Nix, CEO of Cambridge Analytica, an online data analysis firm and one of the big guns with respect to both the recent U.S. election and the Brexit vote.  Nix started using Kosinski's algorithm to target individuals for political advertising.

"Only 18 months ago, Senator Cruz was one of the less popular candidates," Nix said in a speech political analysts in June 2016.  "Less than 40 percent of the population had heard of him...  So how did he do this?  A really ridiculous idea.  The idea that all women should receive the same message because of their gender—or all African Americans because of their race."

Nix went on to explain that through psychometrics, political candidates can create laser-focused appeals to specific people.  The approach became "different messages for different voters," and Donald Trump's team embraced the model with enthusiasm.  Grassegger and Krogerus write:
On the day of the third presidential debate between Trump and Clinton, Trump’s team tested 175,000 different ad variations for his arguments, in order to find the right versions above all via Facebook.  The messages differed for the most part only in microscopic details, in order to target the recipients in the optimal psychological way: different headings, colors, captions, with a photo or video...  In the Miami district of Little Haiti, for instance, Trump’s campaign provided inhabitants with news about the failure of the Clinton Foundation following the earthquake in Haiti, in order to keep them from voting for Hillary Clinton.  This was one of the goals: to keep potential Clinton voters (which include wavering left-wingers, African-Americans, and young women) away from the ballot box, to “suppress” their vote, as one senior campaign official told Bloomberg in the weeks before the election.  These “dark posts”—sponsored news-feed-style ads in Facebook timelines that can only be seen by users with specific profiles—included videos aimed at African-Americans in which Hillary Clinton refers to black men as predators, for example.
All in all, the Trump campaign paid between $5 and $15 million to Cambridge Analytica for their services -- the total amount is disputed.
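Mechanically, the targeting step is straightforward once the psychometric predictions exist: a predicted profile decides which ad variant a given user is shown.  The sketch below is purely hypothetical -- the profile fields, thresholds, and ad labels are invented, not the campaign's actual categories -- but it shows the shape of "different messages for different voters":

from dataclasses import dataclass

# Purely hypothetical profile fields and ad variants, for illustration only.

@dataclass
class Profile:
    neuroticism: float       # predicted from "likes", 0..1
    openness: float          # predicted from "likes", 0..1
    likely_supporter: bool   # predicted political lean

def pick_ad_variant(profile: Profile) -> str:
    if not profile.likely_supporter and profile.neuroticism > 0.7:
        return "discouragement-framed dark post"   # shown only to users matching this profile
    if profile.openness > 0.6:
        return "upbeat policy pitch"
    return "reassurance-themed ad"

print(pick_ad_variant(Profile(neuroticism=0.8, openness=0.3, likely_supporter=False)))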

Of course, it's impossible to know how much this swayed the results of the election, but given the amount of money Trump and others have spent to use this algorithm, it's hard to imagine that it had no effect.

All of which is not to say that you shouldn't "like" anything on Facebook.  Honestly, I'm unconcerned about what Alexander Nix might make of the fact that I like Linkin Park, H. P. Lovecraft, and various pages about running, scuba diving, and birdwatching.  It's more that we should be aware that the ads we're seeing -- especially about important things like political races -- are almost certainly not random any more.  They are crafted to appeal to our personalities, interests, and biases, using the data we've inadvertently provided, meaning that if we're not cognizant of how to view them, we're very likely to fall for their manipulation.