Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, January 31, 2017

Tell me what you like

I always wince a little when I see those silly things pop up on Facebook that say things like, "Can you see the number in the pattern?  Only geniuses can!  Click 'like' if you see it, then share."  And, "Are you one of the 5% of people who can think of a city starting with the letter E?  Reply with your answers!"

I'm certainly no expert in online data analysis, but those seem to me to be obvious attempts to get people to click or respond for some purpose other than the (stupid) stated one.  People still share these things all over the place, much to my perplexity.

What I didn't realize is how deep this particular rabbit hole can go.  Until I read an article that came out last week in Motherboard called "The Data That Turned the World Upside Down," by Hannes Grassegger and Mikael Krogerus, which illustrates a far darker reason for worry about where we place our online clicks.

The article describes the science of psychometrics -- using patterns of responses to predict personalities, behaviors, even things like religious affiliation and membership in political parties.  Psychometric analysis used to rely on test subjects filling out lengthy questionnaires, and even then it wasn't very accurate.

But a psychologist named Michal Kosinski found a better way to do it, using data we didn't even know we were providing -- using patterns of "likes" and "shares" on Facebook.


Kosinski had discovered something groundbreaking -- that although one person's "likes" on Facebook don't tell you very much, when you look at aggregate data from millions of people, you can use what people click "like" on to make startlingly accurate predictions about who they are and what they do.  Grassegger and Krogerus write:
Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan.  Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts.  While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
By 2012, Kosinski and his team had refined their model so well that it could predict race (95% accuracy), sexual orientation (88% accuracy), political party (85% accuracy), and hundreds of other metrics, up to and including whether or not your parents were divorced.  (I wrote about some of Kosinski's early results in a post back in 2013.)

The precision was frightening, and the more data they had access to, the better it got.  A study of Kosinski's algorithm showed that ten "likes" were sufficient to allow the model to know a person better than an average work colleague; seventy, and it exceeded what a person's friends knew; 150, what their parents knew; and 300, what their partner knew.  Studies showed that targeting advertisements on Facebook based on psychometric data resulted in 63% more clicks than did non-targeted ads.
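The arithmetic behind this aggregation effect is easy to demonstrate.  Here's a toy simulation (a sketch of the general statistical principle only, not Kosinski's actual model -- the 55% figure and signal counts are invented for illustration): each "like" is treated as a weak binary signal that matches a person's hidden trait slightly more often than chance, and the trait is then guessed by majority vote.

```python
# Toy demonstration: many weak signals, aggregated, yield accurate predictions.
# NOT Kosinski's model -- just the underlying statistical principle.
import random

random.seed(42)

def simulate_accuracy(n_signals, n_people=2000, p_agree=0.55):
    """Fraction of simulated people whose hidden binary trait a
    majority vote over n_signals weak signals recovers correctly."""
    correct = 0
    for _ in range(n_people):
        trait = random.random() < 0.5  # hidden trait (e.g., "is an extrovert")
        votes_true = 0
        for _ in range(n_signals):
            # each signal agrees with the true trait with probability p_agree
            agrees = random.random() < p_agree
            votes_true += trait if agrees else not trait
        predicted = votes_true > n_signals / 2  # majority vote
        correct += predicted == trait
    return correct / n_people

# odd signal counts avoid ties in the majority vote
for n in (1, 11, 71, 301):
    print(f"{n:>3} weak signals -> accuracy {simulate_accuracy(n):.2f}")
```

This is essentially Condorcet's jury theorem at work: as long as each signal is even slightly better than a coin flip, stacking enough of them drives the majority vote's accuracy toward certainty -- which is why a few hundred "likes" can outperform the judgment of someone who knows you well.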

So it was only a matter of time before the politicians got wind of this.  Because not only can your data be used to predict your personality, the overall data can be used to identify people with a particular set of traits -- such as undecided voters.

Enter Alexander Nix, CEO of Cambridge Analytica, an online data analysis firm, and one of the big guns with respect to both the recent U.S. election and the Brexit vote.  Because Nix started using Kosinski's algorithm to target individuals for political advertising.

"Only 18 months ago, Senator Cruz was one of the less popular candidates," Nix said in a speech to political analysts in June 2016.  "Less than 40 percent of the population had heard of him...  So how did he do this?  A really ridiculous idea.  The idea that all women should receive the same message because of their gender—or all African Americans because of their race."

Nix went on to explain that through psychometrics, political candidates can create laser-focused appeals to specific people.  The approach became "different messages for different voters," and Donald Trump's team embraced the model with enthusiasm.  Grassegger and Krogerus write:
On the day of the third presidential debate between Trump and Clinton, Trump’s team tested 175,000 different ad variations for his arguments, in order to find the right versions above all via Facebook.  The messages differed for the most part only in microscopic details, in order to target the recipients in the optimal psychological way: different headings, colors, captions, with a photo or video...  In the Miami district of Little Haiti, for instance, Trump’s campaign provided inhabitants with news about the failure of the Clinton Foundation following the earthquake in Haiti, in order to keep them from voting for Hillary Clinton.  This was one of the goals: to keep potential Clinton voters (which include wavering left-wingers, African-Americans, and young women) away from the ballot box, to “suppress” their vote, as one senior campaign official told Bloomberg in the weeks before the election.  These “dark posts”—sponsored news-feed-style ads in Facebook timelines that can only be seen by users with specific profiles—included videos aimed at African-Americans in which Hillary Clinton refers to black men as predators, for example.
All in all, the Trump campaign paid Cambridge Analytica between $5 million and $15 million for its services -- the exact total is disputed.

Of course, it's impossible to know how much this swayed the results of the election, but given the amount of money Trump and others have spent to use this algorithm, it's hard to imagine that it had no effect.

All of which is not to say that you shouldn't "like" anything on Facebook.  Honestly, I'm unconcerned about what Alexander Nix might make of the fact that I like Linkin Park, H. P. Lovecraft, and various pages about running, scuba diving, and birdwatching.  It's more that we should be aware that the ads we're seeing -- especially about important things like political races -- are almost certainly no longer random.  They are crafted to appeal to our personalities, interests, and biases, using the data we've inadvertently provided.  If we're not cognizant of how they work, we're very likely to fall for their manipulation.

Monday, January 30, 2017

Disbelieving your own eyes

In May of 2015, the brilliant and acerbic Andy Borowitz wrote a piece for The New Yorker entitled "Earth Endangered by New Strain of Fact-Resistant Humans."  Borowitz wrote:
The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them. 
“These humans appear to have all the faculties necessary to receive and process information,” Davis Logsdon, one of the scientists who contributed to the study, said.  “And yet, somehow, they have developed defenses that, for all intents and purposes, have rendered those faculties totally inactive.” 
More worryingly, Logsdon said, “As facts have multiplied, their defenses against those facts have only grown more powerful.”
I wonder if Borowitz realizes how literally accurate his satirical piece is.  Because Brian Schaffner, professor of political science at the University of Massachusetts, has just published research showing something that, even given humanity's fact-resistance, is kind of mind-blowing.

In the first, and less surprising, part of the research, Schaffner showed the now-famous aerial photographs from Obama's and Trump's inaugurations to 1,388 people, and asked them which was which.  Unsurprisingly, given the claims by Sean Spicer, Kellyanne Conway, and others, a significant percentage of Trump voters thought the Obama photograph (clearly showing more people) was Trump's, and vice versa.

Obama's inauguration [image courtesy of photographer Senior Master Sergeant Thomas Meneguin, U. S. Air Force, and the Wikimedia Commons]

Of course, all that shows is that people believed what Spicer said, and/or that the photograph itself had been misrepresented in the press.  So far, nothing too shocking.  But the amazing -- and alarming -- piece of Schaffner's research is best described in his own words:
For the other half, we asked a very simple question with one clearly correct answer: “Which photo has more people?”  Some of these people probably understood that the image on the left was from Trump’s inauguration and that the image on the right was from Obama’s, but admitting that there were more people in the image on the right would mean they were acknowledging that more people attended Obama’s inauguration. 
Would some people be willing to make a clearly false statement when looking directly at photographic evidence — simply to support the Trump administration’s claims? 
Yes.
In fact, about 15% of the Trump voters responded, with no apparent hesitation, that the photograph containing fewer people actually had more.  (I'm not sure if I find it heartening that 85% of Trump voters correctly identified the photo with the bigger audience, however, given that 41% still thought it was from Trump's inauguration.)

As Alan Levinovitz of Slate wrote:
The process of embracing a charlatan’s empowering vision is not rational, which means that rational arguments are unlikely, in isolation, to dispel it.  Studies have repeatedly demonstrated that people cling tenaciously to their worldviews, and conflicting data may actually strengthen their beliefs.  (Just look at this family who thinks Trump is “a man of faith who will bring Godliness back.”)  To renounce Trump would mean admitting that one’s worldview—of a country wracked by carnage, as the president put it in his inaugural address, and a truth-telling hero who can heal it—is fundamentally mistaken.  And that can also mean confronting existential panic without a panacea.  It is much easier to forgive Trump for not locking her up than to wrestle with such truths...  It’s also much easier to convince yourself that a crowd is larger than it appears, particularly when the man you’ve put your faith in is arguing the same thing.  And in the case of the photographs, it didn’t take much to come up with an explanation for the apparent discrepancy.  Trump himself supplied it: Mainstream media manipulated both images to make it appear as if Obama’s had more.
Okay, I know I have biases just like everyone, and (like everyone) am probably wrong about some of my beliefs.  But what I completely do not get -- to the point of utter bafflement -- is how people can be so wedded to their own biases that, when handed incontrovertible hard evidence that they are wrong, instead of changing their beliefs they disbelieve the evidence.

"No," they seem to be saying.  "I can't possibly be wrong.  It must be what I'm seeing right in front of me that is a lie."

Bill Nye compares this sort of thing to a belief in astrology, which persists despite huge amounts of evidence against it.  "For example, if somebody believes in astrology, it takes them about two years to get over it," Nye said.  "You have to show them over and over there’s no such thing as astrology, it doesn’t really work, and then they let go.  But everybody’s expectation that you’ll let go in a week is not going to be met...  So we have to work, I think, diligently in the science community to fight back.  Of course there are the facts, we start with those.  But there’s this human nature thing on both sides to fight back.  We have our bubble over here, they have their bubble over there."

All of which means that rationalists have their work cut out for them.  I've seen over and over the extent to which humans react to new information primarily from an emotional, not a logical, standpoint; but over and over I'm astonished at how deep this tendency runs.  Andy Borowitz's quips about "fact-resistant humans" made me laugh, but I'm afraid in the last week or so my laugh has rung rather hollow.  Because the people currently in charge of the United States seem hell-bent on using this avoidance of the facts to their benefit, in terms of consolidating power and silencing the opposition.

And if it works, I'm afraid we're in for a really, really rough few years.

Saturday, January 28, 2017

Locking yourself into error

I got in a rather interesting -- well, I suppose you could call it a "discussion" -- with a Trump supporter yesterday.

It came about because of recent posts here at Skeptophilia that have been pretty critical of the president, his appointees, and their decisions.  After a few minutes of the usual greetings and pleasantries ("You're a liberal lackey who sucks up what the lying mainstream media says without question!", stuff like that), I asked her what to me is the only pertinent question in such situations:

"What would it take to convince you that you are wrong?"

"I'm not wrong," she said.

"That's not what I asked," I responded.  "I asked what would it take to convince you that you are wrong.  About Donald Trump.  Or about anything."

"What would it take to convince you?" she shot back.

"Facts and evidence that my opinion was in error.  Or at least a good logical argument."

"People like you would never believe it anyway.  You're swallowing the lies from the media.  Thank God Donald Trump was elected despite people like you and your friends in the MSM."

"And you still haven't answered my question."

At that point, she terminated the conversation and blocked me.

Couple that with a second comment from a different person -- one I elected not to respond to, because eventually I do learn not to take the bait -- saying that of course I have a liberal bias "since I get my information from CNN," and you can see that the fan mail just keeps rolling in.

Of course, the question I asked the first individual isn't original to me; it was the single most pivotal moment in the never-to-be-forgotten debate between Ken Ham and Bill Nye over the theory of evolution in February of 2014, in which the moderator asked each man what, if anything, would change his mind.  Nye said:
We would need just one piece of evidence.  We would need the fossil that swam from one layer to another.  We would need evidence that the universe is not expanding.  We would need evidence that the stars appear to be far away but are not.  We would need evidence that rock layers could somehow form in just 4,000 years…  We would need evidence that somehow you can reset atomic clocks and keep neutrons from becoming protons.  Bring on any of those things and you would change me immediately.
Ham, on the other hand, gave a long, rambling response that can be summed up as "Nothing would change my mind.  No evidence, no logic, nothing."

The whole thing dovetails perfectly with a paper released just two days ago in the journal Political Psychology.  Entitled "Science Curiosity and Political Psychology," by Dan M. Kahan, Asheley Landrum, Katie Carpenter, Laura Helft, and Kathleen Hall Jamieson, the paper looks at the connection between scientific curiosity and a willingness to consider information that runs counter to one's own political biases and preconceived notions.  The authors write:
[S]ubjects high in science curiosity display a marked preference for surprising information—that is, information contrary to their expectations about the current state of the best available evidence—even when that evidence disappoints rather than gratifies their political predispositions.  This is in marked contrast, too, to the usual style of information-search associated with [politically-motivated reasoning], in which partisans avoid predisposition-threatening in favor of predisposition-affirming evidence. 
Together these two forms of evidence paint a picture—a flattering one indeed—of individuals of high science curiosity. In this view, individuals who have an appetite to be surprised by scientific information—who find it pleasurable to discover that the world does not work as they expected—do not turn this feature of their personality off when they engage political information but rather indulge it in that setting as well, exposing themselves more readily to information that defies their expectations about facts on contested issues.  The result is that these citizens, unlike their less curious counterparts, react more open-mindedly and respond more uniformly across the political spectrum to the best available evidence.
And maybe that's what's at the heart of all this.  I've always thought that the opposite of curiosity is fear -- those of us who are scientifically curious (and I will engage in a bit of self-congratulation and include myself in this group) tend to be less afraid about being found to be wrong, and more concerned with making sure we have all our facts straight.

[image courtesy of the Wikimedia Commons]

So I'll reiterate my question, aimed not only toward Trump supporters, but to everyone: what would it take to convince you that you are wrong?  About your political beliefs, religious beliefs, moral stances, anything?  It's a question we should keep in the forefront of our minds all the time.

Because once you answer that question with a defiant "nothing could convince me," you have effectively locked yourself into whatever errors you may have made, and insulated yourself from facts, logic, evidence -- and the truth.

Friday, January 27, 2017

State-approved brain drain

In the early 1930s, a cadre of scientists in Germany saw the handwriting on the wall with respect to the rising forces of German nationalism, and founded a model for scientific research that they called Deutsche Physik (German physics) or Arische Physik (Aryan physics).  The proponents of this model for science -- including physicists Philipp Lenard and Johannes Stark -- claimed that research had to be state-approved and in line with the ideology of the German nationalist movement, in contrast to the Jüdische Physik (Jewish physics) of Albert Einstein, Erwin Schrödinger, and others.

As the Deutsche Physik movement's stranglehold on science increased, researchers who flouted the new rules became the targets of suppression and outright harassment.  The powers-that-be responded by clamping down further.  All scientific papers had to be approved by a board made up not of scientific peers but of party loyalists.  Because of this, many of the finest minds in Germany fled the country, including not only Einstein and Schrödinger but Leo Szilard, Hans Bethe, Lise Meitner, James Franck, and the mathematician and computing pioneer John von Neumann.

When a German journalist spoke to Adolf Hitler about this loss of scientific talent, and asked him who would be the brains of the country if the trend continued, Hitler responded blithely, "I will be the brains."

The new administration here in the United States is evidently taking a page from the Deutsche Physik playbook.  Just yesterday they announced that all research work by scientists associated with the Environmental Protection Agency would have to be evaluated before release on a "case-by-case basis" -- by a panel of non-scientist party loyalists.

"We'll take a look at what's happening so that the voice coming from the EPA is one that's going to reflect the new administration," Doug Ericksen, head of communications for the Trump administration's EPA transition team, told reporters.  "Obviously with a new administration coming in, the transition time, we'll be taking a look at the web pages and the Facebook pages and everything else involved here at EPA. Everything is subject to review."

And if that doesn't draw the comparison with pre-World War II Germany starkly enough, yesterday the chairman of the House Science, Space, and Technology Committee -- Lamar Smith, who has had this position for years despite having no scientific training whatsoever -- said, "The national liberal media won’t print [the truth about scientific research], or air it, or post it.  Better to get your news directly from the president.  In fact, it might be the only way to get the unvarnished truth."

Who will be the brains of America once all the scientists have fled harassment and the suppression of their research?  Donald Trump will, of course.  Just listen to Dear Leader and all will be well.

Listen and believe.

The thing is, the universe is not compelled to conform to the political ideology of today's Deutsche Physik movement any more than it was compelled to conform to the one back in 1933.  Einstein's Jüdische Physik theory of relativity turned out to be correct, and Hitler's insistence on state approval of research, and the resultant brain drain, hamstrung German science for decades.

At least we have some scientists and organizations that are speaking up rather than being cowed.  When the EPA, USDA, and National Park Service were forbidden to use Twitter and other social media to communicate with the tax-paying public about research and current events (i.e., the facts), many of their staff set up "rogue" Twitter accounts, allowing free and unfettered communication instead of the only two other choices open to them: quoting the party line, or silence.

But this, of course, is not the way science is supposed to be.  As astrophysicist Katie Mack has pointed out, neither the climate nor anything else in the scientific world is responsive to political spin.  Eventually, of course, this will become apparent regardless, as it did with the German physicists -- when they found out that their anti-relativity, anti-quantum-mechanics version of things was simply wrong.  The risk is that by the time that happens, it may well be that our best and brightest will have fled to places where scientific research is supported instead of oppressed.

What we have to ask ourselves is whether this is a risk we're willing to take.

And we also need to ask why it makes sense that we have placed the oversight of scientific research into the hands of non-scientists -- worse, anti-scientists -- like Lamar Smith, Dana Rohrabacher, James Inhofe, and yes, Donald Trump.  Are we truly willing to jettison the last two centuries of scientific advancement and dedication to the scientific method in favor of state-sponsored, state-approved, party-line-only pseudoscience?

Because that is the direction we're heading if we don't start speaking up.

Thursday, January 26, 2017

It's not what you say, it's how you say it...

There's a controversial idea in the realm of linguistics called the Sapir-Whorf hypothesis.  Named after linguists Edward Sapir and Benjamin Lee Whorf, the hypothesis holds that the language you speak strongly influences your brain and your model of the world.  Thus, ideas that are easy to express in one language might be difficult or impossible to express in another.

I'm not just talking about linguistic lacunae, which are "holes" in the lexicon of a language.  An example from English is that we have no generic singular term for cattle.  Think about it; there's sow/boar/pig, billy/nanny/goat, stallion/mare/horse -- but bull/cow... what?  Oddly, we have a plural generic term -- cattle -- but no singular.

Sapir-Whorf goes much deeper than that.  It's not just talking about missing words; it implies that our entire framework for understanding can differ depending on the language we speak.  I ran into the real heart of Sapir-Whorf when I read the splendid book The Last Speakers by K. David Harrison, in which the author traveled with and interviewed people who are the last fluent speakers of some of the planet's dying languages.  The most amazing passages in the book occur when Harrison is in Siberia and is talking to some nomadic hunter-gatherers who speak a language in which there are no words for right, left, in front of, or behind.  Everything is related to the cardinal directions, and to being upstream or downstream of the river they use for travel.  Thus, the computer on which I'm writing this post isn't in front of me; it's northwest of me.  My space heater is north of me, the door of my office east of me.  When Harrison tried to explain our concept of left and right to them, they first didn't even understand what he was talking about, and when they did get it, they laughed.

What an arrogant, narcissistic culture you come from, they told Harrison.  You interpret where everything is relative to your own body?  And if you turn around, everything in the whole world changes position?  And two different people think the same object is in a different place because they're facing different directions?

In this case, it very much appears that the language these Siberian nomads speak alters the way they see the world -- and that model of the world reflects back and alters or constrains the language.

[image courtesy of the Wikimedia Commons]

Sapir-Whorf has fallen a bit out of favor in the last couple of decades, and in fact was already waning in influence when I got my M.A. in linguistics in 1996.  But a study that came out this week in the American Journal of Political Science has brought it back to the forefront -- with the claim by Efrén O. Pérez and Margit Tavits that speakers of languages that lack a distinction between present and future tense make different decisions regarding political issues that will have an impact on the future.

In their paper, "Language Shapes People's Time Perception and Support for Future-Oriented Policies," Pérez and Tavits looked at bilingually fluent speakers of Estonian (which lacks a future tense) and Russian (which has one).  They found that when those people were interviewed in Russian, they tended to be less supportive of policies that would provide benefits in the long-term future than when they were interviewed in Estonian.

The authors write:
Can the way we speak affect the way we perceive time and think about politics? Languages vary by how much they require speakers to grammatically encode temporal differences.  Futureless tongues (e.g., Estonian) do not oblige speakers to distinguish between the present and future tense, whereas futured tongues do (e.g., Russian).  By grammatically conflating “today” and “tomorrow,” we hypothesize that speakers of futureless tongues will view the future as temporally closer to the present, causing them to discount the future less and support future-oriented policies more.  Using an original survey experiment that randomly assigned the interview language to Estonian/Russian bilinguals, we find support for this proposition and document the absence of this language effect when a policy has no obvious time referent.  We then replicate and extend our principal result through a cross-national analysis of survey data.  Our results imply that language may have significant consequences for mass opinion.
Which I find absolutely fascinating.  I've long been of the opinion that our stances about many things -- not least our political opinions -- are far more fluid than most of us think.  The "well, it's my opinion, of course it's not going to change!" attitude that many of us have simply ignores the fact that most of our decisions are strongly contextual.

And now, it appears that one of those contexts may be the language you're using.

I'm aware that a lot of linguistic researchers these days have some serious doubts about the applicability of Sapir-Whorf, but I still think this is an interesting first look at a case where it may well bear out.

Anyhow, that's our look at some cool new research for today.  Me, I'm off to eat breakfast and get some coffee, which at the moment are southwest of me.

Wednesday, January 25, 2017

Talk me out of my pessimism. Please.

So I've been getting pretty political lately, here at Skeptophilia headquarters.

Some of you are probably glad to see me address more serious topics, while others might wish I'd get back to Bigfoot and ghosts and UFOs.  For the latter, I'd ask your indulgence for (at least) one more politically oriented post, one I was spurred to write by comments from readers.

The conservative members of my audience have responded to my admittedly liberal bias with reactions varying from encouragement to outright scorn.  Some have said, "Come on, now, it's not going to be bad.  Just wait until some of the new administration's ideas are enacted, and you'll see that it'll make things better."  Others have said "buck up, Buttercup" or "suck it up, Snowflake" or other such helpful phrases.

[image courtesy of the Wikimedia Commons]

So I thought, in the interest of trying to understand those who disagree with me -- the basic gist of yesterday's post, and more or less the overarching theme of this entire blog -- I'd address the part of my readership who are saying that things are going to be fine, and ask some specific questions.

First, it's undeniable that President Trump and his new appointees -- not to mention the Republican-controlled House and Senate -- have a lot of us pretty worried.  And despite the "Snowflake" and "Buttercup" responders, it's not simply because we're pissed at having lost.  I'm 56, and I remember vividly the presidencies of Reagan and both Bushes, and I can never recall being this specifically upset at this many things, this early into the administration.  Without even trying hard, I came up with the following, all of which happened in the last few weeks:
Okay.  You get the picture.

I've been dragged, rather unwillingly, into political discourse largely because I am so alarmed at the direction our leaders are taking.  Honestly, I used the words "liberal bias" earlier, but I'm really more of a centrist; I do think we need to rein in spending, I do think we've got a good bit of government bloat, and I do think the "nanny state" concept -- protecting people from their own stupidity and poor judgment -- has gotten out of hand.  But this?  I look at this list of actions, all in a little over two months since the election, with nothing short of horror.  I see a corporate interests über alles approach, a move toward less transparency, a morass of conflicts of interest, a complete disregard for any kind of consideration of the environment, and a reckless surge forward to reverse changes in policy on medical insurance coverage and lending practices without any clear vision of how to improve them -- or what impact those might have on low-income families.

So, conservative readers: you tell me not to worry, that everything will be fine, that Trump et al. are going to Make America Great Again.  Okay, convince me.  However repellent I find Donald Trump personally, I have no desire to see him fail.

The stakes are way too high.

I'm a facts-and-evidence kind of guy, and I'm listening.  I promise to consider carefully what you say, if for no other reason than that I hate being a gloom-and-doom pessimist.

On the other hand, if all you have to say is "suck it up, Snowflake," my response is gonna be "go to hell."  So be forewarned.

Tuesday, January 24, 2017

Red truth, blue truth

At the same time that social media has opened up possibilities for long-distance (and cross-cultural) contact, and allowed us to befriend people we've never met, it also has had the effect of creating nearly impermeable echo chambers that do nothing but reinforce confirmation bias about our own beliefs and the worst stereotypes about those who disagree.

This is being highlighted in a rather terrifying fashion by The Wall Street Journal in their feature "Blue Feed, Red Feed," which they describe as follows:
To demonstrate how reality may differ for different Facebook users, The Wall Street Journal created two feeds, one “blue” and the other “red.”  If a source appears in the red feed, a majority of the articles shared from the source were classified as “very conservatively aligned” in a large 2015 Facebook study.  For the blue feed, a majority of each source’s articles aligned “very liberal.”  These aren't intended to resemble actual individual news feeds.  Instead, they are rare side-by-side looks at real conversations from different perspectives.
It's worth taking a look.  Here's a small sampling of a "red feed" for the recent "alternative facts" interview with Kellyanne Conway:
AWFUL LIBERAL Hack Chuck Todd Attacks #Trump – Kellyanne Conway Rips Him Apart (VIDEO)
Jim Hoft Jan 22nd, 2017 10:39 am
The liberal media today is in the sewer.
More Americans believe in Sasquatch than the crap coming from the liberal media.
After eight years of slobbering all over failed President and liar Barack Obama the media has suddenly decided to take on this new administration.
Today Chuck Todd went after Donald Trump advisor Kellyanne Conway on Meet the Press.
Kellyanne Conway ripped him a new one.
Notice how this condescending ass snickers as Kellyanne answers his question!

The Trump administration should boycott this horrible show immediately.
Contrast this with the "blue feed" on the same topic:
If you are puzzled by the bizarre "press conference" put on by the White House press secretary this evening (angrily claiming that Trump's inauguration had the largest audience in history, accusing them of faking photos and lying about attendance), let me help explain it. This spectacle served three purposes: 
1. Establishing a norm with the press: they will be told things that are obviously wrong and they will have no opportunity to ask questions. That way, they will be grateful if they get anything more at any press conference. This is the PR equivalent of "negging," the odious pick-up practice of a particular kind of horrible person (e.g., Donald Trump). 
2. Increasing the separation between Trump's base (1/3 of the population) from everybody else (the remaining 2/3). By being told something that is obviously wrong—that there is no evidence for and all evidence against, that anybody with eyes can see is wrong—they are forced to pick whether they are going to believe Trump or their lying eyes. The gamble here—likely to pay off—is that they will believe Trump. This means that they will regard media outlets that report the truth as "fake news" (because otherwise they'd be forced to confront their cognitive dissonance.) 
3. Creating a sense of uncertainty about whether facts are knowable, among a certain chunk of the population (which is taking a page from the Kremlin, for whom this is the preferred disinformation tactic). A third of the population will say "clearly the White House is lying," a third will say "if Trump says it, it must be true," and the remaining third will say "gosh, I guess this is unknowable." The idea isn't to convince these people of untrue things, it's to fatigue them, so that they will stay out of the political process entirely, regarding the truth as just too difficult to determine. 
This is laying important groundwork for the months ahead. If Trump's White House is willing to lie about something as obviously, unquestionably fake as this, just imagine what else they'll lie about. In particular, things that the public cannot possibly verify the truth of. It's gonna get real bad.
It's not like they're looking at the same thing from two different angles; it's more like these people aren't living in the same universe.

[image courtesy of the Wikimedia Commons]

Add into the mix a paper published this week in PNAS Online by Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi of the Laboratory of Computational Social Science in Lucca, Italy.  The study, called "The Spreading of Misinformation Online," not only describes the dangers of the echo chamber effect apropos of social media, but the worse problem that it insulates us from correcting our own understanding when we're in the wrong.  The authors write:
Digital misinformation has become so pervasive in online social media that it has been listed by the WEF as one of the main threats to human society.  Whether a news item, either substantiated or not, is accepted as true by a user may be strongly affected by social norms or by how much it coheres with the user’s system of beliefs.  Many mechanisms cause false information to gain acceptance, which in turn generate false beliefs that, once adopted by an individual, are highly resistant to correction...  Our findings show that users mostly tend to select and share content related to a specific narrative and to ignore the rest.  In particular, we show that social homogeneity is the primary driver of content diffusion, and one frequent result is the formation of homogeneous, polarized clusters.  Most of the times the information is taken by a friend having the same profile (polarization)––i.e., belonging to the same echo chamber...  Users tend to aggregate in communities of interest, which causes reinforcement and fosters confirmation bias, segregation, and polarization.  This comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.
It would be easy to jump from there to the conclusion that there's no way to tell what the truth is, that we're all so insulated in our comfortable cocoons of self-approval that we'll never be able to see out.  That's unwarrantedly pessimistic, however.  There is a method for determining the truth; it involves using evidence (i.e. facts), logic, and an unrelenting determination to steer clear of partisan spin.  Giving up and saying "No one can know the truth" is exactly as unproductive as saying "my side is always right."

Still, all kind-hearted ecumenism aside, I'll end with a quote from the eminent Richard Dawkins: "When two opposing points of view are expressed with equal intensity, the truth does not necessarily lie somewhere in the middle.  It is possible for one side to be simply wrong."