Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, September 21, 2016

The index case for fact-resistance

I think a standard question for anyone who holds an anti-science stance -- so climate change deniers, antivaxxers, people who are pro-homeopathy -- should be: "What would it take to convince you that you are wrong?"

I'll be up front that this idea is not original to me.  It was the single question that still stands out in my mind as the most important in the infamous Bill Nye/Ken Ham debate.  Nye responded, in essence, that one piece of information that could not be explained except by the young-Earth model is all it would take.  Ham, on the other hand, said that nothing could convince him.  No evidence, no logical argument, nada.

And therein, folks, lies the difference between the scientific and anti-scientific view of the world.

It is a question I wish had come up during a hearing this week in the House Committee on Science (controlled, as I have mentioned before, almost entirely by anti-science types).  The topic was the subpoenas being sent out to climate scientists in an attempt to intimidate them into backing down on their (at this point incontrovertible) claim that the world is warming up.  One of the people who spoke in favor of the subpoenas was Ronald Rotunda, professor of law at Chapman University.

This in itself is an odd choice.  Rotunda is a lawyer, not a scientist.  Wouldn't you want the scientists -- i.e., the people who know what the hell they're talking about -- to weigh in?  Of course, it doesn't take a genius to see that wasn't the point here.  The point was getting some talking heads to reinforce the view of the committee that climate change is a hoax.  But what happened afterwards is pretty interesting -- and heartening.

Rotunda was trying to make the case that scientists disagree about climate change and (specifically) sea level rise, and cited research by Harvard geoscientist Jerry Mitrovica, claiming it showed that the melting of the Greenland ice cap would actually cause the sea level to fall.  Of course, Rotunda was completely misrepresenting Mitrovica's work.  What Mitrovica had shown is that, due to a combination of gravitational effects and isostatic rebound (the lifting of a land mass when a weight such as an ice cap is removed from it), the sea level as measured from the coast of Greenland might fall.  What Rotunda conveniently forgot to mention was that the melted ice, combined with those same factors, would cause the sea level to rise more elsewhere.

That's not what the representatives on the committee wanted to hear, of course, so it never came up.

Coastal Greenland [image courtesy of the Wikimedia Commons]

What's encouraging in all of this depressing business is the response of one person on the committee -- Bill Foster of Illinois, the committee's only trained scientist (he started his career as a physicist).  Foster listened politely to what Rotunda was saying.

But he wasn't buying it.

What Foster did was brilliant -- he merely asked Rotunda to explain how his claim worked.  "I was fascinated by what seemed to be apparent support of an argument that the Greenland ice sheet would melt, and thereby lower the sea level," Foster said, "and I was wondering if you can expound on how exactly the physics of this works."

Rotunda, who apparently has less understanding of physics than your typical 12th grade physics student, immediately began to babble.  "When the ice sheet melts, all the gravity that was then part of the island of New Greenland [sic] disappears into the ocean, it just goes away.  And that ice has been pushing Greenland down, and now Greenland will be moving up, because the water is all over the place."

All I can say is that if I gave explanations like that in my high school classes, I would quite rightly be tarred and feathered.

So that's the next best thing to "What would it take to change your mind?" -- "Can you explain to me how that would work?"  Both of these, in my opinion, should be the immediate go-to questions in any debate on climate change -- or any other discussion that has become contaminated with anti-science.

Of course, the downside of all of this is that the climate change deniers on the Science Committee, with the exception of Bill Foster, all just nodded sagely while Rotunda spewed his bullshit.  If you have already assumed your conclusion, no amount of logic or evidence will ever sway you.

It reminds me of a brilliant satirical piece written by Andy Borowitz for The New Yorker earlier this year entitled "Scientists: Earth Endangered By New Strain of Fact-Resistant Humans."  A quote from Borowitz seems an appropriate way to end this post, especially given that the House Committee on Science -- of all groups -- seems to be the index case for fact-resistance:
The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them. 
“These humans appear to have all the faculties necessary to receive and process information,” Davis Logsdon, one of the scientists who contributed to the study, said.  “And yet, somehow, they have developed defenses that, for all intents and purposes, have rendered those faculties totally inactive.” 
More worryingly, Logsdon said, “As facts have multiplied, their defenses against those facts have only grown more powerful.” 
While scientists have no clear understanding of the mechanisms that prevent the fact-resistant humans from absorbing data, they theorize that the strain may have developed the ability to intercept and discard information en route from the auditory nerve to the brain.  “The normal functions of human consciousness have been completely nullified,” Logsdon said.

Tuesday, September 20, 2016

There goes the Sun

Yesterday I received a friendly email from a loyal reader of Skeptophilia of the "You think that is stupid, wait till you see this" variety.  As well-intentioned as these generally are, I always hesitate to read further, because my general impression of human foolishness and gullibility really doesn't need any further reinforcement.

This one was in response to last week's post about the Flat Earthers, so already we've set the bar for comparative idiocy pretty high.  But as I continued to read the email (yes, I succumbed to my 'satiable curiosity), I found that said bar was cleared in a single leap by this particular claim.

So without further ado: the idea that makes the Flat Earthers look sane and sensible.  Ready?

The Sun doesn't exist.

According to a group of loons calling themselves "asunists," what we're calling the Sun is just an illusion generated by light collected and beamed at the Earth by an array of curved mirrors.  You might be asking, "Light coming from where, exactly?", but that is only the first of the many problems we encounter upon delving into the situation.  Apparently the idea came about when someone googled "solar simulator" and found that there is a device that approximates the radiation spectrum and illuminance of the Sun, and is used for testing solar cells, sunscreen, plastics, and so forth.  So in a classic case of adding two and two and getting 147, they then interpreted this to mean that the Sun itself was a simulation.

[image courtesy of NASA]

Who is responsible for this?  Well, nasty old NASA, of course.  Same ones who keep the Moon hologram going and are suppressing information about the Earth being flat and/or hollow, not to mention the impending catastrophic visit by the fabled planet Nibiru.

What evidence do we have?  The producer of the above-linked YouTube video explains how he knows that the Sun isn't real, and a lot of it seems to be the fact that in some photographs, the outline of the Sun is "fuzzy."  It used to be clear and sharp, but now because of "chemicals in the air" the Sun has gotten all blurred.  So apparently we used to have a real Sun, but now it's been replaced by a simulator which just isn't as good as the real thing.

My question is -- well, among my many questions is -- don't you think someone would have noticed when the real Sun was taken down, and the simulator put in place?  Oh, and what did they do with the old Sun?  Was it sent to the stellar retirement home?  Was it just turned out into the cold vacuum of space, to wander, lost and forlorn forever?

Of course, the question that applies to all of these wacko conspiracy theories is why anyone would bother to do all of this.  Don't you think that if the Sun really was a big bunch of mirrors, the Earth was flat, or whatnot, the scientists at NASA would tell us?  What could they possibly gain by pretending that the Sun exists and the Earth is an oblate spheroid?

The oddly hilarious postscript to all of this is that the whole the-Sun-doesn't-exist conspiracy theory received a boost from none other than Ray "Mr. Banana" Comfort, the outspoken young-earth creationist who a couple of years ago got his ass handed to him when he showed up to distribute creationist literature at a talk by Richard Dawkins hosted by the Skeptics Society.  Well, Comfort has picked up on the "asunist" thing and used it as an argument against atheism (in Comfort's mind, everything is an argument against atheism).  He tells us about his perception of the "asunists" -- mischaracterizing their claim as stating that they believe we're actually in the dark -- and compares that to atheists' conclusion that god doesn't exist.

Which just shows you that there is no idea so completely stupid that you can't alter it so as to make it way stupider.

So to the loyal reader who sent me the email, all I can say is "thanks."  I now am even more convinced that Idiocracy was a non-fiction documentary.  It's time to get myself a cup of coffee and try to reboot my brain so that I make some degree of sense in class today.  Also time to start watching for the sunrise.

Or the solarsimulatorrise.  Or whatever.

Monday, September 19, 2016

Slowing down the copy-and-paste

I'm really interested in research on aging, and I'd like to think that it's not solely because I'm Of A Certain Age myself.  The whole fact of our undergoing age-related system degradation is fascinating -- more so when you realize that other vertebrates age at dramatically different rates.  Mice and rats age out after about a year and a half to two years; dogs (sadly) rarely make it past fifteen (much less in some breeds); and the Galapagos Tortoise can still be hale and hearty at two hundred years of age.

A lot of research has gone into why different organisms age at such different speeds, and (more importantly) how to control it.  The ultimate goal, selfish though it may sound, is extending the healthy human life span.  Imagine if we reached our healthy adult physiology at (say) age 25 or so, and then went into stasis with respect to aging for two hundred or three hundred years -- or more?

Heady stuff.  For me, the attraction is not so much avoiding death (although that's nice, too).  I was just chatting with a friend yesterday about the fact that one of my biggest fears is being dependent on others for my care.  The idea of my body and/or mind degrading to the point that I can no longer care for my own needs is profoundly terrifying to me.  And when you add to the normal age-related degradation the specter of diseases such as Alzheimer's and ALS -- well, all I can say is that I agree with my dad, who said that compared with that fate, "I'd rather get run over by a truck."

A particularly interesting piece of research in this field that was published last week in the Proceedings of the National Academy of Sciences gives us one more piece of the puzzle.  But to understand it, you have to know a little bit about a peculiarity of genetics first.

Several decades ago, a geneticist named Barbara McClintock was working with patterns of seed color inheritance in "Indian corn."  In this variety, one cob can bear seeds with dozens of different colors and patterns.  After much study, she concluded that her data could only be explained by there being "transposable elements" -- genetic sequences that were either clipped out and moved, or else copied and moved -- functions similar to the "cut-and-paste" and "copy-and-paste" commands on your computer.  McClintock wrote a paper about it...

... and was immediately ignored.  For one thing, she was a woman in science, and back when she was doing her research -- in the 1940s and 1950s -- that was sufficient reason to discount it.  Her colleagues derisively nicknamed her theory "jumping genes" and laughed it into oblivion.

Except that McClintock wouldn't let it go.  She was convinced she was right, and kept doggedly pursuing more data, data that would render her conclusion incontrovertible.  She found it -- and won the Nobel Prize in Physiology or Medicine in 1983, at the age of 81.

Barbara McClintock in her laboratory at Cold Spring Harbor [image courtesy of the Wikimedia Commons]

McClintock's "transposable elements" (now called "transposons") have been found in every vertebrate studied.  They are used to provide additional copies of essential genes, so that if one copy succumbs to a mutation, there's an additional working copy that can take over.  They are also used in gene switching.  Move a gene near an on-switch called a promoter, and it turns on; move it away, and it turns off.

The problem is that, like any natural process, transposition can go awry -- and the copy-and-paste function especially seems to have that tendency.  When it malfunctions, the effect is like a runaway copy-and-paste in your word processing software.  Imagine the havoc that would ensue if you had an important document and the computer kept inserting one phrase over and over again at random points in the text.

This should give you an idea of why it's so important to keep this process under control.
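If you want to see just how fast that kind of runaway copy-and-paste wrecks a passage of text, here's a toy simulation -- my own illustration of the analogy, not anything from the research itself:

```python
import random

# A toy "runaway copy-and-paste": one phrase (the "transposon") gets pasted
# at random points in a document, over and over, the way a rogue transposon
# scatters copies of itself across a genome.

document = "The quick brown fox jumps over the lazy dog."
rogue_phrase = "lazy dog"   # the sequence being copied and pasted

for _ in range(5):          # five rounds of uncontrolled copying
    insert_at = random.randint(0, len(document))
    document = document[:insert_at] + rogue_phrase + document[insert_at:]

print(document)   # after only a few rounds, the original sentence is badly garbled
```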

Your cells do have ways of taking care of these "rogue transposons" (as they're called).  One such mechanism is methylation, a chemical means of tangling up and permanently shutting down genes.  But the research just released suggests that aging is (at least in part) due to rogue transposition getting ahead of methylation -- leaving random copied chunks of DNA scattered across the genome.

A study by Jason Wood et al. of Brown University has found that fruit flies near the end of their life have a far greater number of active transposons than young flies do.  In fact, as they age, the number increases exponentially, the result being interference with gene function and a system-wide degradation.  Most interesting is that they found two genes -- Su(var)3-9 and Dicer-2 -- that when enhanced both substantially increase longevity in fruit flies.  Su(var)3-9 seems to be involved in increasing the methylation rate of rogue transposons, and Dicer-2 in suppressing the transposition process itself.  An increase in the activity of these genes raised the average longevity of fruit flies from sixty to eighty days -- an increase of 33%.

Of course, there's no guarantee that, even if these genes turn out to have similar effects in humans, the longevity increase will scale up by the same amount (if it did, it would raise the average human age at death to around 100 years).  But the whole thing is tremendously interesting anyhow.  On the other hand, I have to say that the idea that we are getting to the point where we can tinker with fundamental processes like aging is a little frightening.  It opens up practical and ethical issues we've never had to consider before: how this would affect human population growth, who would have access to such genetic modifications if they proved effective and safe, even such things as how we approach the idea of careers and retirement.
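For what it's worth, the arithmetic behind that parenthetical is simple enough to sketch out.  The fruit fly numbers are the ones described above; the 76-year baseline is my own assumed figure for average human life expectancy, not anything from the paper:

```python
# Back-of-the-envelope scaling of the fruit fly result to humans.

fly_baseline_days = 60
fly_enhanced_days = 80
fly_increase = (fly_enhanced_days - fly_baseline_days) / fly_baseline_days
print(f"Fruit fly longevity increase: {fly_increase:.0%}")          # ~33%

human_baseline_years = 76                  # assumed average human age at death
human_scaled_years = human_baseline_years * (1 + fly_increase)
print(f"Same proportional increase in humans: ~{human_scaled_years:.0f} years")   # ~100 years
```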

Imagine if you reached the age of sixty and could expect another thirty or more years of active health.  Imagine if the effect on humans was greater -- and the upper bound of human life span was increased to two hundred or three hundred years.  It seems like science fiction, but with the research that is currently happening, it's not outside of the realm of possibility.

If you had the physiology and mental acuity of a twenty-five-year-old, who would want to retire at sixty?  At the same time, who would want to stay in the same job for another hundred years?  I love my students, but that definitely falls into the "shoot me now" category.

The whole thing would require a drastic reorganization of our society, a far more pervasive set of changes than any scientific discovery has yet caused.  And lest you think that I'm exaggerating the likelihood of such an eventuality, remember how much progress has happened in biological science in the last century.  Only a hundred years ago, children in industrialized countries were still dying by the thousands of diphtheria and measles.  There were dozens of structures in cells, and a good many organs in humans, about whose function we knew essentially nothing.  We knew that DNA existed, but had no idea that it was the genetic material, much less how it worked.

Makes you wonder what our understanding will be in another hundred years, doesn't it?

And maybe some of the people reading this right now will be around to see it.

Saturday, September 17, 2016

The language of morality

If we needed any more indication that our moral judgments aren't as solid as we'd like to think, take a look at some research by Janet Geipel and Constantinos Hadjichristidis of the University of Trento (Italy), working with Luca Surian of Leeds University (UK).

The study, entitled "How Foreign Language Shapes Moral Judgment," appeared in the Journal of Social Psychology.  What Geipel et al. did was to present multilingual individuals with situations which most people consider morally reprehensible, but where no one (not even an animal) was deliberately hurt -- such as two siblings engaging in consensual and safe sex, and a man cooking and eating his dog after it was struck by a car and killed.  These types of situations make the vast majority of us go "Ewwwww" -- but it's sometimes hard to pinpoint exactly why that is.

"It's just horrible," is the usual fallback answer.

So did the test subjects in the study find such behavior immoral or unethical?  The unsettling answer is: it depends on what language the situation was presented in.

Across the board, if the situation was presented in the subject's first language, the judgments were uniformly harsher and more negative.  When it was presented in a language learned later in life, the subjects were much more forgiving.

The researchers controlled for which languages were being spoken; they tested (for example) native speakers of Italian who had learned English, and native speakers of English who had learned Italian.  It didn't matter what the language was; what mattered was when you learned it.

[image courtesy of the Wikimedia Commons]

The explanation they offer is that the effort of speaking a non-native language "ties up" the cognitive centers, making us focus more on the acts of speaking and understanding and less on the act of passing moral judgment.  I wonder, however, if it's more that we expect more in the way of obeying social mores from our own tribe -- we subconsciously expect people speaking other languages to act differently than we do, and therefore are more likely to give a pass to them if they break the rules that we consider proper behavior.

A related study by Catherine L. Harris, Ayşe Ayçiçeği, and Jean Berko Gleason appeared in Applied Psycholinguistics.  Entitled "Taboo Words and Reprimands Elicit a Greater Autonomic Reactivity in a First Language Than in a Second Language," the study showed that our emotional reaction (as measured by skin conductivity) to swear words and harsh judgments (such as "Shame on you!") is much stronger if we hear them in our native tongue.  Even if we're fluent in the second language, we just don't take its taboo expressions and reprimands as seriously.  (Which explains why my mother, whose first language was French, smacked me in the head when, at the age of five and at my uncle's prompting, I asked her what "va t'faire foutre" meant.)

All of which, as both a linguistics geek and someone who is interested in ethics and morality, I find fascinating.  Our moral judgments aren't as rock-solid as we think they are, and how we communicate alters our brain, sometimes in completely subconscious ways.  Once again, the neurological underpinnings of our morality turn out to be strongly dependent on context -- which is simultaneously cool and a little disturbing.

Friday, September 16, 2016

Medical hacking

I read something today that made me really furious, and the worst part is that I don't even know who the target of my anger is.

The story that pissed me off so completely was a CNN article about a group of Russian hackers who "outed" gymnast Simone Biles, tennis player Venus Williams, and others for being on prescription medication.  Please note that the medications these athletes were on had been previously reported to the US Anti-Doping Agency, and the athletes granted exemptions.  There has been no allegation by the USADA, the United States Olympic Committee, or any of the oversight organizations governing the individual sports that there was any wrongdoing at all on the part of the athletes.

So what that means is, these people's private medical records have been made public, for no reason whatsoever.

Biles responded to the situation with the graciousness I would expect, having watched her being interviewed during the Rio Olympics.  "I have ADHD and have taken medication for it since I was a kid," she tweeted, shortly after the story broke.  "Please know I believe in clean sport, have always followed the rules, and will continue to do so as fair play is critical to sport and is very important to me."

The first thing that outraged me about this whole situation is that these hackers, whoever they are, thought it was appropriate to violate the privacy of athletes for... for what?  I don't know.  Increasingly, hackers such as these guys (who go under the handles "Fancy Bear" and "Tsar Team") and the more famous ultra-hacker Julian Assange are making records public simply because they can, and fuck the consequences.  On one hand, I understand the motivation; I recognize the damage that has been done by covert operations, by there being no transparency and no oversight of the government and the corporate world.  There is certainly a time for whistleblowers to bring to light documents that are being hidden for immoral and unethical reasons.

But that doesn't mean that every record should be made public.  There are government documents that are quite rightly classified as top secret.  On a personal level, there is information -- and that includes medical records -- that is nobody's business but the individual's.

So, I'm sorry, but all documents are not equal.  And no, you don't have the right, simply by virtue of your existence, to see everything and anything that has ever been written down.

But there's a subtler reason why this situation infuriates me, and that's the sly implication that because Simone Biles has ADHD, she should be ashamed of it or apologize for it.  It's an attitude you find toward people with all sorts of mental and emotional illnesses and disabilities -- that somehow, you're making it all up, that you really don't need your medications, that it's not the same thing as a "real" physical ailment.  It's what gave rise to the following, which has circulated widely on social media:


I will be open, here (and note: it is my choice to be public about this; if I did not want this known, it would be entirely my right not to have it known).  I have struggled with moderate to severe depression my entire adult life.  I have been suicidal more than once.  Through a combination of therapy, the support of my friends and family, and proper medication, I now have the ability to function without feeling like I'm constantly lost in a fog of despair.  The idea that someone, under the guise of "keeping your mind open" (note the subtitle on the above photograph) would imply that my medication is a cop-out, that I should throw it away and go for a walk in the woods, is not only ignorant, it is arrogant to the point of being insulting.

And my depression is not a point of shame for me.  It's not somehow my fault, nor is it under my control.  It is no more shameful to have a mental illness than it is to have multiple sclerosis or heart disease or cancer.  The fact that we still look at mental illnesses as qualitatively different from other conditions means that we still have a long way to go, societally, in how we think about human health.

So the fact that Simone Biles and other athletes are in the position of having their personal information made public (especially since all of the athletes in question had cleared their meds with the relevant regulatory boards) is appalling; even worse is the implication that they need to defend themselves on points that need no defense.

The whole thing, in fact, is maddening -- both the fact that hackers are now throwing our private records around just because they can, and the ongoing problem of our society's attitude toward illness and medication in general, and mental illness in particular.  How to stop the first is more of a technological problem than anything else; changing the second is something that is incumbent upon all of us.

Thursday, September 15, 2016

Sweet deals

Vested interests are a huge problem in science.

Scientists, like all humans, have biases.  Our perceptual and cognitive apparatus isn't foolproof, and our prior understanding can sometimes blind us to what is actually going on.  A darker tendency, however, is the fact that scientists (once again, like all of us) are subject to the temptations of power, notoriety, and money -- and this can sometimes lead to the publication of research that is seriously flawed.

Science journals all require the declaration by researchers of any conflicts of interest that might bias the research -- if, for example, the study was funded by a group that had motivation to make certain that the scientists reached a particular conclusion.  Conflict of interest doesn't mean that the research is flawed, of course; assuming that is called the motive fallacy.  Having a motive to lie doesn't mean that you actually did.  But a known conflict of interest would certainly make me read the research a lot more carefully -- which is the intent of the policy.

It's pretty suspect, therefore, when research is done where there was a conflict of interest -- and it wasn't declared.  And this appears to be the case with research done all the way back in the 1950s and 1960s casting doubt on sugar's role in heart disease -- research which a study published just last week in the Journal of the American Medical Association showed was funded by the Sugar Research Foundation.

[image courtesy of the Wikimedia Commons]

The study, entitled "Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents," by Cristin E. Kearns, Laura A. Schmidt, and Stanton A. Glantz, takes a hard look at the influence that industry has had on scientific (in particular, medical) research.  Studies funded by the SRF -- the research arm of the sugar industry -- not only successfully raised doubts about the role of sucrose in inflammatory diseases such as heart disease, but turned the public's eye toward dietary fat as the culprit.

In fact, sugar industry spokespeople not only suppressed information connecting carbohydrate consumption to cardiovascular disease, they suggested that increased sugar consumption would improve health.  The SRF's president, Henry Hass, said in a public speech in 1954:
Leading nutritionists are pointing out the chemical connection between [Americans'] high-fat diet and the formation of cholesterol which partly plugs our arteries and capillaries, restricts the flow of blood, and causes high blood pressure and heart trouble… if you put [the middle-aged man] on a low-fat diet, it takes just five days for the blood cholesterol to get down to where it should be…  If the carbohydrate industries were to recapture this 20 percent of the calories in the US diet (the difference between the 40 percent which fat has and the 20 percent which it ought to have) and if sugar maintained its present share of the carbohydrate market, this change would mean an increase in the per capita consumption of sugar more than a third with a tremendous improvement in general health.
Dietary scientist John Yudkin and others had published research identifying sugar as a factor in increasing the risk of heart disease, but Hass and others with ties to the sugar industry began pumping money into research which had as its goal demonstrating the opposite.

Which is, of course, antithetical to the way research should be done.  Of course scientists have their preconceived notions, their guesses as to which way the data will swing.  But the idea that you'd go into a study with the intent to support whatever your well-heeled funding agency says you should support is frightening.

And the worst part was that the scientists themselves did not openly declare their conflict of interest.  The result is that their research was not given the scrutiny it should have received -- and the industry's role in skewing the public's understanding of the role of nutrition in health has only recently been uncovered.

"This historical account of industry efforts demonstrates the importance of having reviews written by people without conflicts of interest and the need for financial disclosure," the authors write.  "Scientific reviews shape policy debates, subsequent investigations, and the funding priorities of federal agencies... Whether current conflict of interest policies are adequate to withstand the economic interests of industry remains unclear."

While discouraging, such findings are no particular surprise, given the tobacco industry's role in suppressing information about the link between smoking and cancer.  However, it should alert us to the potential for funding to bias research that is going on today -- making it even more imperative that our policymakers give careful scrutiny to "studies" of climate change by groups like the Heartland Institute, whose ties (financial and otherwise) to the fossil fuel industry run deep.

Wednesday, September 14, 2016

Tales from the flat Earth

Having steeped myself in All Things Woo-Woo for some years, you'd think I'd have it all figured out, at least with respect to why people believe weird things.  After all, the topic was the subject of one of my favorite reads, Michael Shermer's book entitled, oddly enough, Why People Believe Weird Things.  (And this book, in my opinion, should be required reading in every high school in America.)

But there's still a lot about the whole woo-woo belief system that mystifies me, and one of the things that baffles me most is why weird ideas come and go -- and then reappear.

I'm not talking about cases where the reappearance was caused by the money motive, as with all of the unreality shows now springing up like fungus after a rainstorm on networks like the This Used To Be About History But Isn't Anymore channel.  Programs with titles like Monster Quest, UFO Hunters, Ghost Adventures, Paranormal Witness, and Real Bigfoots of New Jersey.

Okay, I made the last one up.  But it's not really that much weirder than the actual ones that are out there.  And the plots are all the same; some people go out looking for whatever they're hunting, don't find it, and then high-five each other at the end as if their quest had been a raving success.

So it's no surprise that these shows resurrect interest in the paranormal.  But what is more perplexing to me is why all of a sudden woo-woo ideas from the past will catch hold and rise, zombie-like, from the grave, without there being any apparent monetary incentive involved.

In particular, I'm thinking of the Flat Earth Theory, which is only a "theory" in the sense of being "an idea that someone came up with."  Myself, I'd thought that the whole idea of the flat Earth had gone out of vogue somewhere back in the 15th century (and to be completely accurate, the fact that the Earth is a sphere had been proven without a shadow of a doubt way back in 240 B.C.E. by a Greek scientist named Eratosthenes).

I use the shadow metaphor deliberately, because what Eratosthenes did was to compare the angle of the shadow cast by a vertical rod in Syene, Egypt, with the angle of the shadow cast by an identical rod in Alexandria on the same day of the year -- and from that difference, plus the known distance between the two cities and a little bit of geometry, came damn close to getting the circumference of the Earth right.
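If you want to see just how simple the reasoning is, here's the calculation in miniature.  The 7.2-degree shadow angle and the roughly 800-kilometer Syene-to-Alexandria distance are the commonly quoted modern figures, not numbers from this post:

```python
# Eratosthenes' method: if sunlight arrives in effectively parallel rays, the
# difference in shadow angle between the two cities equals the angle between
# them as seen from the Earth's center. The cities therefore span that fraction
# of the full 360-degree circle, so scale their separation up accordingly.

shadow_angle_difference_deg = 7.2      # Alexandria vs. Syene at noon on the summer solstice
syene_to_alexandria_km = 800           # rough north-south distance between the cities

circumference_km = syene_to_alexandria_km * (360 / shadow_angle_difference_deg)
print(f"Estimated circumference: {circumference_km:,.0f} km")   # ~40,000 km (modern value: ~40,075 km)
```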

So you'd think that 2,200 years ago, the Flat Earthers would pretty much have said, "Oh.  Okay.  We were wrong."  But no.  They're back, and they're back with a vengeance.  As recently as ten years ago, Flat Earthers were kind of a fringe group, and the Flat Earth Society was populated by a membership that seemed to be half True Believers and half people who joined it to have a good laugh.  But now, there is an increasing number of Flat Earthers out there, and they are not amused by us scoffers.

They're mad as hell, and they're not gonna take it any more.

And, according to an article in The Atlantic, they are coming up with additional wacky ideas to add to their view of the world, based upon the premise that if you believe one idiotic idea, appending other idiotic ideas onto it makes it more sensible.  According to Sam Kriss, who wrote the article, not only do they believe that NASA is leading a coverup of all of the evidence for Earth being shaped like a platter (and, therefore, all of the astronomers are too, because apparently NASA uses a substantial part of its ever-shrinking budget to pay off the scientists and keep them from spilling the beans), but the geologists are in on it, too.

Why would the geologists care, you might ask?  Well, according to a small but vocal subset of Flat Earthers, another thing that is fake about the scientific view of the world is... forests.  Because the forests we have now aren't real forests, at least not in the sense that they're like they were back eons ago.  Thousands of years ago, before humans were the common species they are now, there were actual honest-to-goodness forests made of actual honest-to-goodness trees...

... that had heights measured in miles.

What is the evidence for all of this?  Well, some of the stuff that geologists hoodwink the populace into thinking are "eroded volcanic cores," like the Devil's Tower in Wyoming, are actually the stumps of these humongous trees.

[image courtesy of photographer Colin Faulkingham and the Wikimedia Commons]

So anyhow.  I know that this is a nonsensical idea, but what puzzles me is why it's caught on so strongly just in the last year or so.  Social media has been buzzing with stridently vocal Flat Earthers who believe stuff like the aforementioned horseshit about MegaTrees, and who consider skeptics like me either deluded sheeple or else NASA shills.  (Which reminds me, NASA: where the hell is my shill check?  I'm waiting.)

I'm hoping that this is just a phase, and that this will fizzle out the same way that Ouija boards did a couple of years ago when there was a sudden flurry of people wanting to communicate with the Spirit World.  But this one is kind of annoying, because the Flat Earthers don't just quietly do their thing -- these people are cantankerous.  They gum up websites like the r/skeptic subreddit with their nonsense, engaging with people who just can't stand to ignore them.

So I'm counting on this being an example of what C. S. Lewis was talking about when he said, "Fashions come and go, but mostly they go."  And in my opinion, this one can't go soon enough.