Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, March 5, 2021

Knockin' on heaven's door

It's been a while since we've had a truly goofy claim to consider, so to take a brief diversion from more serious issues, today I bring you:

NASA space telescopes have photographed the Celestial City of New Jerusalem, as hath been prophesied in the scriptures.

I wish I was making this up.  The claim appeared on the ultra-fundamentalist site Heaven & Hell, and the post, written by one Samuel M. Wanginjogu, reads like some kind of apocalyptic wet dream.

It opens with a bang.  "Despite new repairs to the Hubble Telescope," Wanginjogu writes, "NASA refuses to release old photos or take new ones of Heaven!"

Imagine that.

He goes on to explain further:
Just days after space shuttle astronauts repaired the Hubble Space Telescope in mid December, the giant lens focused on a star cluster at the edge of the universe – and photographed heaven! 
That’s the word from author and researcher Marcia Masson, who quoted highly placed NASA insiders as having said that the telescope beamed hundreds of photos back to the command center at Goddard Space Flight Center in Greenbelt, Md., on December 26. 
The pictures clearly show a vast white city floating eerily in the blackness of space. 
And the expert quoted NASA sources as saying that the city is definitely Heaven “because life as we know it couldn’t possibly exist in icy, airless space. 
“This is it – this is the proof we’ve been waiting for,” Dr. Masson told reporters. 
“Through an enormous stroke of luck, NASA aimed the Hubble Telescope at precisely the right place at precisely the right time to capture these images on film.  I’m not particularly religious, but I don’t doubt that somebody or something influenced the decision to aim the telescope at that particular area of space.
“Was that someone or something God himself?  Given the vastness of the universe, and all the places NASA could have targeted for study, that would certainly appear to be the case.”
Unsurprisingly, NASA researchers have "declined to comment."

Then we get to see the photograph in question:



After I stopped guffawing, I read further, and I was heartened to see that Wanginjogu is all about thinking critically regarding such claims:
I am not an expert in photography, but if you scrutinize the photo carefully, you find that the city is surrounded by stars if at all it was taken in space...  If the photo is really a space photo, then it could most likely be the Celestial city of God because it is clear that what is in the photograph is not a star, a planet or any other known heavenly body.
Yes!  Surrounded by stars, and not a planet!  The only other possibility, I think you will agree, is that it is the Celestial City of God.

Wanginjogu then goes through some calculations to estimate the size of New Jerusalem:
If an aero plane [sic] passes overhead at night, you are able to see the light emitted by it.  If that aero plane [sic] was to go higher up from the surface of the earth, eventually you won’t be able to see any light from it and that is only after moving a few kilometers up.  This is because of its small size.  Yet our eyes are able to see, without any aid, stars that are millions of light years away.  This is because of their large size. 
The further away an object is from the surface of the earth, then the bigger it needs to be and the more the light it needs to emit for it to be seen from earth.
The city of New Jerusalem is much smaller than most of the stars that you see on the sky.  To be more precise, it is much smaller than our planet earth.  Remember that here we are not talking of the entire heaven where God lives but of the City of New Jerusalem.  The city of New Jerusalem is currently located in heaven.  Of course, heaven is much larger that the city itself.  The photo seems to be of the city itself rather than the entire heaven.
Some solid astrophysics, right there.  He then goes on to use the Book of Revelation to figure out how big the city prophesied therein must be, and from all of this he deduces that the Celestial City must be somewhere within our Solar System for Hubble to have captured the photograph.  He also uses the testimony of one Seneca Sodi, who apparently saw an angel and asked him how far away heaven was, and the angel said, "Not far."
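For the record, the actual physics here is the inverse-square law: a star's apparent brightness scales with its luminosity divided by the square of its distance, which is why naked-eye stars are extremely luminous, not merely "large" -- and why they're thousands, not millions, of light years away.  A quick back-of-the-envelope sketch in Python (the absolute magnitudes below are rough textbook values I'm plugging in for illustration, nothing from the article):

```python
import math

def apparent_magnitude(abs_mag, dist_ly):
    """Apparent magnitude via the distance modulus: m = M + 5*log10(d_pc / 10)."""
    d_pc = dist_ly / 3.2616  # light years per parsec
    return abs_mag + 5 * math.log10(d_pc / 10)

NAKED_EYE_LIMIT = 6.0  # roughly the faintest magnitude visible without aid

# A Sun-like star (absolute magnitude ~4.8) at increasing distances.
# Note that it drops below naked-eye visibility almost immediately:
for d in (50, 100, 1_000, 1_000_000):
    m = apparent_magnitude(4.8, d)
    visible = "visible" if m <= NAKED_EYE_LIMIT else "invisible"
    print(f"{d:>9,} ly: m = {m:5.1f}  ({visible})")

# A hyper-luminous supergiant (absolute magnitude ~ -8.4, roughly
# Deneb-like) is still naked-eye visible at ~2,600 light years:
print(f"supergiant at 2,600 ly: m = {apparent_magnitude(-8.4, 2600):.1f}")
```

A Sun-like star falls below the naked-eye limit at around fifty light years; the most distant naked-eye stars are supergiants a few thousand light years away.  So the visibility of a star has everything to do with luminosity over distance squared, and nothing to do with the article's "this is because of their large size."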

So there you have it.

The best part, though, was when I got about halfway through, and I found out where Wanginjogu got the photograph from.  (Hint: not NASA.)  The photograph, and in fact the entire claim, originated in...

... wait for it...

... The Weekly World News.

Yes, that hallowed purveyor of stories about Elvis sightings, alien abductions, and Kim Kardashian being pregnant with Bigfoot's baby.  Even Wanginjogu seems to realize he's on shaky ground here, and writes:
This magazine is known to exaggerate stories and to publish some really controversial articles.  However, it also publishes some true stories.  So we cannot trash this story just because it first appeared in The Weekly World News magazine.  It is worthwhile to consider other aspects of the story.
He's right that you can't rule something out because of the source, but this pretty much amounts to something my dad used to say, to wit, "Even stopped clocks are right twice a day."   But suffice it to say that here at Skeptophilia headquarters we have considered other aspects of the story, and it is our firmly-held opinion that to believe this requires that you have a single scoop of butter-brickle ice cream where the rest of us have a brain.

Anyway, there you are.  NASA photographing heaven.  Me, I'm waiting for them to turn the Hubble the other direction, and photograph hell.  Since that's where I'm headed anyway, might as well take a look at the real estate ahead of time.

****************************************

The advancement of technology has opened up ethical questions we've never had to face before, and one of the most difficult is how to handle our sudden ability to edit the genome.

CRISPR-Cas9 is a system for doing what amounts to cut-and-paste editing of DNA, and since its discovery by Emmanuelle Charpentier and Jennifer Doudna, the technique has been refined and given pinpoint precision.  (Charpentier and Doudna won the Nobel Prize in Chemistry last year for their role in developing CRISPR.)

Of course, it generates a host of questions that can be summed up by Ian Malcolm's quote in Jurassic Park, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."  If it became possible, should CRISPR be used to treat devastating diseases like cystic fibrosis and sickle-cell anemia?  Most people, I think, would say yes.  But what about disorders that are mere inconveniences -- like nearsightedness?  What about cosmetic traits like hair and eye color?

What about intelligence, behavior, personality?

None of that has been accomplished yet, but it bears keeping in mind that ten years ago, the whole CRISPR gene-editing protocol would have seemed like fringe-y science fiction.  We need to figure this stuff out now -- before it becomes reality.

This is the subject of bioethicist Henry Greely's new book, CRISPR People: The Science and Ethics of Editing Humans.  It considers the thorny questions surrounding not just what we can do, or what we might one day be able to do, but what we should do.

And given how fast science fiction has become reality, it's a book everyone should read... soon.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Thursday, March 4, 2021

Doggie deception

I used to have a dog who had a conscience.

Her name was Doolin, and she was half border collie and half bluetick coonhound, which are -- and by this I mean no disparagement of Doolin, who was an awesome dog -- two breeds that should never be allowed to become friendly with one another.  The two pieces of her ancestry were at constant war.  Her hound side made her get into all manner of trouble, and her collie side made her feel horribly guilty afterward.  Like the time I got home from work, opened the front door, and the first thing I heard was Doolin's feet pattering downstairs, running away from me.  This was highly un-Doolin-like behavior -- she was ordinarily affectionate to the point of being clingy -- so I knew she'd done something she shouldn't have.

Sure enough, she'd pushed the kitchen door open, dumped the trash, and scattered its contents all over the house -- including playing kill-the-squirrel with a used coffee filter.

I stood at the head of the stairs, and said in a stern voice, "DOOLIN.  GET UP HERE."  She came to the base of the staircase, and proceeded to drag herself up on her belly, step by step, all the time her tail wagging frantically, every fiber of her being radiating, "OMG, Dad, I'm SOOOOOOO sorry, I couldn't help myself..."

At that point, I started laughing, and she immediately knew she was off the hook.  She got up and trotted the rest of the way up the stairs as if she hadn't a care in the world.

Not all dogs have this understanding of morality and consequences, however.  Our current dog, Guinness, a big, galumphing American Staffordshire terrier mix, goes through life with a cheerful insouciance regardless of whether he's doing what he's supposed to or not.  When he swiped a newly-opened block of expensive French brie off the counter and snarfed the whole thing down, he reacted with a canine shoulder-shrug when we yelled at him.

"What did you expect me to do?" he seemed to say.  "I'm a dog, guys."

But just because he's a dog doesn't mean he isn't a natty dresser.

The reason this comes up is a paper that came out this week in Animal Cognition entitled, "Deceptive-like Behavior in Dogs," by Marianne Heberlein, Marta Manser, and Dennis Turner, of the University of Zürich.  They set up a fascinating task in which dogs interacted with two human partners, one cooperative (likely to share any treats that showed up) and the other competitive (likely to keep any treats for him/herself).  After a short training period, the dogs not only were able to tell who was cooperative and who was competitive -- they started using deceptive behavior to trick the competitive partner into losing out.

The authors write:

We investigated in a three-way choice task whether dogs are able to mislead a human competitor, i.e. if they are capable of tactical deception.  During training, dogs experienced the role of their owner, as always being cooperative, and two unfamiliar humans, one acting ‘cooperatively’ by giving food and the other being ‘competitive’ and keeping the food for themselves.  During the test, the dog had the options to lead one of these partners to one of the three potential food locations: one contained a favoured food item, the other a non-preferred food item and the third remained empty.  After having led one of the partners, the dog always had the possibility of leading its cooperative owner to one of the food locations.  Therefore, a dog would have a direct benefit from misleading the competitive partner since it would then get another chance to receive the preferred food from the owner.  On the first test day, the dogs led the cooperative partner to the preferred food box more often than expected by chance and more often than the competitive partner.  On the second day, they led the competitive partner less often to the preferred food than expected by chance and more often to the empty box than the cooperative partner.  These results show that dogs distinguished between the cooperative and the competitive partner, and indicate the flexibility of dogs to adjust their behaviour and that they are able to use tactical deception.

Psychologist Stanley Coren, writing about the research in Psychology Today, explains why this response actually requires pretty sophisticated insight -- and a basic understanding of the concept of deception:

So now you can see what the dog's dilemma is: He has been trained to lead a person to a box containing food.  He knows that if he leads the generous person to the "best treat" he will get that treat.  He also knows that if he leads the selfish person to that treat, he will not get it.  However, there is an alternative: The dog could lie or deceive the selfish person by leading her to the less preferred treat, or even better, to the box with no treat at all in it — after all, she is mean and doesn't deserve a treat.  If the dog does that, then he knows that a short time later his owner is going to take him back and give him another opportunity to choose a box.  When that happens, if he chooses the box with the good treat, his owner will give it to him.  But this will happen only if he first deceives the selfish person so that the good treat is still in the box.

Most of the dogs they tested caught on to this really quickly -- which explains behavior like that of my friend's dog, who has been known to stare out of the window and bark like hell until my friend stands up to see what's out there, at which point his dog will immediately stop barking and jump up into the now-vacated, and still warm, recliner.

All of which shows that humans and dogs have been in close company long enough that our canine friends have come to understand human psychology, perhaps better than we understand theirs.  My guess, though, is that Guinness doesn't really care how much we intellectualize about his behavior.  He's more focused on waiting until we leave another block of cheese unguarded on the kitchen counter.

****************************************





Wednesday, March 3, 2021

The creative relationship

When I was in freshman lit -- a lot of years ago -- we were assigned to read and analyze Robert Frost's classic poem, "Stopping by Woods on a Snowy Evening."

Mostly what I remember about the discussion that ensued was the professor telling us that when an interviewer asked Frost himself what the poem meant, Frost replied that it wasn't intended to be allegorical, or symbolic of anything; it was simply a recounting of a scene, a weary traveler pausing for a moment to appreciate the beauty of a snowy woodland.

"Of course," the professor went on, cheerfully confident, "we know that a poet of Frost's stature wouldn't produce anything that simplistic -- so let's see what symbolism we can find in his poem!"

I recall being kind of appalled, mostly at the professor's hubris in thinking that his own opinions about meaning overrode what the poet himself intended.  Since then, though, I've begun to wonder.  I still think the professor was a bit of a cocky bastard, don't get me wrong; but I've come to realize that creativity implies a relationship -- it's not as simple as writer (or artist or composer) creating, and reader (or observer or listener) consuming.

This topic comes up because a couple of days ago, a friend of mine sent me a link to a video by Aldous Harding, a brilliant singer/songwriter from New Zealand, performing her song "The Barrel."


The song is weird, mesmerizing, strangely beautiful, and the video is somewhere in that gray area at the intersection of "evocative" and "fever dream."  The lyrics are downright bizarre in places:
The wave of love is a transient hut
The water's the shell and we are the nut
But I saw a hand arch out of the barrel

Look at all the peaches
How do you celebrate
I can't appearance out of nowhere
What does it mean?  Harding herself wants to leave that, at least in part, up to the listener.  In an interview with NPR, she said, “I realized that the video was a well-intended opinion of mine to just keep it loose.  I feel we’re expected to be able to explain ourselves...  But I don’t necessarily have that in me the way you might think."

It's wryly funny, especially in light of the long-ago pronouncements of my freshman lit professor, that a lot of people are weighing in on the song and interpreting it in a variety of mutually-exclusive ways.  One writer said that it's about female empowerment and escaping from abusive relationships.  Another suggests that it describes how "the scariest thing is looking in the mirror and not recognising what you see staring back at you."  A review in The Guardian lists other interpretations that have been suggested:
Depending on whose interpretation you plumped for, the video was either a homage to Alejandro Jodorowsky’s surreal 1973 film The Holy Mountain, a nod to the national dress of Wales (where [Harding's album] Designer was partly recorded and where Harding currently resides), analogous to the faintly disturbing vision of pregnancy found in Sylvia Plath’s 1960 poem "Metaphors," inspired by postmodernist poet Susan Howe’s book Singularities, which surveys the 17th-century First Nation wars in New England, [or] somehow related to menstruation.
Watch it... and see what you think.

Like my lit professor, what gets me about a lot of these interpretations is how certain they sound.  My own reaction was that the lyrics fall into the realm of "nearly making sense," and that part of why they're fascinating -- and why I've watched the video several times -- is that there's a real art to using language that way, neither being too overt about what you mean nor devolving into complete nonsense.

Creativity, I think, implies a relationship between producer and consumer, and because of that, the producer can't always control where it goes.  Readers, listeners, and observers bring to that activity their own backgrounds, opinions, and knowledge, and that is going to shape what they pull out of the creative experience.  And, of course, this is why sometimes that relationship simply fails to form.  I love the music of Stravinsky, while it leaves my wife completely cold -- she thinks it's pointless cacophony.  A lot of people are moved to tears by Mozart, but I find much of his music inspires me to say nothing more than "it's nice, I guess."

It's part of why I have zero patience for genre snobs and self-appointed tastemakers.  If some piece of creative work inspires you, or evokes emotions in you, it's done its job, and no one has the slightest right to tell you that you're wrong for feeling that way.  Honestly, I'm delighted if Mozart grabs you by the heart and swings you around; that's what music is supposed to do.  Just because I'm more likely to have that experience listening to Firebird than Eine Kleine Nachtmusik doesn't mean I'm right and you're wrong; all it means is that human creativity is complex, intricate, and endlessly intriguing.

So don't take it all that seriously if someone tells you what a poem, lyric, or piece of art or music means, even if that person is a college professor.  Enjoy what you enjoy, and bring your own creativity to the relationship.  It may be that Robert Frost didn't mean "Stopping by Woods on a Snowy Evening" to be anything more than a depiction of a scene; but that doesn't mean you can't bring more to the reading, and pull more out of the reading, yourself.

And isn't that what makes the creative experience magical?

****************************************





Tuesday, March 2, 2021

Here comes the sun

If you needed further evidence of why we need sound science education -- and what happens when we don't -- look no further than "Sun Gazing: Why I Stare At the Sun," over at the site in5d Esoteric, Metaphysical, and Spiritual Database.

And in case you're thinking, "No... that headline can't really mean what it sounds like it means," unfortunately it does.

[Image is in the Public Domain courtesy of NASA]

Right out of the starting gate, we're told that all of the stuff we've been told about sunlight exposure causing skin cancer, skin damage, and sun blindness is wrong.  "All of these things," the author tells us, "have little to do really with the sun."

In fact, the opposite is true.  Sun exposure heals melanoma.

So then, what causes skin cancer and sun blindness?  Respectively, the answers are: toxins (of course), and...

... glasses.

Lest you think I'm making this up, here's the relevant passage:
Your skin is your largest eliminatory organ, whereby unprocessed toxins are released through the skin’s pores.  Interactions between the toxins and the sun’s rays, bring about what we know of, as skin cancer. 
Skin damage, such as leathering of the skin, is caused by lack of EFA’s in the diet.  Sun blindness or damage to the eyes, is caused by the use of corrective lenses.  Glasses, and contact lenses both, cause an unnatural glare on the eyes, when exposed to the sun.  This can cause serious damage to the eyes over time.
I just got a new pair of glasses, because (1) I found I was running into walls more than is recommended, and (2) my wife was tired of my handing her stuff with small print and saying, "Carol, is this actually writing?  Like, in English?  If so, what the hell does it say?"  Little did I know that it would cause me to experience "unnatural glare."  I thought all they did was help me see better.

Me, just asking for "serious eye damage"

The other thing I wondered about was "EFAs."  These are never defined in the article, but I found out that the term stands for "essential fatty acids," i.e., linoleic acid and alpha-linolenic acid.  So apparently if you consume enough of those, sunburn isn't a problem.

We're also told that sunscreen causes cancer.  So use sesame oil instead.  Presumably that way you'll hear a nice crackling sound as you sit in the sun, similar to chicken wings hitting the oil in a deep fat fryer.

Then we get to the gist of the article, which suggests that we spend up to fifteen minutes a day staring at the sun.  It has to be near sunrise or sunset, though:
The practice entails looking at the rising or setting sun one time per day only during the safe hours.  No harm will come to your eyes during the morning and evening safe hours.  The safe hours are anytime within 1-hour window after sunrise or anytime within the 1-hr window before sunset.  It is scientifically proven beyond a reasonable doubt that during these times, one is free from UV and IR rays exposure, which is harmful to your eyes.
Righty-o.  It is "scientifically proven" that the sun waits for an hour after rising to switch on its ultraviolet and infrared rays, probably after it's had its second cup of coffee.

Then we're given a variety of puzzling statements and directives:
  • Food makes us commit the maximum pain to others and exploit others.
  • You should walk barefoot for 45 minutes daily for the rest of your life.
  • The sun energy or the sunrays passing through the human eye are charging the hypothalamus tract, which is the pathway behind the retina leading to the human brain.  As the brain receives the power supply through this pathway, it is activated into a "brainutor."  [Nota bene: I am not making this word up.]  One of the software programs inherent in the brain will start running and we will begin to realize the changes since we will have no mental tension or worries.
  • 70 to 80% of the energy synthesized from food is taken by the brain and is used up in fueling tensions and worries.
  • The pineal gland has certain psychic and navigational functions.  Navigational means one can "fly like the birds."
  • After six months of sungazing you will start to "have the original form of micro food, which is our sun."  Whatever the fuck that means.  Additionally, this can avoid the toxic waste that you take into your body while you eat regular food.
  • Photosynthesis, which we misunderstand, does not in fact need chlorophyll.
So science be damned, apparently.  But that won't matter to you, because after nine months of staring at the sun, "you have become a solar cooker."

And no, I did not make that statement up, either.

It's kind of funny that, despite being unequivocal about how wonderful sun gazing is, the author seems to be aware that this article is 100% unadulterated horseshit.  At the beginning of the article is the following disclaimer:
PLEASE NOTE: This sungazing information is for educational purposes only.  We do not recommend sungazing to anyone.  If you are considering sun gazing, please research this as much as possible.
I dunno, sure as hell sounded like you were recommending it to me.  But in case we were uncertain about that point, it's reiterated at the end:
Disclaimer: The information on this web site is presented for the purpose of educational and free exchange of ideas and speech in relation to health and awareness only.  It is not intended to diagnose any physical or mental condition.  It is not intended as a substitute for the advice and treatment of a licensed professional.  The author of this website is neither a legal counselor nor a health practitioner and makes no claims in this regard.
I'm no legal expert either, but what does the statement "After 3-6 months of sun gazing, physical diseases will start to be cured" sound like to you?

It's obvious that what they're trying to do is to avoid having some newly-blind person sue the shit out of them.  But as far as I understand, you can't just give people bogus medical advice and then get away with it by saying at the end, "Please note: This bogus medical advice is not actually medical advice!"

I'd like to think that no one is gullible enough to fall for this, but you just know that there will be people who are.  Right now there are probably people out there staring at the sun in order to activate the higher vibrations of their chakras, or some such nonsense, who will spend the rest of the day tripping over curbs because they've burned a hole directly through their retinas.

At this point in writing this blog, I'm beginning to lose my sympathy for the people who are getting suckered.  There are laws in place to protect people from falling prey to fraudulent medical advice, but at some point you just have to learn enough science to protect yourself.  There will always be charlatans out there trying to sell the newest variety of snake oil, not to mention well-intentioned people who are (not to put too fine a point on it) insane.  So arming yourself with a little bit of science is really your best bet.

That, or a good pair of sunglasses.

****************************************





Monday, March 1, 2021

Symbols, sigils, and reality

When I was little, I had a near-obsession with figuring out whether things were real.

I remember pestering my mom over and over, because I felt sure there was some essential piece of understanding I was missing.  After much questioning, I was able to abstract a few general rules:

  • People like Mom, Dad, Grandma, and our next-door neighbor were 100% real.
  • Some books were called non-fiction and were about people like Abraham Lincoln, who was real even though he wasn't alive any more.
  • For people in live-action shows, like Lost in Space,  the actors were real people, but the characters they were depicting were not real.
  • Cartoons were one step further away.  Neither Bugs Bunny's adventures, nor his appearance, were real, but his voice was produced by a real person who, unfortunately, looked nothing like Bugs Bunny.
  • Characters in fictional stories were even further removed.  The kids in The Adventures of Encyclopedia Brown weren't real, and didn't exist out there somewhere even though they seemed like they could be real humans.  
  • Winnie-the-Pooh and the Cat in the Hat were the lowest tier; they weren't even possibly real.

So that was at least marginally satisfying.  At least until the next time I went to church and started asking some uncomfortable questions about God, Jesus, the angels, et al.  At this point my mom decided I'd had about as much philosophy as was good for a five-year-old and suggested I spend more time playing outdoors.

The question of how we know something has external reality never really went away, though.  It's kind of the crypto-theme behind nearly all of my novels; a perfectly ordinary person is suddenly confronted with something entirely outside of his/her worldview, and has to decide if it's real, a hoax, or a product of the imagination -- i.e., a hallucination.  Whether it's time travel (Lock & Key), a massive and murderous conspiracy (Kill Switch), an alien invasion (Signal to Noise), a mystical, magic-imbued alternate reality (Sephirot), or the creatures of the world's mythologies come to life (The Fifth Day), it all boils down to how we can figure out if our perceptions are trustworthy.

The upshot of it all was that I landed in science largely because I realized I couldn't trust my own brain.  It gave me a rigorous protocol for avoiding the pitfalls of wishful thinking and an inherently faulty sensory-integrative system.  My stance solidified as, "I am not certain if _____ exists..." (fill in the blank: ghosts, an afterlife, psychic abilities, aliens, Bigfoot, divination, magic, God) "... but until I see some hard evidence, I'm going to be in the 'No' column."

This whole issue was brought to mind by an article in Vice sent to me by a loyal reader of Skeptophilia a couple of days ago.  In "Internet Occultists are Trying to Change Reality With a Magickal Algorithm," by Tamlin Magee, we find out that today's leading magical (or magickal, if you prefer) thinkers have moved past the ash wands and crystal balls and sacred fires of the previous generation, and are harnessing the power of technology in the service of the occult.

A group of practitioners of magic(k) have developed something called the Sigil Engine, which uses a secret algorithm to generate a sigil -- a magical symbol -- representing an intention that you type in.  The result is a geometrical design inside a circle based upon the words of your intention, which you can then use to manifest whatever that intention is.
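The Sigil Engine's actual algorithm is secret, so purely for illustration, here's an invented toy version in the spirit of the traditional pen-and-paper method: drop the vowels and repeated letters from the intention, then connect the surviving letters' fixed positions around a circle.

```python
import math
import string

# The 21 consonants, each assigned a fixed position on a unit circle.
CONSONANTS = [c for c in string.ascii_lowercase if c not in "aeiou"]

def toy_sigil(intention):
    """Reduce the intention the traditional way -- drop vowels and
    repeated letters -- then map each surviving letter to its point
    on the circle; the 'sigil' is the path connecting the points."""
    letters = []
    for ch in intention.lower():
        if ch in CONSONANTS and ch not in letters:
            letters.append(ch)
    points = []
    for ch in letters:
        angle = 2 * math.pi * CONSONANTS.index(ch) / len(CONSONANTS)
        points.append((round(math.cos(angle), 3), round(math.sin(angle), 3)))
    return points

print(toy_sigil("love and compassion"))
```

Feed the resulting vertices to any drawing routine and you get an angular, vaguely occult-looking glyph unique to the phrase -- which is presumably the general idea, whatever secret sauce the real Engine adds.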

So naturally, I had to try it.  I figured "love and compassion" was a pretty good intention, so that's what I typed in.  Here's the sigil it generated:


Afterward, what you're supposed to do is "charge" it to give it the energy to accomplish whatever it was you wanted it to do.  Here's what Magee has to say, which I'm quoting verbatim so you won't think I'm making this up:

Finally, you've got to "charge" your creation.  Methods for this vary, but you could meditate, sing at, or, most commonly, masturbate to your symbol, before finally destroying or forgetting all about it and awaiting the results.
Needless to say, I didn't do any of that with the sigil I got.  Especially the last-mentioned.  It's not that I have anything against what my dad called "shaking hands with the unemployed," but doing it while staring at a strange symbol seemed a little sketchy, especially since my intention was to write about it afterward.

Prudish I'm not, but I do have my limits.

Later on in the article, though, we learn that apparently this is a very popular method with practitioners, and in fact there is a large group of them who have what amounts to regular virtual Masturbate-o-Thons.  The idea is that if one person having an orgasm is powerful, a bunch of people all having orgasms simultaneously is even more so.  "Nobody else has synchronized literally thousands of orgasms to a single purpose, just to see what happens!" said one of the event organizers.

One has to wonder what actually did happen, other than a sudden spike in the sales of Kleenex.

In any case, what's supposed to happen is that whatever you do imbues the sigil with power.  The link Magee provided gives you a lot of options if meditating, singing, or masturbating don't work for you.  (A couple of my favorites were "draw the sigil on a balloon, blow it up, then pop it" and "draw it on your skin then take a shower and wash it away.")  

Magee interviewed a number of people who were knowledgeable about magic(k)al practices, and I won't steal her thunder by quoting them further -- her entire article is well worth reading.  But two things strike me: (1) they're all extremely serious, and (2) they're completely convinced that it works.  Which brings me back to my original topic:

How would you know if any of this was real?

In my own case, for example, the intention I inputted was "love and compassion."  Suppose I had followed the guidelines and charged it up.  What confirmatory evidence would show me it'd worked?  If I acted more compassionately toward others, or they toward me?  If I started seeing more stories in the news about people being loving and kind to each other?

More to the point, how could I tell if what had happened was because of my sigil -- or if it was simply dart-thrower's bias again, that I was noticing such things more because my attempt at magic(k) had put it in the forefront of my mind?

It might be a little more telling if my intention had been something concrete and unmistakable -- if, for example, I'd typed in "I want one of my books to go to the top of the New York Times Bestseller List."  If I did that, and three weeks later it happened, even I'd have to raise an eyebrow in perplexity.  But there's still the post hoc fallacy ("after this, therefore because of this"): you can't conclude that because one thing followed another in time, the first caused the second.

That said, it would certainly give me pause.

Honestly, though, I'm not inclined to test it.  However convinced the occultists are, I don't see any mechanism by which this could possibly work, and spending a lot of time running experiments would almost certainly generate negative, or at least ambiguous, results.  (I'm reminded of the answer from the Magic 8-Ball, "Reply Hazy, Try Again.")

So the whole thing seems to me to fall into the "No Harm If It Amuses You" department.  I'm pretty doubtful about sigil-charging, but there are definitely worse things you could be spending your time doing than concentrating on love and compassion.

Or, for that matter, pondering the existence of Bugs Bunny.  Okay, he's fictional, but he's also one of my personal heroes, and if that doesn't give him a certain depth of reality, I don't see what would.

****************************************





Saturday, February 27, 2021

Halting the conveyor

The Irish science historian James Burke, best known for his series Connections and The Day the Universe Changed, did a less-well-known two-part documentary in 1991 called After the Warming which -- like all of his productions -- approached the issue at hand from a novel angle.

The subject was anthropogenic climate change, something that back then was hardly the everyday topic of discussion it is now.  Burke has a bit of a theatrical bent, and in After the Warming he takes the point of view of a scientist in the year 2050, looking back to see how humanity ended up where they were by the mid-21st century.

Watching this documentary now, I have to keep reminding myself that everything he says happened after 1991 was a prediction, not a recounting of actual history.  Some of his scenarios were downright prescient, more than one of them accurate down to the year: the Iraq War, the catastrophic Atlantic hurricane barrage in 2005, droughts and heat waves in India, East Africa, and Australia -- and the repeated failure of the United States to believe the damn scientists and get on board with addressing the issue.  He was spot-on that the last thing the climatologists themselves would be able to figure out was the effect of climate change on the deep ocean.  He had a few misses -- the drought he predicted for the North American Midwest never happened, nor did the violent repulsion of refugees from Southeast Asia by Australia -- but his batting average is still pretty remarkable.

One feature of climate science he went into detail about, that beforehand was not something your average layperson would probably have known, was the Atlantic Conveyor -- known to scientists as AMOC, the Atlantic Meridional Overturning Circulation.  The Atlantic Conveyor works more or less as follows:

The Gulf Stream, a huge surface current of warm water moving northward along the east coast of North America, evaporates as it moves, and that evaporation does two things: it cools the water and makes it more saline.  Both have the effect of increasing its density, and just south of Iceland, it reaches the point of becoming dense enough to sink.  This sinking mechanism is what keeps the Gulf Stream moving, drawing up more warm water from the south, and that northward transport of heat energy is why eastern Canada, western Europe, and Iceland itself are as temperate as they are.  (Consider, for example, that Oslo, Norway and Okhotsk, Siberia are at roughly the same latitude -- 60 degrees North.)
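You can sketch the density mechanism numerically with a linearized equation of state for seawater.  (The coefficients below are rough, illustrative mid-latitude values, not the full nonlinear formula oceanographers actually use.)

```python
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temperature (C), salinity (psu)
ALPHA = 2.0e-4                       # thermal expansion coefficient (1/C), rough value
BETA = 8.0e-4                        # haline contraction coefficient (1/psu), rough value

def density(T, S):
    """Linearized equation of state: cooling (lower T) and
    evaporation (higher S) both increase seawater density."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

warm_south = density(T=20.0, S=36.0)  # warm, salty Gulf Stream water
cold_north = density(T=5.0, S=36.5)   # the same water after cooling and evaporating en route
freshened = density(T=5.0, S=34.0)    # the same cold water diluted by glacial meltwater

print(cold_north > warm_south)  # densification on the way north drives the sinking
print(freshened < cold_north)   # enough freshening undoes it
```

Cooling and evaporation each push the density up enough to sink the water south of Iceland; dump in enough fresh meltwater and the sinking stops, which is the whole worry.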

[Image is in the Public Domain courtesy of NASA/Goddard Space Flight Center]

Just about any high school kid, though, has heard about the Gulf Stream, usually in the context of the paths of sailing ships during the European Age of Exploration.  What many people don't know, however, is that if things warm up, leading to the melting of the Greenland Ice Sheet, it will cause a drastic drop in salinity at the north end of the Gulf Stream, making that blob of water too fresh to sink.

The result: the entire Atlantic Conveyor stops in its tracks.  No more transport of heat energy northward, putting eastern Canada and northwestern Europe into the deep freeze.  The heat doesn't just go away, though -- that would break the First Law of Thermodynamics, which is strictly forbidden in most jurisdictions -- it would just cause the south Atlantic to heat up more, boosting temperatures in the southeastern United States and northern South America, and fueling hurricanes the likes of which we've never seen before.

Back in 1991, this was all speculative, based on geological records from the last time something like that happened, on the order of thirteen thousand years ago.  The possibility was far from common knowledge; in fact, I think After the Warming was the first place I ever heard about it.

Well, score yet another one for James Burke.

A paper this week in Proceedings of the National Academy of Sciences describes research by Johannes Lohmann and Peter Ditlevsen of the University of Copenhagen indicating that, based on current freshwater output from the melting of Arctic ice sheets, the tipping point from "saline-enough-to-sink" to "not" might be too near to do anything about.  "These tipping points have been shown previously in climate models, where meltwater is very slowly introduced into the ocean," Lohmann said, in an interview with Gizmodo.  "In reality, increases in meltwater from Greenland are accelerating and cannot be considered slow."

The authors write -- and despite the usual careful word choice for scientific accuracy's sake, you can't help picking up the urgency behind the words:

Central elements of the climate system are at risk for crossing critical thresholds (so-called tipping points) due to future greenhouse gas emissions, leading to an abrupt transition to a qualitatively different climate with potentially catastrophic consequences...  Using a global ocean model subject to freshwater forcing, we show that a collapse of the Atlantic Meridional Overturning Circulation can indeed be induced even by small-amplitude changes in the forcing, if the rate of change is fast enough.  Identifying the location of critical thresholds in climate subsystems by slowly changing system parameters has been a core focus in assessing risks of abrupt climate change...  The results show that the safe operating space of elements of the Earth system with respect to future emissions might be smaller than previously thought.
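The "small-amplitude changes, fast rate" mechanism the authors describe is what dynamicists call rate-induced tipping, and you can see it in a toy one-variable model.  (This is a standard textbook example, not the authors' ocean model; in dx/dt = (x + r·t)² - 1, the system can track its slowly moving stable equilibrium only if the ramp rate r stays below a critical value, which here happens to be exactly 1.)

```python
def tips(rate, dt=1e-3, t_max=10.0):
    """Euler-integrate dx/dt = (x + rate*t)^2 - 1, a classic toy model
    of rate-induced tipping.  The forcing ramp rate -- not its final
    size -- decides whether the system loses the stable state."""
    x, t = -1.0, 0.0              # start exactly on the stable equilibrium
    while t < t_max:
        y = x + rate * t          # position relative to the moving equilibrium
        if y > 5.0:               # lost track of the equilibrium: tipped
            return True
        x += dt * (y * y - 1.0)
        t += dt
    return False

print(tips(0.5))   # below the critical rate: the state tracks the equilibrium
print(tips(1.5))   # above it: collapse, even though the forcing is smooth
```

Same forcing shape, different speed, qualitatively different outcome -- which is exactly why "how fast the meltwater arrives" matters as much as "how much."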

The Lohmann and Ditlevsen paper is hardly the first to sound the alarm.  Five years ago, a paper in Nature described a drop in temperature in the north Atlantic that is precisely what Burke warned about.  In that paper, written by a team led by Stefan Rahmstorf of the Potsdam Institute for Climate Impact Research, the authors write, "Using a multi-proxy temperature reconstruction for the AMOC index suggests that the AMOC weakness after 1975 is an unprecedented event in the past millennium (p > 0.99).  Further melting of Greenland in the coming decades could contribute to further weakening of the AMOC."

Once again, the sense of dismay is obvious despite being couched in deliberately cautious science-speak.

Even though the current administration in the United States explicitly says that addressing climate change is one of their top priorities, they're facing an uphill battle.  Baffling though it is to me, we are still engaged in fighting with people who don't even believe climate change exists, who understand science so little they're still at the "it was cold today, so climate change isn't happening" level of understanding.  (To quote Stephen Colbert, "And in other good news, I just ate dinner, so there's no such thing as world hunger.")  Besides outright stupidity (and apparent inability to read and comprehend scientific research), there's the added problem of elected officials being in the pockets of the fossil fuel industry, the money from which gives them a significant incentive for keeping the voting public ignorant about the issues.

Until we hit the tipping point Lohmann and Ditlevsen warn about.  At which point the effects will be obvious.

In other words, until it's too late.

If the Atlantic Conveyor shuts down, the results will no longer be arguable even by climate-change-denying knuckle-draggers like James "Senator Snowball" Inhofe.  The saddest part is that we were warned about this thirty years ago by a science historian in terms a layperson could easily understand, and -- in Burke's own words -- we sat on our hands.

And as with Cassandra, the character from Greek mythology who was blessed with the gift of foresight but cursed to have no one believe her, we'll only say, "Okay, I guess Burke and the rest were right all along" as the world's climate systems collapse around us.

********************************

Many of us were riveted to the screen last week watching the successful landing of the Mars rover Perseverance, and it brought to mind the potential for sending a human team to investigate the Red Planet.  The obstacles to overcome are huge: the four-odd-year voyage there and back requires means of producing food and purifying air and water that have to be damn near failsafe.

Consider what befell the unfortunate astronaut Mark Watney in the book and movie The Martian, and you'll get an idea of what the crew could face.

Physicist and writer Kate Greene was among a group of people who agreed to participate in a simulation of the experience, not of getting to Mars but of being there.  In a geodesic dome on the slopes of Mauna Loa in Hawaii, Greene and her crewmates stayed for four months in isolation -- dealing with all the problems Martian visitors would run into, not only the aforementioned problems with food, water, and air, but the isolation.  (Let's just say that over that time she got to know the other people in the simulation really well.)

In Once Upon a Time I Lived on Mars: Space, Exploration, and Life on Earth, Greene recounts her experience in the simulation, and tells us what the first manned mission to Mars might really be like.  It makes for wonderful reading -- especially for people like me, who are just fine staying here in comfort on Earth, but are really curious about the experience of living on another world.

If you're an astronomy buff, or just like a great book about someone's real and extraordinary experiences, pick up a copy of Once Upon a Time I Lived on Mars.  You won't regret it.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, February 26, 2021

The code switchers

When I was a graduate student in the School of Oceanography at the University of Washington -- an endeavor that lasted one semester, at which point I realized that I had neither the focus nor the brainpower to succeed as a research scientist -- I found an interesting commonality amongst the graduate students I hung out with.

This group of perhaps eight or nine twenty-somethings were without question the most vulgar, profane group I have ever been part of.  Regular readers of Skeptophilia, not to mention my friends and family, will know that my own vocabulary isn't exactly what anyone would call "prim and proper;" but while I am not averse to seasoning my speech with the occasional swear word, these people basically dumped in the entire spice cabinet.

The words "fuck" and "fuckin'" were like a staccato percussive beat to just about every sentence uttered.  You didn't say, "I gotta go to class," you said, "I fuckin' gotta go to class."  It was so bad most of us didn't even hear it any more, it was just "how we talked."  (And, I might add, it had the result of making those words completely lose their punch, and thus their effectiveness as emotionally-packed language.)  I have no idea why this particular group was so prone to obscene speech -- as you might expect, they were smart, scientifically-minded people with commensurately large vocabularies to choose from -- but once that became the norm, it was what one did to fit in.

What's most interesting is that when, at the end of that semester, I switched to the School of Education and started the track toward becoming a high school science teacher (a much more felicitous choice, as it turned out), I almost instantly adjusted my vocabulary to reflect the far more squeaky-clean speech of the Future Teachers of America.  I didn't have to think much about it; it wasn't like I had to obsessively watch my mouth until I learned how to control it.  The change was quick and required very little conscious thought to maintain.

This phenomenon is called code switching.  In its broadest definition, code switching occurs when a bilingual person flips between his/her two languages depending on the language of the listeners.  But context-dependent code switching occurs whenever we jump from one group we belong to into a different one, or from a group of strangers to a group of friends.

[Image licensed under the Creative Commons JasonSWrench, Transactional comm model, CC BY 3.0]

Code switching occurs in written language, too.  I write here at Skeptophilia, I write fiction, I have written science curriculum, I write emails to family, friends, coworkers, and total strangers (like the guy at the software company helpdesk and the woman at the bank who oversees our mortgage).  In each of those, my vocabulary, sentence structure, and degree of formality are different, not only in the words I choose, but in how exactly they're used.  Some of the differences are obvious; my wife gets emails ending with "xoxoxoxoxo;" my friends, usually with "cheers, g," and people I've contacted over business matters, "thank you so much, sincerely, Gordon."  (I'm a bit absent-minded at the best of times, and I live in fear of the day I send the guy at the helpdesk an email ending with the hugs-and-kisses signoff.)

But it turns out that these differences are apparent in other, more subtle ways.  A study out of the University of Exeter that appeared in the journal Behavior Research Methods this week describes a protocol for detecting code switching that had an accuracy of 70% -- even when they didn't look at words that would be obvious giveaways.

The researchers used an automated linguistic analysis program to look at writing done by the same people in two different contexts.  The participants in the study were chosen because they were active in two different sorts of social media groups, some having to do with parenting and others with gender equity, and the software was given passages they'd written in both venues -- with tipoff words like "childcare" and "feminism" removed.  It turned out the program was still able to discern which social media group the passage had been directed toward, simply by looking at structural features like use of pronouns and meaning-based characteristics like the number of emotionally-laden words used per paragraph.
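As a back-of-the-envelope illustration of the idea (the actual study used far more sophisticated automated analysis; the word lists and sample passages below are entirely invented), you can classify a passage from purely structural features -- say, first-person-pronoun rate and emotion-word rate per hundred words -- with something as crude as a nearest-centroid rule:

```python
# Structural features only -- no topic giveaways like "childcare."
PRONOUNS = {"i", "me", "my", "we", "our", "us"}
EMOTION = {"love", "worried", "happy", "angry", "proud", "scared"}

def features(text):
    """Pronoun and emotion-word rates per 100 words."""
    words = text.lower().split()
    return (100 * sum(w in PRONOUNS for w in words) / len(words),
            100 * sum(w in EMOTION for w in words) / len(words))

def centroid(passages):
    """Average feature vector of a set of passages."""
    feats = [features(p) for p in passages]
    return tuple(sum(f[i] for f in feats) / len(feats) for i in (0, 1))

def classify(text, centroids):
    """Label a passage by its nearest context centroid (squared distance)."""
    fx = features(text)
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(fx, centroids[label])))

# Two invented micro-corpora standing in for the two social-media contexts.
personal = ["i was so worried when my toddler would not sleep",
            "we are proud of our daughter and i love her"]
formal = ["the report says funding levels remained flat last year",
          "the committee published data on wage gaps yesterday"]
cents = {"personal": centroid(personal), "formal": centroid(formal)}

print(classify("i am scared my son will be unhappy", cents))  # -> personal
```

The real software presumably uses many more features and a real statistical model, but the principle is the same: the fingerprint of which "self" you're writing as survives even after the obvious vocabulary is stripped out.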

"It is the first method that lets us study how people access different group identities outside the laboratory on a large scale, in a quantified way," said study lead author Miriam Koschate-Reis, in an interview with Science Daily.  "For example, it gives us the opportunity to understand how people acquire new identities, such as becoming a first-time parent, and whether difficulties 'getting into' this identity may be linked to postnatal depression and anxiety.  Our method could help to inform policies and interventions in this area, and in many others."

Koschate-Reis and her team are next going to look into whether this kind of code switching is facilitated by location -- if, for example, an informal-to-formal switch might be easier in an academic location like a library than it is in a relaxed setting like a café.

In other words, if it might be better not to work on your dissertation in Starbucks.

All of which is fascinating, and once again points out the complexity of human communication -- and why it's so hard to get an artificial neural network to mimic written conversation convincingly.  Most of us code switch automatically, without even being aware of it, as we navigate daily through the many groups to which we belong.  Most AI speech I've seen, even when the responses are contextually correct and use the right vocabulary with the right structure, has an inflexible, stilted quality that is lacking in the generally more sensitive, free-flowing communication that happens between real people.  But perhaps that's another application the Koschate-Reis et al. research might have; if linguistic-analysis software can learn to detect code switching, that's the first step toward an AI actually learning how to apply it.

One step closer to passing the Turing Test.

In any case, I'd better run along and get my fuckin' day started.  I hope y'all have a good one.  Hugs & kisses. 💘

********************************
