Wednesday, March 31, 2021
Breaking the speed limit
Tuesday, March 30, 2021
The horse warriors
I'm always drawn to a historical mystery.
The difficulty, of course, is that a huge amount of our history comes with either highly unreliable records or no records at all, so a lot of mysteries will never be resolved satisfactorily. Two examples I read about recently that are as fascinating as they are frustrating are the true identity of Jack the Ripper, and the fate of the "Princes in the Tower" -- the two young sons of English King Edward IV, who disappeared around 1483 and were probably murdered.
As a quick aside, it bears mention that in the latter case the alleged culprit, King Richard III, was not the horrific, amoral villain you might think if your only source is Shakespeare's play. He was actually a competent ruler, not a selfish monster, nor was he a hunchback; the Shakespearean smear job makes for great theater, and it appeased the anti-Yorkist monarchy of the time, but it has unfairly tarred a man who -- if Henry Tudor hadn't decided to swipe the throne -- would probably have been considered a pretty good leader. He may still have had the princes killed, though; such behavior by a king anxious to eliminate rivals and put his own claim to the throne beyond question was not at all uncommon at the time. But Shakespeare having Queen Margaret call him a "deformed, bunch-backed toad" seems a little excessive.
Sometimes there's an entire ethnic group that is mysterious, again usually because we have mostly archaeological evidence to go by, supplemented by dubiously accurate accounts written down by other (often hostile) cultures. In fact, the whole reason why the subject of historical mysteries comes up is because of a paper I read a couple of days ago about the Scythians, the central Asian "horse warriors" who bumped up against the cultures their territory bordered -- principally Greece, Rome, China, and Persia -- and whose accounts form the basis of our knowledge of who they were.
In "Ancient Genomic Time Transect from the Central Asian Steppe Unravels the History of the Scythians," which appeared last week in Science Advances and was authored by a huge team led by Guido Alberto Gnecchi-Ruscone of the Max Planck Institute for the Science of Human History, we read about a genomic study of the remains of over a hundred individuals from Scythian burial sites, and find out that they were hardly a single unified ethnic group -- their genomes show a significant diversity and represent multiple origins. So the Scythians seem more like a loose confederation of relatively unrelated people than the single unified, monolithic culture of fierce nomads depicted in the writings of their rivals.
The authors write:
The Scythians were a multitude of horse-warrior nomad cultures dwelling in the Eurasian steppe during the first millennium BCE. Because of the lack of first-hand written records, little is known about the origins and relations among the different cultures. To address these questions, we produced genome-wide data for 111 ancient individuals retrieved from 39 archaeological sites from the first millennia BCE and CE across the Central Asian Steppe. We uncovered major admixture events in the Late Bronze Age forming the genetic substratum for two main Iron Age gene-pools emerging around the Altai and the Urals respectively. Their demise was mirrored by new genetic turnovers, linked to the spread of the eastern nomad empires in the first centuries CE.
If that's not intriguing enough, last week also brought new information about an artifact from the same region but a far earlier time: the "Shigir idol," which was pulled from a peat bog in the Ural Mountains in 1890. Its age is apparently greater than scientists had thought -- the new study suggests it's about 12,500 years old, making it the oldest known wooden representation of a human figure.
Monday, March 29, 2021
Viral reality
The gist of what the team did is to grow populations of bacteriophage Lambda (a virus that attacks and kills bacteria) in the presence of two different potential food sources -- specifically, E. coli cells bearing one of two different surface receptors to which the virus could attach. What happened was that the original bacteriophages were non-specialists -- they could attach to either receptor, but not very efficiently -- but over time, more of them accrued mutations that allowed them to specialize in attacking one receptor over the other. Ultimately, the non-specialists went extinct, leaving a split population in which each new species could not survive on the other's food source.
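To make the dynamic concrete, here's a toy simulation of the general idea -- a poorly-attaching generalist phage competing with efficient specialists for two host types. To be clear, this is just an illustrative sketch: the host numbers, attachment efficiencies, and mutation rate are values I invented, not anything taken from the actual experiment.

```python
# Toy model: a generalist phage (attaches weakly to both receptors) versus
# specialist mutants (attach strongly to one receptor each). Every value here
# is invented purely for illustration.

GROWTH = {
    "generalist":   {"A": 0.5, "B": 0.5},   # attaches to both receptors, but poorly
    "specialist_A": {"A": 1.6, "B": 0.0},   # attaches efficiently to receptor A only
    "specialist_B": {"A": 0.0, "B": 1.6},   # attaches efficiently to receptor B only
}
MUTATION_RATE = 0.001  # fraction of generalist offspring that become specialists

def next_generation(pop, hosts_per_receptor=5_000):
    new_pop = {t: 0 for t in pop}
    for receptor in ("A", "B"):
        # each phage type claims a share of these hosts in proportion to how
        # many of them there are and how well they attach to this receptor
        claims = {t: pop[t] * GROWTH[t][receptor] for t in pop}
        total = sum(claims.values()) or 1
        for t in pop:
            new_pop[t] += int(hosts_per_receptor * claims[t] / total)
    # rare mutations turn a few generalist offspring into specialists
    mutants = int(new_pop["generalist"] * MUTATION_RATE)
    new_pop["generalist"] -= mutants
    new_pop["specialist_A"] += mutants // 2
    new_pop["specialist_B"] += mutants - mutants // 2
    return new_pop

pop = {"generalist": 10_000, "specialist_A": 0, "specialist_B": 0}
for generation in range(301):
    if generation % 50 == 0:
        print(generation, pop)
    pop = next_generation(pop)
```

Run it and you can watch the generalist dwindle to nothing while the two specialist lineages take over -- which is, in miniature, the pattern the researchers observed.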
Nevertheless, this sticks another nail in the coffins of both the Intelligent Design proponents and the young-Earth creationists, the latter of whom believe that all of the Earth's species were created as-is six thousand or so years ago along with the Earth itself, and that the two-hundred-million-year-old trilobite fossils one sometimes finds simply dropped out of God's pocket while he was walking through the Garden of Eden or something.
So as usual, you can't logic your way out of a stance you didn't logic your way into. Still, I have hope that the tide is gradually turning. Certainly one cheering incident comes our way from Richard Lenski, who is justly famous for his groundbreaking study of evolution in bacteria and who co-authored the Meyer paper I began with. But Lenski will forever be one of my heroes for the way he handled Andrew Schlafly, who runs Conservapedia, a Wikipedia clone that attempts to remodel reality so that all of the ultra-conservative talking points are true. Schlafly had written a dismissive piece about Lenski's work on Conservapedia, to which Lenski responded. The ensuing exchange resulted in one of the most epic smackdowns by a scientist I've ever seen. Lenski takes apart Schlafly's objections piece by piece, citing data, kicking ass, and taking names. I excerpt the end of it below, but you can (and should) read the whole thing at the article on the "Lenski Affair" over at RationalWiki:
I know that I’ve been a bit less polite in this response than in my previous one, but I’m still behaving far more politely than you deserve given your rude, willfully ignorant, and slanderous behavior. And I’ve spent far more time responding than you deserve. However, as I said at the outset, I take education seriously, and I know some of your acolytes still have the ability and desire to think, as do many others who will read this exchange.

Sincerely,

Richard Lenski
I noticed that you say that one of your favorite articles on your website is the one on “Deceit.” That article begins as follows: “Deceit is the deliberate distortion or denial of the truth with an intent to trick or fool another. Christianity and Judaism teach that deceit is wrong. For example, the Old Testament says, ‘Thou shalt not bear false witness against thy neighbor.’” You really should think more carefully about what that commandment means before you go around bearing false witness against others.

I can only hope that there was a mic around after that so that Lenski could drop it.
So there you have it. Science finding out cool stuff once again, because after all, that's what science does. The creationists, it is to be hoped, retreating further and further into the corner into which they've painted themselves. It's probably a forlorn wish that this'll make Ken Ham shut up, but maybe he'll eventually have to adapt his strategy to address reality instead of avoiding it.
You might even say... he'll need to evolve.
Saturday, March 27, 2021
The ghost of Robert Schumann
"And there's quite a story to go with it," he said, and proceeded to tell us how the composer had written the piece in 1853, three years before his death, for his friend and fellow musician Joseph Joachim. Joachim, however, thought the piece too dark to have any chance at popularity, and after Schumann attempted suicide in 1854 the sheet music was deposited at the Prussian State Library in Berlin, and everyone forgot about it.
In 1933, eighty years later, two women conducting a séance in London were alarmed to hear a "spirit voice" that claimed to be Schumann, and that said they were to go to the Prussian State Library to recover an "unpublished work" and see to it that it got performed. So the women went over to Berlin, and found the music -- right where the "spirit" said it would be.
Four years later, in 1937, a copy was sent anonymously to the great violinist Yehudi Menuhin. Impressed, and delighted to have the opportunity to stage a first performance of a piece by a composer who had been dead for 81 years, he premiered it in San Francisco in October of that year. But the performance was interrupted by one of the two women who had "talked to Schumann," who claimed that she had a right to first performance, since she'd been in touch with the spirit world about the piece and had received that right from the dead composer himself!
We then got to hear the piece, which is indeed dark and haunting and beautiful, and you should all give it a listen.
And -- sorry to disappoint you if you bought the whole spirit-voice thing -- there is, indeed, a lot more to the story.
Turns out that the announcer was correct that violinist Joachim, when he received the concerto, didn't like it much. He commented in a letter that the piece showed "a certain exhaustion, which attempts to wring out the last resources of spiritual energy, though certain individual passages bear witness to the deep feelings of the creative artist." And he not only tucked it away at the Prussian State Library, he included a provision in his will (1907) that the piece should not be performed until 1956, a hundred years after Schumann's death. So while it was forgotten, it wasn't perhaps as unknown as the radio announcer wanted us to think.
Which brings us up to the séance, and the spirit voice, and the finding of the manuscript -- conveniently leaving out the fact that the two women at the séance, Jelly d'Arányi and Adila Fachiri, were sisters -- and the grand-nieces of none other than Joseph Joachim himself!
Funny how leaving out one little detail like that makes a story seem like it admits of no other explanation than the supernatural, isn't it? Then you find out that detail, and... well, not so much, any more.
It's hard to imagine that d'Arányi and Fachiri, who were fourteen and nineteen years old, respectively, when their great-uncle died, wouldn't have known about his will and its mysterious clause forbidding the performance of Schumann's last major work. Both sisters were themselves violinists of some repute, which only adds to their likely motive for bringing the piece to light -- with the séance supplying an extra frisson to the story, especially in the superstitious and spirit-happy 1930s. And the forwarding of the piece to Menuhin, followed by d'Arányi's melodramatic crashing of the premiere, has all the hallmarks of a well-crafted publicity stunt.
I have to admit that I was a little disappointed to discover how easy this one was to debunk. Of course, I don't know that my explanation is correct; maybe the two sisters were visited by the ghost of Robert Schumann, who had been wandering around in the afterlife, pissed off that his last masterwork wasn't being performed. But if you cut the story up using Ockham's Razor, you have to admit that the spirit-voices-and-séance theory doesn't make nearly as much sense as the two-sisters-pulling-a-clever-hoax theory.
A pity, really, because a good spooky story always adds something to a dark, melancholy piece of music. I may have to go listen to Danse Macabre, The Drowned Cathedral, and Night on Bald Mountain, just to get myself back into the mood.
Friday, March 26, 2021
The phantom whirlpool
The universe is a dangerous place.
I'm not talking about crazy stuff happening down here on Earth, although a lot of that certainly qualifies. The violence we wreak upon each other (and by our careless actions, often upon ourselves) fades into insignificance by comparison to the purely natural violence out there in the cosmos. Familiar phenomena like black holes and supernovas come near the top of the list, but there are others equally scary whose names are hardly common topics of conversation -- Wolf-Rayet stars, gamma-ray bursters, quasars, and Thorne-Zytkow objects come to mind, not to mention the truly terrifying possibility of a "false vacuum collapse" that I wrote about here at Skeptophilia a while back.
It's why I always find it odd when people talk about how peaceful the night sky is, or claim that the glory of the cosmos supports the existence of a benevolent deity. Impressive? Sure. Awe-inspiring? Definitely.
Benevolent? Hardly. The suggestion that the universe was created to be the perfectly hospitable home to humanity -- the "fine-tuning" argument, or "strong anthropic principle" -- conveniently ignores the fact that the vast majority of the universe is intrinsically deadly to terrestrial life forms, and even here on Earth, we're able to survive the conditions of less than a quarter of its surface area.
I'm not trying to scare anyone, here. But I do think it's a good idea to keep in mind how small and fragile we are. Especially if it makes us more cognizant of taking care of the congenial planet we're on.
In any case, back to astronomical phenomena that are big and scary and can kill you. Even the ones we know about don't exhaust the catalog of violent space stuff. Take, for example, the (thus far) unexplained invisible vortex that is tearing apart the Hyades.
The Hyades is a star cluster in the constellation Taurus, which gets its name from the five sisters of Hyas, a beautiful Greek youth who died tragically. Which brings up the question of whether any beautiful Greek youths actually survived to adulthood. When ancient Greeks had kids, if they had a really handsome son, did they look at him and shake their heads sadly, and say, "Well, I guess he's fucked"?
To read Greek mythology, you get the impression that the major cause of death in ancient Greece was being so beautiful it pissed the gods off.
Anyhow, Hyas's five sisters were so devastated by the loss of their beloved brother that they couldn't stop crying, so the gods took pity on them even though Zeus et al. were the ones who caused the whole problem in the first place, and turned them into stars. Which I suppose is better than nothing. But even so the sisters' weeping wouldn't stop -- which is why the appearance of the Hyades in the sky in the spring is associated with the rainy season. (In fact, in England the cluster is called "the April rainers.")
In reality, the Hyades have nothing to do with rain or tragically beautiful Greek youths. They are a group of fairly young stars, on the order of 625 million years old (the Sun is roughly seven times that age), and like most clusters they formed from a collapsing clump of gas. The Hyades are quite close to us -- 153 light years away -- and because of that they have been intensively studied. As in many clusters, the tidal forces generated by the relative motions of the stars are gradually pulling them away from each other, but here there seems to be something else, something far more violent, going on.
A press release from the European Space Agency this week describes a study of the motion of the stars in the Hyades indicating that their movements aren't the ordinary gentle dissipation most clusters undergo. A team led by astrophysicist Tereza Jerabkova used data from ESA's Gaia satellite to map members of the cluster and to identify other stars that once were part of the Hyades but have since been pulled away, and they found that the leading "tidal tail" -- the streamer of stars out ahead of the motion of the cluster as a whole -- has been ripped to shreds.
The only solution Jerabkova and her team found that made sense of the data is that the leading tail of the Hyades collided -- or is in the process of colliding -- with a huge blob of some sort, containing a mass ten million times that of the Sun. The problem is, an object that big, only 153 light years away, should be visible, or at least detectable, and there seems to be nothing there.
"There must have been a close interaction with this really massive clump, and the Hyades just got smashed," Jerabkova said.Thursday, March 25, 2021
A tsunami of lies
One of the ways in which the last few years have changed me is that they have made me go into an apoplectic rage when I see people sharing false information on social media.
I'm not talking about the occasional goof; I've had times myself that I've gotten suckered by parody news accounts, and posted something I thought was true that turns out to be some wiseass trying to be funny. What bothers me is the devastating flood of fake news on everything from vaccines to climate change to politics, exacerbated by "news" agencies like Fox and OAN that don't seem to give a shit about whether what they broadcast is true, only that it lines up with the agenda of their directors.
I've attributed this tsunami of lies to two causes: partisanship and ignorance. (And to the intersection of partisanship and ignorance, where lie the aforementioned biased media sources.) If you're ignorant of the facts, of course you'll be prone to falling for an appealing falsehood; and partisanship in either direction makes you much more likely to agree unquestioningly with a headline that lines up with what you already believed to be true.
Turns out -- ironically -- the assumption that the people sharing fake news are partisan, ignorant, or both might itself be an appealing but inaccurate assessment of what's going on. A study in Nature this week has generated some curious results showing that once again, reality turns out to be more complex than our favored black-and-white assessments of the situation.
A team made up of Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, and David Rand (of the Massachusetts Institute of Technology) and Gordon Pennycook (of the University of Regina) set out to see what was really motivating people to share false news stories online, and they found -- surprisingly -- that sheer carelessness played a bigger role than either partisanship or ignorance. In "Shifting Attention to Accuracy Can Reduce Misinformation Online," the team describes a series of experiments involving over a thousand volunteers that leads us to the heartening conclusion that there might be a better way to stem the flood of lies online than getting people to change their political beliefs or engaging in a massive education program.
The setup of the study was as simple as it was elegant. The researchers first tested the "ignorance" hypothesis by presenting test subjects with various headlines, some true and some false, and asking them to determine which were which. It turns out people are quite good at this: there was a full 56-percentage-point gap between how often they correctly rated true headlines as accurate and how often they mistakenly rated false ones as accurate.
Next, they tested the "partisanship" hypothesis. The test subjects did worse on this task, but not as much worse as you might guess; people remained substantially better at telling true statements from false ones even when those statements agreed with the majority stance of their political party, with the gap shrinking by only about ten percentage points. So partisanship plays a role in erroneous belief, but it's not the set of blinders many -- including myself -- would have guessed.
Last -- and this is the most interesting test -- they asked volunteers to assess their likelihood of sharing the news stories online, based upon their headlines. Here, the difference between sharing true versus false stories dropped to only six percentage points. Put a different way, people who are quite good at discerning false information overall, and still pretty good at recognizing it even when it runs counter to their political beliefs, will share the false stories almost as readily as the true ones anyhow.
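Since "percentage point gaps" can sound a little abstract, here's a quick back-of-the-envelope sketch of the computation. The rates below are numbers I made up purely to illustrate the pattern; they're not the study's actual data.

```python
# Difference, in percentage points, between how often true and false headlines
# get a given response (rated accurate, or shared). All rates are hypothetical.

def gap(true_rate, false_rate):
    return round((true_rate - false_rate) * 100)

accuracy_gap = gap(true_rate=0.70, false_rate=0.14)   # judging accuracy
sharing_gap  = gap(true_rate=0.40, false_rate=0.34)   # deciding whether to share

print(accuracy_gap, "point gap when judging accuracy")
print(sharing_gap, "point gap when deciding what to share")
```

The same people who can spot a false headline when asked to judge its accuracy barely distinguish true from false when the question is whether to hit "share."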
What it seems to come down to is simple carelessness. It's gotten so easy to share links that we do it without giving it much thought. I know I've been a bit shame-faced when I've clicked "retweet" to a link on Twitter, and gotten the message, "Don't you want to read the article first?" (In my own defense, it's usually been because the story in question is from a source like Nature or Science, and I've gotten so excited by whatever it was that I clicked "retweet" right away even though I fully intend to read the article afterward. Another reason is the exasperating way Twitter auto-refreshes at seemingly random moments, so if you don't respond to a post right away, it might disappear forever.)
The rate at which people detected (and chose not to share) fake headlines also turned out to be remarkably easy to improve. The researchers found that reminding people of the importance of accuracy at the start of the experiment decreased the volunteers' willingness to share false information, as did asking them to assess the accuracy of the headline prior to making the decision about whether to share it.
It does make me wonder, though, about the role of pivotal "nodes" in the flow of misinformation -- a few highly-motivated people who start the ball of fake news rolling, with the rest of us spreading around the links (whatever our motivation for doing so) in a more piecemeal fashion. A study by Zignal Labs, for example, found that the amount of deceptive or outright false political information on Twitter went down by a stunning 73% after Donald Trump's account was closed permanently. (Think of what effect it might have had if Twitter had made this decision back in 2015.)
In any case, to wrap this up -- and to do my small part in addressing this problem -- just remember before you share anything that accuracy matters. Truth matters. It's very easy to click "share," but with that ease comes a responsibility to make sure that what we're sharing is true. We ordinary folk can't dam the flow of bullshit singlehandedly, but each one of us has to take seriously our role in stopping up the leaks, small as they may seem.
Wednesday, March 24, 2021
The emergent mind
One of the arguments I've heard the most often in discussions of the possibility of developing a true artificial intelligence is that computers are completely mechanistic. I've heard this framed as, "You can only get out of them what you put into them." In other words, you could potentially program a machine to simulate intelligence, perhaps even simulate it convincingly. But there's nothing really in there -- it's just an input/output device no more intelligent than a pocket calculator, albeit a highly sophisticated one.
My question at this juncture is usually, "How are our brains any different?" Our neurons act on electrical voltage shifts; they fire (or not) based upon the movement of sodium and potassium ions, modulated by a complex group of chemicals called neurotransmitters that alter the neuron's ability to move those ions around. That our minds are a construct of this elaborate biochemistry is supported by the fact that if you introduce substances that alter the concentrations or reactivity of the neurotransmitters -- better known as "psychoactive drugs" -- it can radically alter perception, emotion, personality, and behavior.
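If you want a sense of just how mechanistic that picture is, here's a minimal "leaky integrate-and-fire" neuron -- the standard cartoon model from computational neuroscience -- written out in a few lines of Python. The constants are illustrative placeholders, not measured physiology.

```python
# A leaky integrate-and-fire neuron: membrane voltage drifts with input current
# and leaks back toward rest; crossing a threshold produces a "spike" and a
# reset. All constants are illustrative, not physiological measurements.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0):
    """Return (voltage_trace, spike_times) for a list of input current values."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # leak toward the resting potential, plus the push from the input current
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:          # threshold crossed: the neuron "fires"
            spikes.append(t)
            v = v_reset            # ...and resets, ready to integrate again
        trace.append(v)
    return trace, spikes

# a steady input (arbitrary units) is enough to make this cell fire repeatedly
trace, spikes = simulate_lif([20.0] * 200)
print(len(spikes), "spikes, the first few at times", spikes[:5])
```

There's nothing in that loop but arithmetic and a threshold -- voltage in, spikes out -- which is rather the point of the question above.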
But there's the nagging feeling, even amongst those of us who are diehard materialists, that there's something more in there, an ineffable ghost in the machine that is somehow independent of the biological underpinnings. Would a sufficiently complex electronic brain have this perception of self? Could an artificial intelligence eventually be capable of insight, of generating something more than the purely mechanical, rule-driven output we usually associate with computers? Of -- in other words -- creativity?
Or will that always be in the realm of science fiction?
If you doubt an artificial intelligence could ever have insight or creativity, some research out of a collaboration between Tsinghua University and the University of California - San Diego may make you want to reconsider your stance.
Ce Wang and Hui Zhai (Tsinghua) and Yi-Zhuang You (UC-San Diego) have created an artificial neural network that is able to look at raw data and figure out the equations that govern the underlying reality. In other words, it does what scientists do -- finds a mathematical model that accounts for observations. And we're not talking about something simple like F = ma, here; the Wang et al. neural network was given simulated experimental data on quantum particles -- mappings from the potential a particle sits in to its measured density -- and was able to develop...
... the Schrödinger Wave Equation.
To put this in perspective, the first data that gave us humans insight into the quantum-mechanical nature of matter and light -- Max Planck's studies of blackbody radiation in 1900 -- led to the highly non-intuitive notion that light is emitted in discrete packets, with energies that come in multiples of a minimum unit set by what is now known as Planck's constant. From there, further experimentation with particle momenta and positions by such luminaries as Albert Einstein, Louis de Broglie, and Werner Heisenberg led to the discovery of the weird wave/particle duality (subatomic particles are, in some sense, a wave and a particle simultaneously, and which properties you see depend on which you look for). Finally, Erwin Schrödinger put the whole thing together in the fundamental equation of quantum mechanics, now called the Schrödinger Wave Equation in his honor.
But it took twenty-five years.
For those of you who aren't physics types, here's the equation we're talking about:
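In its standard time-dependent form, for a single particle of mass m moving in one dimension in a potential V, it reads:

i\hbar \, \frac{\partial \Psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi(x,t)}{\partial x^2} + V(x,t)\,\Psi(x,t)

where Ψ is the particle's wave function and ħ is Planck's constant divided by 2π.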
And to make you feel better, I majored in physics and I can't really say I understand it, either.
Here's how Wang et al. describe their neural network's accomplishment:
Can physical concepts and laws emerge in a neural network as it learns to predict the observation data of physical systems? As a benchmark and a proof-of-principle study of this possibility, here we show an introspective learning architecture that can automatically develop the concept of the quantum wave function and discover the Schrödinger equation from simulated experimental data of the potential-to-density mappings of a quantum particle. This introspective learning architecture contains a machine translator to perform the potential to density mapping, and a knowledge distiller auto-encoder to extract the essential information and its update law from the hidden states of the translator, which turns out to be the quantum wave function and the Schrödinger equation. We envision that our introspective learning architecture can enable machine learning to discover new physics in the future.
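To give a feel for what that architecture looks like in practice, here's a stripped-down sketch of the same general idea: a "translator" network mapping a discretized potential to a density, plus an auto-encoder that squeezes the translator's hidden state through a small bottleneck. To be clear, this is emphatically not the authors' code -- the layer sizes, the fake training data, and the loss terms are all stand-ins I invented to show how the pieces fit together (and it assumes PyTorch is available).

```python
import torch
import torch.nn as nn

N = 64  # number of grid points used to discretize V(x) and rho(x)

class Translator(nn.Module):
    """Maps a potential V(x) to a predicted density rho(x) via a hidden state."""
    def __init__(self, hidden=128):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(N, hidden), nn.Tanh())
        self.decode = nn.Sequential(nn.Linear(hidden, N), nn.Softplus())

    def forward(self, V):
        h = self.encode(V)                          # hidden representation
        rho = self.decode(h)
        rho = rho / rho.sum(dim=-1, keepdim=True)   # densities sum to 1
        return rho, h

class Distiller(nn.Module):
    """Auto-encoder that compresses the hidden state into a few latent variables.
    In the paper, it's this bottleneck (and its update rule) that turned out to
    behave like a wave function; here it's just a generic compressor."""
    def __init__(self, hidden=128, latent=4):
        super().__init__()
        self.down = nn.Linear(hidden, latent)
        self.up = nn.Linear(latent, hidden)

    def forward(self, h):
        z = self.down(h)
        return self.up(z), z

# Minimal training loop on made-up data (random potentials, stand-in densities),
# just to show how the two pieces train together.
translator, distiller = Translator(), Distiller()
opt = torch.optim.Adam(list(translator.parameters()) +
                       list(distiller.parameters()), lr=1e-3)

V = torch.randn(32, N)                   # a batch of invented potentials
target_rho = torch.softmax(-V, dim=-1)   # invented "measured" densities

for step in range(200):
    rho, h = translator(V)
    h_rec, z = distiller(h)
    loss = (nn.functional.mse_loss(rho, target_rho) +
            nn.functional.mse_loss(h_rec, h))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the real study, the interesting part isn't the training loop; it's that when the researchers examined the distiller's bottleneck variables and the rule governing how they change, what they found was a quantum wave function obeying the Schrödinger equation.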
I read this with my jaw hanging open. I think I even said "holy shit" a couple of times. Because they're not stopping with the network recreating science we already know; they're talking about having it find new science that we currently don't understand fully -- or perhaps, that we know nothing about.
It's hard to imagine calling something that can do this anything other than a true intelligence. Yes, it's limited -- a neural network that discovers new physics can't write a poem or create a piece of art or hold a conversation -- but as each of those hurdles is passed, one by one, it's not hard to envision putting them together into one system that is not so far off from the AI brains envisioned by science fiction.
As exciting as it is, this also makes me a little nervous. Deep thinkers such as Stephen Hawking, Nick Bostrom, Marvin Minsky, and Roman Yampolskiy have all urged caution in the development of AI, suggesting that the leap from artificial neural networks being beneath human intelligence levels to being far, far beyond them could happen suddenly. When an artificial intelligence gains the ability to modify its own source code to improve its own functionality -- or, perhaps, to engage in such human-associated behaviors as self-preservation -- we could be in serious trouble. (The Wikipedia page on the existential risk from artificial general intelligence gives a great overview of the current thought about this issue, if you're interested, or if perhaps you find you're sleeping too soundly at night.)
None of which is meant to detract from Wang et al.'s accomplishment, which is stupendous. It'll be fascinating to see what their neural network finds out when it moves beyond the proof-of-concept stage and turns its -- mind? -- onto actual unsolved problems in physics.
It does leave me wondering, though, when all is said and done, if we'll be looking at a conscious emergent intelligence that might have needs, desires, preferences... and rights. If so, it will dramatically shift our perspective as the unquestioned dominant species on Earth, not to mention generating minds who might decide that it is in the Earth's best interest to end that dominance permanently.
At which point it will be a little too late to say, "Wait, maybe this wasn't such a good idea."
Tuesday, March 23, 2021
Halos and shadows
About two weeks ago, I wrote a piece here about a Scottish cryptid called the Am Fear Liath Mòr -- which roughly translates from Gaelic as "the big gray dude" -- a horrifying apparition that has been seen in the Cairngorms of northern Scotland. It's described as a human figure, but huge and hulking, that appears in the distance, understandably creating "uneasy feelings" in the observer.
As I mentioned in my previous post, if I were to see such a thing, my "uneasy feelings" would include being so terrified I'd drop dead of a brain aneurysm. Because I'm just that brave.
Well, thanks to a friend and long-time loyal reader of Skeptophilia, I've learned that this might be an unfortunate overreaction on my part. The Am Fear Liath Mòr may have a completely rational, scientific explanation, and one that doesn't require belief in some enormous Sasquatch knock-off wandering around in the Highlands. It seems like the Scottish Big Gray Dude might be an example of a phenomenon that occurs in foggy mountains called the "Brocken spectre."
The Brocken spectre (or "Brocken bow") is an optical effect that occurs when there are uniformly dispersed water droplets, all about the same size, at eye level -- as you find in a fog bank -- and you're backlit by sunlight. This requires specific conditions: not only fog in front of you, but enough clear sky behind you that there's sufficient sunlight to cast a shadow. The result is that your shadow -- or, more accurately, the light outlining it -- is refracted and reflected by the water droplets in the fog, creating a hugely magnified shadow surrounded by a halo of glare, sometimes with a rainbow sheen.
The phenomenon gets its name from the Brocken, a peak in the Harz Mountains of Germany, where it has been observed for centuries, and was described in detail by scientist Johann Silberschlag in 1780. The idea of the allegedly-supernatural Brocken spectre being nothing more than an optical illusion generated by a shadow and the refractive effects of water droplets is supported by the fact that it's always seen in the fog when the Sun is behind you, and it seems to shift size unpredictably -- unsurprising if you're moving (which I sure as hell would be if I saw one), and there's a breeze making the fog bank waver and shift.
So it turns out that the Big Gray Dude of Scotland may not be a cryptid at all, just a weird -- and fascinating -- localized weather phenomenon. And it also accounts for other instances of eerie figures in the mist, such as the "Dark Watchers" of the Santa Lucia Mountains in California and the strange looming presence reported by British mountaineer Eric Shipton while climbing Mount Kenya. It's also related to the optical phenomenon called heiligenschein ("holy light") which probably accounts for instances of people being seen surrounded by what appears to be a ghostly halo. The somewhat anticlimactic explanation for this latter effect is that it's not Tongues of Fire or the Radiance of God descending upon you, it's light scattering and a thoroughly understood mechanism called retroreflection that happens regardless of the holiness level of the person involved.
In any case, one more win for the scientific approach, even if it kind of blows away the mystique of a giant scary shadow-man wandering about in the Scottish Highlands. Skeptic though I am, I have to admit to being a little disappointed. It seems like if there's anywhere that should actually be haunted, it's the Cairngorms. But even so, it's somehow fitting that the thing that has been terrifying the superstitious for centuries turns out to be nothing more than...
... their own shadows.
Monday, March 22, 2021
The imaginary scientist
The unfortunate reality is that in this "Age of Information," where we as a species have the ability to store, access, and transfer knowledge with a speed that fifty years ago would have been in the realm of science fiction, it is harder than ever to know what's true and what isn't.
The internet is as good a conduit of bullshit as it is of the truth. Not only are there plenty of well-intentioned but ill-informed people, there are lots of folks who lie deliberately for their own ends -- monetary gain, power, influence, the dubious thrill of having pulled off a hoax, or just their "five minutes of fame." It used to be that in order to be successful, these purveyors of bad information had to go to the trouble and expense of writing a book, or at least of finding a way to get speaking engagements. Now that anyone with money and access can own a webpage, there's nothing stopping cranks, liars, hoaxers, and the rest from getting their message out there to the entire electronic world simultaneously.
When I taught a high school course in critical thinking, one of my mantras was "check your sources." If you find a claim online, where did it come from? What is the originator's background -- does it seem like (s)he has sufficient knowledge and expertise? Has it been checked and corroborated by others? If it's from a journal, is it a peer-reviewed source -- or one of the all-too-common "pay to play" journals that will take damn near anything you write if you're willing to pay them to do it? Does it line up with what we already know from science and history? (Another mantra was "nearly every time someone claims 'this new theory will overturn everything we know about physics!', it turns out to be wrong.")
None of this guarantees that the claim is correct, of course; but using those questions as general guidelines will help you to navigate the intellectual minefield of science representation on the internet.
Except when it doesn't.
As an example of this, have you heard of Camille Noûs?
I hadn't, until I read a troubling story that appeared last week in Nature, written by Cathleen O'Grady. Camille Noûs first showed up as a signatory on an open letter about science policy in France early last year, and since then has been listed as a co-author on no fewer than 180 different papers. She? He? -- the name "Camille" could be either, which I don't think is accidental -- has been racking up citation after citation, in a wide range of unrelated fields, including astrophysics, ecology, chemistry, and molecular biology.
Pretty impressive accomplishments in the world of research, where increasing specialization has resulted in what a friend of mine described as "researchers knowing more and more about less and less until finally they'll know everything about nothing."
This same narrowing of focus is why the red flag of Camille Noûs's ubiquity would never become apparent to many scientists; they might find the name over and over in papers from their own field -- evolutionary biology, say -- and never realize, probably never even see, that Noûs had also, astonishingly, co-authored papers in medical biochemistry.
So what's going on here?
By this point, it probably will come as no shock that Camille Noûs doesn't exist. The last name "Noûs" was chosen because "nous" means "we" in French, and is also a play on the Greek word νοῦς, which means "reason." Noûs was the brainchild of RogueESR, a French science advocacy group, as a way to personify collective efforts and knock the elitist attitude of some leading scientists down a peg. RogueESR protested the cost-saving approach by many research institutions of eliminating tenure-track positions and making just about all available openings temporary, project-specific research, and they decided to come up with a moniker representing the human, group-cooperative side of science.
"Hundreds of articles will make this name the top author on the planet," they wrote in a newsletter, "with the consequence of distorting certain bibliometric statistics and demonstrating the absurdity of individual quantitative assessment."