Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Friday, July 8, 2022

Setting the gears in motion

A couple of weeks ago, I was out for a run on a local trail, and I almost stepped on a snake.

Fortunately, here in upstate New York, we don't have any venomous snakes, unlike in my home state of Louisiana, where going for a trail run is taking your life into your hands.  It was just a garter snake, common and completely harmless, but it startled the hell out of me even though I like snakes.  What's interesting, though, is that in mid-stride I did a sudden course correction without even being consciously aware of it, put my foot down well to the snake's left (fortunately for it), and kept going with barely a stumble.  I was another three paces ahead when my conscious brain caught up and said, "Holy shit, I almost stepped on a snake!"

Thanks for the lightning-fast assessment of the situation, conscious brain.

It's kind of amazing how fast we can do these sorts of adjustments, and some recent research at the University of Michigan suggests that we do them better while running -- and more interesting still, that we get better at them the faster we run.

Running apparently triggers a rapid interchange of information between the right and left sides of the brain.  It makes sense; when you run, the two sides of your body (and thus the two sides of your brain) have to coordinate precisely.  Or at least they have to if you're trying to run well.  I've seen runners who look like they're being controlled by a team of aliens who only recently learned how the human body works, and still aren't very good at it.  "Okay, move left leg forward... and move the right arm back at the same time!... No, I mean forward!  Okay, now right leg backward... um... wait..."  *crash*  "Dammit, get him up off the ground and try it again, and do it right this time!"

But to run efficiently requires that you coordinate the entire body, and do it fast.  (In fact, a 2014 study found that a proper arm swing rhythm during running creates a measurable improvement in efficiency.)  The University of Michigan study that was published this week identified a particular kind of neural cross-talk between the two brain hemispheres when you run.  They call these patterns "splines" (because they look like the interlocking teeth of a gear wheel) and found that the faster you run, the more intense the splines get.

"Previously identified brain rhythms are akin to the left brain and right brain participating in synchronized swimming: The two halves of the brain try to do the same thing at the exact same time," said Omar Ahmed, who led the study.  "Spline rhythms, on the other hand, are like the left and right brains playing a game of very fast—and very precise—pingpong.  This back-and-forth game of neural pingpong represents a fundamentally different way for the left brain and right brain to talk to each other."

Me and some other folks at a race last month, splining like hell

"These spline brain rhythms are faster than all other healthy, awake brain rhythms," said Megha Ghosh, who co-authored the paper.  "Splines also get stronger and even more precise when running faster.  This is likely to help the left brain and right brain compute more cohesively and rapidly when an animal is moving faster and needs to make faster decisions."

More fascinating still is that the researchers found spline rhythms during one other activity: dreaming during the REM (rapid eye movement) stage of sleep.  So this could be yet another function of dreams -- rehearsing the coordinating rhythms between the two brain hemispheres, so that the pathways are well established when you need them while you're awake.  

"Surprisingly, this back-and-forth communication is even stronger during dream-like sleep than it is when animals are awake and running," Ahmed said.  "This means that splines play a critical role in coordinating information during sleep, perhaps helping to solidify awake experiences into enhanced long-term memories during this dream-like state."

So that's the latest news from the intersection of two of my obsessions, neuroscience and running.  It'll give me something to think about in a few minutes when I go out for my morning run.  Maybe it'll distract me from obsessively scanning the trail for snakes.

**************************************

Monday, May 9, 2022

Oops, I did it again

The following is a direct transcript of how I got welcomed into a multi-person business-related Zoom call a couple of years ago:

Me: How are you today?

Meeting leader: I'm fine, how are you?

Me: Pretty good, how are you?

Meeting leader: ...

Me: *vows never to open his mouth in public again*

I think we can all relate to this sort of thing -- and the awful sensation of realizing, microseconds after it leaves our mouths, that what we just said was idiotic.  When my then fiancée, now wife, told a mutual friend that she was getting married -- after we'd been dating for two years -- the friend blurted out, "To who?"  Another friend ended a serious phone call with her boss by saying, "Love you, honey!"  Another -- and I witnessed this one -- was at a trailhead in a local park, preparing to go for a walk as two cyclists were mounting their bikes and putting on their helmets.  He said to them, "Enjoy your hike!"

The funniest one, though, came from a friend who was in a restaurant, and the waitress asked what she'd like for dinner.  My friend said, "The half chicken bake, please."  The waitress said, "Which side?"  My friend frowned with puzzlement and said, "Um... I dunno... Left, I guess?"  There was a long pause, and the waitress, obviously trying not to guffaw, said, "No, ma'am, I mean, which side order would you like?"

I don't think my friend has been in that restaurant since.

This "oops" phenomenon probably shouldn't embarrass us as much as it does, because it's damn near ubiquitous.  The brilliant writer Jenny Lawson -- whose three wonderful books, Let's Pretend This Never Happened, Furiously Happy, and Broken (In the Best Possibly Way) should be on everyone's reading list -- posted on her Twitter (@TheBloggess -- follow her immediately if you don't already) a while back, "Airport cashier: 'Have a safe flight.'  Me: 'You too!'  I CAN NEVER COME HERE AGAIN.", and was immediately inundated by (literally) thousands of replies from followers who shared their own embarrassing, and hilarious, moments.  She devotes a whole chapter to these endearing blunders in her book Broken -- by the time I was done reading that chapter, my stomach hurt from laughing -- but here are three that struck me as particularly funny:

I walked up to a baby-holding stranger (thinking it was my sister) at my daughter's soccer game and said "Give me the baby."

A friend thanked me for coming to her husband's funeral.  My reply?  "Anytime."

A friend placed her order at drive thru.  She then heard, "Could you drive up to the speaker?  You're talking to the trash can."

Lawson responded, "How could you not love each and every member of this awkward tribe?"

This universal phenomenon -- particularly the moment of sudden realization that we've just said or done something ridiculous -- was the subject of a study at Cedars-Sinai Medical Center that came out last week, led by neuroscientist Ueli Rutishauser.  You'd think it'd be a difficult subject to study; how do you catch someone in one of those moments, and find out what's going on in the brain at the time?  But they got around this in a clever way: they studied epileptic patients who already had electrode implants in place to locate the focal points of their seizures, and had them perform a task designed to trigger mistakes.  It's a famous one called the Stroop Test, after psychologist John Ridley Stroop, who published a paper on it in 1935.  It's an array of color names, each printed in a different color from the one it names:


The task is to state the colors, not the names, as quickly as you can.

Most people find this really difficult to do, because we're generally taught to pay attention to what a word says and ignore what color it's printed in.  "This creates conflict in the brain," Rutishauser said.  "You have decades of training in reading, but now your goal is to suppress that habit of reading and say the color of the ink that the word is written in instead."  Most people, though, when they do make an error, realize it right away.  So this made it an ideal way to see what was happening in the brain in those sudden "oops" moments.
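If you want to try this on yourself or an unsuspecting friend, here's a minimal Python sketch that generates a Stroop-style list -- color words deliberately paired with mismatched ink colors.  The word list and the print format are my own choices for illustration, not anything taken from the study itself.

    import random

    COLORS = ["red", "green", "blue", "yellow", "purple", "orange"]

    def stroop_items(n=12, seed=None):
        """Generate n (word, ink) pairs where the ink color never matches the word."""
        rng = random.Random(seed)
        items = []
        for _ in range(n):
            word = rng.choice(COLORS)
            ink = rng.choice([c for c in COLORS if c != word])
            items.append((word, ink))
        return items

    if __name__ == "__main__":
        for word, ink in stroop_items(seed=42):
            # The task: say the ink color out loud, not the printed word.
            print(f"{word.upper():<8} (printed in {ink})")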

What Rutishauser et al. found is that there are two arrays of neurons that kick in when we make a mistake, a process called "performance monitoring."  The first is the domain-general network, which identifies that we've made a mistake.  Then, the domain-specific network pinpoints what exactly the mistake was.  This, of course, takes time, which is why we usually become aware of what we've just done a moment after it's too late to stop it.

"When we observed the activity of neurons in this brain area, it surprised us that most of them only become active after a decision or an action was completed," said study first author Zhongzheng Fu.  "This indicates that this brain area plays a role in evaluating decisions after the fact, rather than while making them."

Which is kind of unfortunate, because however much we rationalize those kinds of blunders as being commonplace, it's hard not to feel like crawling into a hole afterward.  But I guess that, given the fact that it's hardwired into our brains, there's not much hope of changing it.

So we should just embrace embarrassing situations as being part of the human condition.  We're weird, funny, awkward beasts, fumbling along as best we can, and just about everyone can relate to the ridiculous things we say and do sometimes.

But I still don't think I'd be able to persuade my friend to eat dinner at the restaurant where she ordered the left half of a chicken.

**************************************

Tuesday, November 2, 2021

Canine gap analysis

One of the reasons that it's (generally) much easier to learn to read a second language than it is to understand it in speech has to do not with the words, but with the spaces in between them.

Students learning to understand spoken conversation in another language have the common complaint that "they talk so fast."  They don't, really, or at least no faster than the speakers of your native language.  But unfamiliarity with the lexicon of the new language makes it hard to figure out where the gaps are between adjacent words.  Unless you concentrate (and sometimes even if you do), it sounds like one continuous stream of random phonemes.

As an aside, sometimes I have the same problem with English spoken with a different accent than the one I grew up with.  The character of Yaz in the last three seasons of Doctor Who is from Yorkshire, and her accent -- especially when she's agitated and speaking quickly -- sometimes leaves me thinking, "Okay, what did she just say?"  (That's why I usually watch with the subtitles on.)  This isn't unique to accents from the UK, of course; it's why a lot of non-southerners find southern accents difficult to parse.  Say to someone from Louisiana, "Jeetyet?" and they'll clearly hear "Did you eat yet?"; and one of the most common greetings is "howzyamommandem?"

I'd never really considered how important the spaces between the words are until I ran into some research last week in Current Biology in a paper entitled "Dogs Learn About Word Boundaries as Human Infants Do," which showed that dogs -- perhaps uniquely among non-human animals -- are able to use some pretty complex mental calculations to figure out where the gaps are in "Do you want to play ball?"  Say that phrase out loud, especially in an excited tone, and you'll notice that in the actual sounds there are minuscule gaps, or none at all, so what they're listening for can't be little bits of silence.

By looking at brain wave activity in pre-verbal infants presented with actual speech, speech using unfamiliar/rare words, and gibberish, scientists found that neural activity spiked when syllables were spoken that almost always (in the infant's experience) occur together.  An example is the phrase, "Do you want breakfast now?"  The syllables /brek/ and /fǝst/ aren't used much outside of the word "breakfast," so apparently the brain is doing some complex statistical calculations to identify that as a discrete word rather than part of the words coming before or after it.
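To get a feel for the kind of statistics involved, here's a toy Python sketch -- my own illustration, not the researchers' actual analysis -- that estimates the transitional probability between adjacent syllables in a stream of speech.  Word boundaries tend to fall where that probability drops.

    from collections import Counter

    def transitional_probabilities(syllables):
        """P(next | current) for each adjacent pair of syllables in the stream."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {pair: count / first_counts[pair[0]]
                for pair, count in pair_counts.items()}

    # A toy "speech stream," as an infant might hear it: one unbroken run of syllables.
    stream = ("do you want break fast now do you want milk now "
              "do you want break fast now you want the ball now").split()

    for (a, b), p in sorted(transitional_probabilities(stream).items(), key=lambda kv: -kv[1]):
        print(f"{a} -> {b}: {p:.2f}")

    # "break -> fast" comes out at 1.0 (in this stream they always co-occur), while
    # "want -> break" and "now -> do" are much lower -- a purely statistical hint that
    # "breakfast" is a single word and that a boundary tends to fall after "now".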

What the current research finds is that dogs are doing precisely the same thing when they listen to human language.

The authors write:

To learn words, humans extract statistical regularities from speech.  Multiple species use statistical learning also to process speech, but the neural underpinnings of speech segmentation in non-humans remain largely unknown. Here, we investigated computational and neural markers of speech segmentation in dogs, a phylogenetically distant mammal that efficiently navigates humans’ social and linguistic environment.  Using electroencephalography (EEG), we compared event-related responses (ERPs) for artificial words previously presented in a continuous speech stream with different distributional statistics...  Using fMRI, we searched for brain regions sensitive to statistical regularities in speech.  Structured speech elicited lower activity in the basal ganglia, a region involved in sequence learning, and repetition enhancement in the auditory cortex.  Speech segmentation in dogs, similar to that of humans, involves complex computations, engaging both domain-general and modality-specific brain areas.
I know that when I talk to Guinness -- not using the short, clipped words or phrases recommended by dog trainers, but full complex sentences -- he has this incredibly intent, alert expression, and I get the sense that he's really trying to understand what I'm saying.  I've heard people say that outside of a few simple commands like "sit" or "stay," dogs respond only to tone of voice, not the actual words spoken.

Apparently that isn't true.


So I suppose when I say "whoozagoodboy?", he actually knows it's him.

"Keeping track of patterns is not unique to humans: many animals learn from such regularities in the surrounding world, which is called statistical learning," said Marianna Boros of Eötvös Loránd University, who co-authored the study, in an interview with Vinkmag.  "What makes speech special is its efficient processing requires complex computations.  To learn new words from continuous speech, it is not enough to count how often certain syllables occur together.  It is much more efficient to calculate the probability of those syllables occurring together.  This is exactly how humans, even eight-month-old infants, solve the seemingly difficult task of word segmentation: they calculate complex statistics about the probability of one syllable following the other.  Until now we did not know if any other mammal can also use such complex computations to extract words from speech.  We decided to test family dogs’ brain capacities for statistical learning from speech.  Dogs are the earliest domesticated animal species and probably the one we speak most often to.  Still, we know very little about the neural processes underlying their word learning capacities."

So remember this next time you talk to your dog.  He might well be understanding more than you realize.  He might not get much if you read to him from A Brief History of Time, but my guess is that common speech is less of a mystery to him than it might have seemed.

**********************************

My master's degree is in historical linguistics, with a focus on Scandinavia and Great Britain (and the interactions between them) -- so it was with great interest that I read Cat Jarman's book River Kings: A New History of Vikings from Scandinavia to the Silk Road.

Jarman, who is an archaeologist working for the University of Bristol and the Scandinavian Museum of Cultural History of the University of Oslo, is one of the world's experts on the Viking Age.  She does a great job of de-mythologizing these wide-traveling raiders, explorers, and merchants, taking them out of the caricature depictions of guys with blond braids and horned helmets into the reality of a complex, dynamic culture that impacted lands and people from Labrador to China.

River Kings is a brilliantly-written analysis of an often-misunderstood group -- beginning with the fact that "Viking" isn't an ethnic designation, but an occupation -- and tracing artifacts they left behind as they traveled from their homelands in Sweden, Norway, and Denmark to Iceland, the Hebrides, Normandy, the Silk Road, and Russia.  (In fact, the Rus -- the people who founded, and gave their name to, Russia -- were Scandinavian explorers who settled in what is now Ukraine and western Russia, intermarrying with the Slavic population there and eventually forming a unique melded culture.)

If you are interested in the Vikings or in European history in general, you should put Jarman's book on your to-read list.  It goes a long way toward replacing the legendary status of these fierce, sea-going people with a historically accurate reality that is just as fascinating.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Wednesday, May 26, 2021

Thanks for the memories

I've always been fascinated with memory.  From the "tip of the tongue" phenomenon, to the peculiar (and unexplained) experience of déjà vu, to why some people have odd abilities (or inabilities) to remember certain types of information, to caprices of the brain such as its capacity for recalling a forgotten item once you stop thinking about it -- the way the brain handles storage and retrieval of memories is a curious and complex subject.

Two pieces of research have given us a window into how the brain organizes memories, and their connection to emotion.  In the first, a team at Dartmouth and Princeton Universities came up with a protocol to induce test subjects to forget certain things intentionally.  While this may seem like a counterproductive ability -- most of us struggle far harder to recall memories than to forget them deliberately -- consider the applicability of this research to debilitating conditions such as post-traumatic stress disorder.

In the study, test subjects were shown images of outdoor scenes as they studied two successive lists of words.  In one case, the test subjects were told to forget the first list once they received the second; in the other, they were instructed to try to remember both.

"Our hope was the scene images would bias the background, or contextual, thoughts that people had as they studied the words to include scene-related thoughts," said Jeremy Manning, an assistant professor of psychological and brain sciences at Dartmouth, who was lead author of the study.  "We used fMRI to track how much people were thinking of scene-related things at each moment during our experiment.  That allowed us to track, on a moment-by-moment basis, how those scene or context representations faded in and out of people's thoughts over time."

What was most interesting about the results is that in the case where the test subjects were told to forget the first list, the brain apparently purged its memory of the specifics of the outdoor scene images the person had been shown as well.  When subjects were told to recall the words on both lists, they recalled the images from both sets of photographs.

"[M]emory studies are often concerned with how we remember rather than how we forget, and forgetting is typically viewed as a 'failure' in some sense, but sometimes forgetting can be beneficial, too," Manning said.  "For example, we might want to forget a traumatic event, such as soldiers with PTSD.  Or we might want to get old information 'out of our head,' so we can focus on learning new material.  Our study identified one mechanism that supports these processes."

What's even cooler is that because the study was done with subjects connected to an fMRI, the scientists were able to see what contextual forgetting looks like in terms of brain firing patterns.  "It's very difficult to specifically identify the neural representations of contextual information," Manning said.  "If you consider the context you experience something in, we're really referring to the enormously complex, seemingly random thoughts you had during that experience.  Those thoughts are presumably idiosyncratic to you as an individual, and they're also potentially unique to that specific moment.  So, tracking the neural representations of these things is extremely challenging because we only ever have one measurement of a particular context.  Therefore, you can't directly train a computer to recognize what context 'looks like' in the brain because context is a continually moving and evolving target.  In our study, we sidestepped this issue using a novel experimental manipulation -- we biased people to incorporate those scene images into the thoughts they had when they studied new words.  Since those scenes were common across people and over time, we were able to use fMRI to track the associated mental representations from moment to moment."

In the second study, a team at UCLA looked at what happens when a memory is connected to an emotional state -- especially an unpleasant one.  What I find wryly amusing about this study is that the researchers chose as their source of unpleasant emotion the stress one feels in taking a difficult math class.

I chuckled grimly when I read this, because I had the experience of completely running into the wall, vis-à-vis mathematics, when I was in college.  Prior to that, I actually had been a pretty good math student.  I breezed through high school math, barely opening a book or spending any time outside of class studying.  In fact, even my first two semesters of calculus in college, if not exactly a breeze, at least made good sense to me and resulted in solid A grades.

Then I took Calc 3.

I'm not entirely sure what happened, but when I hit three-dimensional representations of graphs, and double and triple integrals, and calculating the volume of the intersection of four different solid objects, my brain just couldn't handle it.  I  got a C in Calc 3 largely because the professor didn't want to have to deal with me again.  After that, I sort of never recovered.  I had a good experience with Differential Equations (mostly because of a stupendous teacher), but the rest of my mathematical career was pretty much a flop.

And the worst part is that I still have stress dreams about math classes.  I'm back at college, and I realize that (1) I have a major exam in math that day, and (2) I have no idea how to do what I'll be tested on, and furthermore (3) I haven't attended class for weeks.  Sometimes the dream involves homework I'm supposed to turn in but don't have the first clue about how to do.  Sometimes, I not only haven't studied for the exam I'm about to take, I can't find the classroom.

Keep in mind that this is almost forty years after my last-ever math class. And I'm still having anxiety dreams about it.



What the researchers at UCLA did was to track students who were in an advanced calculus class, keeping track of both their grades and their self-reported levels of stress surrounding the course.  Their final exam grades were recorded -- and then, two weeks after the final, they were given a retest over the same material.

The fascinating result is that stress was unrelated to students' scores on the actual final exam, but the students who reported the most stress did significantly worse on the retest.  The researchers call this "motivated forgetting" -- the brain is ridding itself of memories that are associated with unpleasant emotions, perhaps in order to preserve the person's sense of being intelligent and competent.

"Students who found the course very stressful and difficult might have given in to the motivation to forget as a way to protect their identity as being good at math," said study lead author Gerardo Ramirez.  "We tend to forget unpleasant experiences and memories that threaten our self-image as a way to preserve our psychological well-being.  And 'math people' whose identity is threatened by their previous stressful course experience may actively work to forget what they learned."

So that's today's journey through the recesses of the human mind.  It's a fascinating and complex place, never failing to surprise us, and how amazing it is that we are beginning to understand how it works.  As my dear friend, Professor Emeritus Rita Calvo, Cornell University teacher and researcher in Human Genetics, put it: "The twentieth century was the century of the gene.  The twenty-first will be the century of the brain.  With respect to neuroscience, we are right now about where genetics was in the early 1900s -- we know a lot of the descriptive features of the brain, some of the underlying biochemistry, and other than that, some rather sketchy details about this and that.  We don't yet have a coherent picture of how the brain works.

"But we're heading that direction.  It is only a matter of time till we have a working model of the mind.  How tremendously exciting!"

***********************************

Saber-toothed tigers.  Giant ground sloths.  Mastodons and woolly mammoths.  Enormous birds like the elephant bird and the moa.  North American camels, hippos, and rhinos.  Glyptodons, armadillo relatives as big as a Volkswagen Beetle, with enormous spiked clubs on the ends of their tails.

What do they all have in common?  Besides being huge and cool?

They all went extinct, and all at about the same time -- around 14,000 years ago.  Remnant populations persisted a while longer in some cases (there was a small herd of woolly mammoths on Wrangel Island, off the northeastern coast of Siberia, only four thousand years ago, for example), but these animals went from being the major fauna of North America, South America, Eurasia, and Australia to being completely gone in an astonishingly short time.

What caused their demise?

This week's Skeptophilia book of the week is The End of the Megafauna: The Fate of the World's Hugest, Fiercest, and Strangest Animals, by Ross MacPhee, which considers the question, and looks at various scenarios -- human overhunting, introduced disease, climatic shifts, catastrophes like meteor strikes or nearby supernova explosions.  Seeing how fast things can change is sobering, especially given that we are currently in the Sixth Great Extinction -- a recent paper said that current extinction rates are about the same as they were during the height of the Cretaceous-Tertiary Extinction 66 million years ago, which wiped out all the non-avian dinosaurs and a great many other species at the same time.  

Along the way we get to see beautiful depictions of these bizarre animals by artist Peter Schouten, giving us a glimpse of what this continent's wildlife would have looked like only fifteen thousand years ago.  It's a fascinating glimpse into a lost world, and an object lesson for the people currently creating our global environmental policy -- we're no more immune to the consequences of environmental devastation than the ground sloths and glyptodons were.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!] 


Wednesday, May 5, 2021

Memory boost

There's one incorrect claim that came up in my biology classes more than any other, and that's the old idea that "humans only use 10% of their brain."  Or 5%.  Or 2%.  Often bolstered by the additional claim that Einstein is the one who said it.  Or Stephen Hawking.  Or Nikola Tesla.

Or maybe all three of 'em at once, I dunno.

The problem is, there's no truth to any of it, and no evidence that the claim originated with anyone remotely famous.  That at present we understand only 10% of what the brain is doing -- that I can believe.  That we're using less than 100% of our brain at any given time -- of course.

But the idea that evolution has provided us with these gigantic processing units, which (according to a 2002 study by Marcus Raichle and Debra Gusnard) consume 20% of our oxygen and caloric intake, and then we only ever access 10% of its power -- nope, not buying that.  Such a waste of resources would be a significant evolutionary disadvantage, and would have weeded out the low-brain-use individuals long ago.  (It's sufficient to look at some members of Congress to demonstrate that the last bit, at least, didn't happen.)

But at least it means we may escape the fate of the world in Idiocracy.

And speaking of movies, the 2014 cinematic flop Lucy didn't help matters, as it features a woman who gets poisoned with a synthetic drug that ramps up her brain from its former 10% usage rate to... *gasp*... 100%.  Leading to her acquiring telekinesis and the ability to "disappear within the space/time continuum."

Whatever the fuck that even means.

All urban legends and goofy movies aside, the actual memory capacity of the brain is still the subject of contention in the field of neuroscience.  And for us dilettante science geeks, it's a matter of considerable curiosity.  I know I have often wondered how I can manage to remember the scientific names of obscure plants, the names of distant ancestors, and melodies I heard fifteen years ago, but I routinely have to return to rooms two or three times because I keep forgetting what I went there for.

So I found it exciting to read about a study in the journal eLife, by Terry Sejnowski (of the Salk Institute for Biological Studies), Kristen Harris (of the University of Texas/Austin), et al., entitled "Nanoconnectomic Upper Bound on the Variability of Synaptic Plasticity."  Put more simply, what the team found was that human memory capacity is ten times greater than previously estimated.

In computer terms, our storage ability amounts to one petabyte.  And put even more simply for non-computer types, this translates roughly into "a shitload of storage."

"This is a real bombshell in the field of neuroscience," Sejnowski said.  "We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power.  Our new measurements of the brain's memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web."

The discovery hinges on the fact that there is a hierarchy of size in our synapses.  The brain ramps up or down the size scale as needed, resulting in a dramatic increase in our neuroplasticity -- our ability to learn.

"We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses," said team member Tom Bartol.  "One answer is in the constant adjustment of synapses, averaging out their success and failure rates over time...  For the smallest synapses, about 1,500 events cause a change in their size/ability and for the largest synapses, only a couple hundred signaling events cause a change.  This means that every 2 or 20 minutes, your synapses are going up or down to the next size.  The synapses are adjusting themselves according to the signals they receive."

"The implications of what we found are far-reaching," Sejnowski added.  "Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us."

And the most mind-blowing thing of all is that all of this precision and storage capacity runs on a power of about 20 watts -- less than most light bulbs.

Consider the possibility of applying what scientists have learned about the brain to modeling neural nets in computers.  It brings us one step closer to something neuroscientists have speculated about for years -- the possibility of emulating the human mind in a machine.

"This trick of the brain absolutely points to a way to design better computers," Sejnowski said.  "Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains."

Which is thrilling and a little scary, considering what happened when HAL 9000 in 2001: A Space Odyssey basically went batshit crazy halfway through the movie.



That's a risk that I, for one, am willing to take, even if it means that I might end up getting turned into a Giant Space Baby.

But I digress.

In any case, the whole thing is pretty exciting, and it's reassuring to know that the memory capacity of my brain is way bigger than I thought it was.  Although it still leaves open the question of why, with a petabyte of storage, I still can't remember where I put my car keys.


****************************************

Ever get frustrated by scientists making statements like "It's not possible to emulate a human mind inside a computer" or "faster-than-light travel is fundamentally impossible" or "time travel into the past will never be achieved?"

Take a look at physicist Chiara Marletto's The Science of Can and Can't: A Physicist's Journey Through the Land of Counterfactuals.  In this ambitious, far-reaching new book, Marletto looks at the phrase "this isn't possible" as a challenge -- and perhaps, a way of opening up new realms of scientific endeavor.

Each chapter looks at a different open problem in physics, and considers what we currently know about it -- and, more importantly, what we don't know.  With each one, she looks into the future, speculating about how each might be resolved, and what those resolutions would imply for human knowledge.

It's a challenging, fascinating, often mind-boggling book, well worth a read for anyone interested in the edges of scientific knowledge.  Find out why eminent physicist Lee Smolin calls it "Hugely ambitious... essential reading for anyone concerned with the future of physics."

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]
 

Friday, September 25, 2020

Neurobabble

Confirming something that people like Deepak Chopra and Dr. Oz figured out years ago, researchers at Villanova University and the University of Oregon have shown that all you have to do to convince people is throw some fancy-sounding pseudoscientific jargon into your argument.

The specific area that Diego Fernandez-Duque, Jessica Evans, Colton Christian, and Sara D. Hodges researched was neurobabble, in particular the likelihood of increasing people's confidence in the correctness of an argument if some bogus brain-based explanation was included. Fernandez-Duque et al. write:
Does the presence of irrelevant neuroscience information make explanations of psychological phenomena more appealing?  Do fMRI pictures further increase that allure?  To help answer these questions, 385 college students in four experiments read brief descriptions of psychological phenomena, each one accompanied by an explanation of varying quality (good vs. circular) and followed by superfluous information of various types.  Ancillary measures assessed participants' analytical thinking, beliefs on dualism and free will, and admiration for different sciences.  In Experiment 1, superfluous neuroscience information increased the judged quality of the argument for both good and bad explanations, whereas accompanying fMRI pictures had no impact above and beyond the neuroscience text, suggesting a bias that is conceptual rather than pictorial.  Superfluous neuroscience information was more alluring than social science information (Experiment 2) and more alluring than information from prestigious “hard sciences” (Experiments 3 and 4).  Analytical thinking did not protect against the neuroscience bias, nor did a belief in dualism or free will.  We conclude that the “allure of neuroscience” bias is conceptual, specific to neuroscience, and not easily accounted for by the prestige of the discipline.  
So this may explain why people so consistently fall for pseudoscience as long as it's couched in seemingly technical terminology.  For example, look at the following, an excerpt from an article in which Deepak Chopra is hawking his latest creation, a meditation-inducing device called "DreamWeaver":
About two years ago I got interested in the idea that you could feed light pulses through the brain with your eyes closed and sound and music at a certain frequency.  Your brain waves would dial into it and then you could dial the instrument down so that you would decrease the brain wave frequency from what it is normally in the waking state.  And then you could slowly dial down the brainwave frequency to what it would be in the dream state, which is called theta, and then you even dial further down into delta.
What the hell does "your brain waves would dial into it" mean?   And I would like to suggest to Fernandez-Duque et al. that their next experiment should have to do with people immediately believing claims if they involve the word "frequency."

[Image licensed under the Creative Commons NascarEd, Sleep Stage N3, CC BY-SA 3.0]

Then we have the following twofer -- an excerpt of an article by Deepak Chopra that appeared on Dr. Oz's website:
Try to eat one of these three foods once a day to protect against Alzheimer’s and memory issues.  
Wheat Germ - The embryo of a wheat plant, wheat germ is loaded with B-complex vitamins that can reduce levels of homocysteine, an amino acid linked to stroke, Alzheimer’s disease and dementia.  Sprinkle wheat germ on cereal and yogurt in the morning, or enjoy it on salads or popcorn with a little butter. 
Black Currents [sic] - These dark berries are jam-packed with antioxidants that help nourish the brain cells surrounding the hippocampus.  The darker in color, the more antioxidants black currents [sic] contain.  These fruits are available fresh when in season, or can be purchased dried or frozen year-round. 
Acorn Squash - This beautiful gold-colored veggie contains high amounts of folic acid, a B-vitamin that improves memory as well as the speed at which the brain processes information.
Whenever I read this sort of thing, I'm not inclined to believe it; I'm more inclined to scream, "Source?"  For example, I looked up the whole black currant claim, and the first few sources waxed rhapsodic about black currants' ability to enhance our brain function.  But then I noticed that said sources were all from the Black Currant Foundation (I didn't even know that existed, did you?) and the website blackcurrant.co.nz.  Scrolling down a bit, I found a post on WebMD that was considerably less enthusiastic, saying that it "may be useful in Alzheimer's" (with no mention of exactly how, nor any citations to support the claim) but that it also can lower blood pressure and slow down blood clotting.

So I suppose that the only way to protect yourself against this kind of nonsense is to learn some actual science, and be willing to read some peer-reviewed papers on the subject -- which includes training yourself to recognize which sources are peer-reviewed and which are not.

But doing all this research myself leaves me feeling like I need some breakfast.  Maybe a wheat germ, black currant, and acorn squash stir-fry.  Can't have too many antioxidants, you know, when your hippocampus is having some frequency problems.

**********************************

Author Mary Roach has a knack for picking intriguing topics.  She's written books on death (Stiff), the afterlife (Spook), sex (Bonk), and war (Grunt), each one brimming with well-researched facts, interviews with experts, and her signature sparkling humor.

In this week's Skeptophilia book-of-the-week, Packing for Mars: The Curious Science of Life in Space, Roach takes us away from the sleek, idealized world of Star Trek and Star Wars, and looks at what it would really be like to take a long voyage from our own planet.  Along the way she looks at the psychological effects of being in a small spacecraft with a few other people for months or years, not to mention such practical concerns as zero-g toilets, how to keep your muscles from atrophying, and whether it would actually be fun to engage in weightless sex.

Roach's books are all wonderful, and Packing for Mars is no exception.  If, like me, you've always had a secret desire to be an astronaut, this book will give you an idea of what you'd be in for on a long interplanetary voyage.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Monday, August 3, 2020

The writing brain

As a writer of fiction, I have wondered for years where creative ideas come from.  Certainly a great many of the plots I've written have seemed to spring fully-wrought from my brain (although as any writer will tell you, generating an idea is one thing, and seeing it to fruition quite another).

What has always struck me as odd about all of this is how... unconscious it all feels.  Oh, there's a good bit of front-of-the-brain cognition that goes into it -- background knowledge, visualization of setting, and sequencing, not to mention the good old-fashioned ability to construct solid prose.  But at its base, there's always seemed to me something mysterious about creativity, something ineffable and (dare I say it?) spiritual.  It is no surprise, even to me, that many have ascribed the source of creativity to divine inspiration or, at least, to a collective unconscious.

Take, for example, the origin of the novel I just completed two weeks ago (well, the first draft, anyhow).  Descent into Ulthoa is a dark, Lovecraftian piece about a haunted forest and a man obsessed with finding out what happened to his identical twin brother, who vanished ten years earlier on a hiking trip, but the inspiration for it seemed to come out of nowhere.  In fact, at the time, I wasn't even thinking about writing at all -- but was suddenly hit by a vivid, powerful image that seemed to beg for a story.  (If you want to read more about my experience of having that idea wallop me over the head, I did a post about it over at my fiction blog last August.)

So something is going on neurologically when stuff like this happens, but what?  Martin Lotze, a neuroscientist at the University of Greifswald (Germany), has taken the first steps toward understanding what is happening in the brains of creative writers -- and the results that he and his team have uncovered are fascinating.

One of the difficulties in studying the creative process is that during any exercise of creativity, the individual generally has to be free to move around.  Writing, especially, would be hard to do in an fMRI machine, where your head has to be perfectly still, and your typical writing device, a laptop, would be first wiped clean and then flung across the room by the electromagnets.  But Lotze and his team rigged up a setup wherein subjects could lie flat, with their heads encased in the fMRI tube, and have their arms supported so that they could write with the tried-and-true paper-and-pencil method, using a set of mirrors to see what they were doing.

[Image courtesy of Martin Lotze and the University of Greifswald]

Each subject was given a minute to brainstorm, and then two minutes to write.  While all of the subjects activated their visual centers and hippocampus (a part of the brain involved in memory and spatial navigation) during the process, there was a striking difference between veteran and novice writers.  Novice writers tended to activate their visual centers first; brainstorming, for them, started with thinking of images.  Veteran writers, on the other hand, started with their speech production centers.

"I think both groups are using different strategies,” Lotze said.  "It’s possible that the novices are watching their stories like a film inside their heads, while the writers are narrating it with an inner voice."

The other contrast between veterans and novices was in the level of activity of the caudate nucleus, a part of the brain involved in the coordination of activities as we become more skilled.  The higher the level of activity in the caudate nucleus, the more fluent we have become at the task, and the less conscious effort it takes -- leading to the conclusion (no surprise to anyone who is a serious writer) that writing, just like anything else, becomes better and easier the more you do it.  Becoming an excellent writer, like becoming a concert pianist or a star athlete, requires practice.

All of this is also interesting from the standpoint of artificial intelligence -- because if you don't buy the Divine Inspiration or Collective Unconscious Models, or something like them (which I don't), then any kind of creative activity is simply the result of patterns of neural firings -- and therefore theoretically should be able to be emulated by a computer.  I say "theoretically," because our current knowledge of AI is in its most rudimentary stages.  (As a friend of mine put it, "True AI is ten years in the future, and always will be.")  But just knowing what is happening in the brains of writers is the first step toward both understanding it, and perhaps generating a machine that is capable of true creativity.

All of that, of course, is far in the future (maybe even more than ten years), and Lotze himself is well aware that this is hardly the end of the story.  As for me, I find the whole thing fascinating, and a little humbling -- that something so sophisticated is going on in my skull when I think up a scene in a story.  It brings to mind something one of my neurology students once said, after a lecture on the workings of the brain: "My brain is so much smarter than me, I don't know how I manage to think at all!"

Indeed.

************************************

This week's Skeptophilia book recommendation is a fun and amusing discussion of a very ominous topic: how the universe will end.

In The End of Everything (Astrophysically Speaking) astrophysicist Katie Mack takes us through all the known possibilities -- a "Big Crunch" (the Big Bang in reverse), the cheerfully-named "Heat Death" (the material of the universe spread out at uniform density and a uniform temperature of only a few degrees above absolute zero), the terrifying -- but fortunately extremely unlikely -- Vacuum Decay (where the universe tears itself apart from the inside out), and others even wilder.

The cool thing is that all of it is scientifically sound.  Mack is a brilliant theoretical astrophysicist, and her explanations take cutting-edge research and bring it to a level a layperson can understand.  And along the way, her humor shines through, bringing a touch of lightness and upbeat positivity to a subject that will take the reader to the edges of the known universe and the end of time.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Monday, February 17, 2020

The universal language

Sometimes I have thoughts that blindside me.

The last time that happened was three days ago, while I was working in my office and our elderly coonhound, Lena, was snoozing on the floor.  Well, as sometimes happens to dogs, she started barking and twitching in her sleep, and followed it up with sinister-sounding growls -- all the more amusing because while awake, Lena is about as threatening as your average plush toy.

So my thought, naturally, is to wonder what she was dreaming about.  Which got me thinking about my own dreams, and recalling some recent ones.  I remembered some images, but mostly what came to mind were narratives -- first I did this, then the slimy tentacled monster did that.

That's when the blindside happened.  Because Lena, clearly dreaming, was doing all that without language.

How would thinking occur without language?  For almost all humans, our thought processes are intimately tied to words.  In fact, the experience of having an experience or thought that isn't describable using words is so unusual that we have a word for it -- ineffable.

Mostly, though, our experience is completely, um, effable.  So much so that trying to imagine how a dog (or any other animal) experiences the world without language is, for me at least, nearly impossible.

What's interesting is how powerful this drive toward language is.  There have been studies of pairs of "feral children" who grew up together but with virtually no interaction with adults, and in several cases those children invented spoken languages with which to communicate -- each complete with its own syntax, morphology, and phonetic structure.

A fascinating new study that came out last week in the Proceedings of the National Academy of Sciences, detailing research by Manuel Bohn, Gregor Kachel, and Michael Tomasello of the Max Planck Institute for Evolutionary Anthropology, showed that you don't even need the extreme conditions of feral children to induce the invention of a new mode of symbolic communication.  The researchers set up Skype conversations between monolingual English-speaking children in the United States and monolingual German-speaking children in Germany, but simulated a computer malfunction where the sound didn't work.  They then instructed the children to communicate as best they could anyhow, and gave them some words/concepts to try to get across.

They started out with some easy ones.  "Eating" resulted in the child miming eating from a plate, unsurprisingly.  But they moved to harder ones -- like "white."  How do you communicate the absence of color?  One girl came up with an idea -- she was wearing a polka-dotted t-shirt, and pointed to a white dot, and got the idea across.

But here's the interesting part.  When the other child later in the game had to get the concept of "white" across to his partner, he didn't have access to anything white to point to.  He simply pointed to the same spot on his shirt that the girl had pointed to earlier -- and she got it immediately.

Language is defined as arbitrary symbolic communication.  Arbitrary because with the exception of a few cases like onomatopoeic words (bang, pow, ping, etc.) there is no logical connection between the sound of a word and its referent.  Well, here we have a beautiful case of the origin of an arbitrary symbol -- in this case, a gesture -- that gained meaning only because the recipient of the gesture understood the context.

I'd like to know if such a gesture-language could gain another characteristic of true language -- transmissibility.  "It would be very interesting to see how the newly invented communication systems change over time, for example when they are passed on to new 'generations' of users," said study lead author Manuel Bohn, in an interview with Science Daily.  "There is evidence that language becomes more systematic when passed on."

Because this, after all, is when languages start developing some of the peculiarities (also seemingly arbitrary) that led Edward Sapir and Benjamin Whorf to develop the hypothesis that now bears their names -- that the language we speak alters our brains and changes how we understand abstract concepts.  In K. David Harrison's brilliant book The Last Speakers, he tells us about a conversation with some members of a nomadic tribe in Siberia who always described positions of objects relative to the four cardinal directions -- so my coffee cup wouldn't be on my right, it would be south of me.  When Harrison tried to explain to his Siberian friends how we describe positions, at first he was greeted with outright bafflement.

Then, they all erupted in laughter.  How arrogant, they told him, that you see everything as relative to your own body position -- as if when you turn around, suddenly the entire universe shifts to compensate for your movement!


Another interesting example of this was the subject of a 2017 study by linguists Emanuel Bylund and Panos Athanasopoulos, which focused not on our experience of space but of time.  And they found something downright fascinating.  Some languages (like English) are "future-in-front," meaning we think of the future as lying ahead of us and the past behind us, turning time into something very much like a spatial dimension.  Other languages retain the spatial aspect, but reverse the direction -- such as the Peruvian language Aymara.  For Aymara speakers, the past is in front, because you can remember it, just as you can see what's in front of you.  The future is behind you -- therefore invisible.

Mandarin takes the spatial axis and turns it on its head -- the future is down, the past is up (so the literal translation of the Mandarin expression for "next week" is "down week").  Asked to order photographs of someone in childhood, adolescence, adulthood, and old age, Mandarin speakers will place them vertically, with the youngest on top.  English and Swedish speakers tend to think of time as a line running from left (past) to right (future); Spanish and Greek speakers tend to picture time as a spatial volume, as if it were something filling a container (so emptier = past, fuller = future).

All of which underlines how fundamental to our thinking language is.  And further baffles me when I try to imagine how other animals think.  Because whatever Lena was imagining in her dream, she was clearly understanding and interacting with it -- even if she didn't know to attach the word "squirrel" to the concept.

*******************************

This week's book recommendation is a fascinating journey into a topic we've visited often here at Skeptophilia -- the question of how science advances.

In The Second Kind of Impossible, Princeton University physicist Paul Steinhardt describes his thirty-year-long quest to prove the existence of a radically new form of matter, something he terms quasicrystals, materials that are ordered but non-periodic.  Faced for years with scoffing from other scientists, who pronounced the whole concept impossible, Steinhardt persisted, ultimately demonstrating that an aluminum-manganese alloy he and fellow physicist Luca Bindi created had all the characteristics of a quasicrystal -- a discovery that earned them the 2018 Aspen Institute Prize for Collaboration and Scientific Research.

Steinhardt's book, however, doesn't bog down in technical details.  It reads like a detective story -- a scientist's search for evidence to support his explanation for a piece of how the world works.  It's a fascinating tale of persistence, creativity, and ingenuity -- one that ultimately led to a reshaping of our understanding of matter itself.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]





Wednesday, October 23, 2019

A chat at the pub

When I'm out in a crowded bar, I struggle with something that I think a lot of us do -- trying to isolate the voice of the person I'm talking to from all of the background noise.

I can do it, but it's a struggle.  When I'm tired, or have had one too many pints of beer, I find that my ability to hear what my friend is saying suddenly disappears, as if someone had flipped off a switch.  His voice is swallowed up by a cacophony of random noise in which I literally can't isolate a single word.

Usually my indication that it's time to call it a night.

[Image is in the Public Domain]

It's an interesting question, though, how we manage to do this at all.  Think about it; the person you're listening to is probably closer to you than the other people in the pub, but the others might well be louder.  Add to that the cacophony of glasses clinking and music blaring and whatever else might be going on around you, and the likelihood is that your friend's overall vocal volume is probably about the same as anyone or anything else picked up by your ears.

Yet most of us can isolate that one voice and hear it distinctly, and tune out all of the other voices and ambient noise.  So how do you do this?

Scientists at Columbia University got a glimpse of how our brains might accomplish this amazing task in a set of experiments described in a paper that appeared in the journal Neuron this week.  In "Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception," by James O’Sullivan, Jose Herrero, Elliot Smith, Catherine Schevon, Guy M. McKhann, Sameer A. Sheth, Ashesh D. Mehta, and Nima Mesgarani, we find out that one part of the brain -- the superior temporal gyrus (STG) -- seems to be capable of boosting the gain of a sound we want to pay attention to, and to do so virtually instantaneously.

The auditory input we receive is a complex combination of acoustic vibrations in the air, all arriving at the same time, so sorting them out is no mean feat.  (Witness how long it's taken to develop good voice transcription software -- which, even now, is fairly slow and inaccurate.)  Yet your brain can do it flawlessly (well, for most of us, most of the time).  What O'Sullivan et al. found was that once the signals are received by the auditory cortex, they're passed through two regions -- first Heschl's gyrus (HG), then the STG.  HG seems to create a multi-dimensional neural representation of what you're hearing, but doesn't really pick out one set of sounds as more important than another.  The STG, though, is able to sort through that tapestry of electrical signals and amplify the ones it decides are more important.

"We’ve long known that areas of auditory cortex are arranged in a hierarchy, with increasingly complex decoding occurring at each stage, but we haven’t observed how the voice of a particular speaker is processed along this path," said study lead author James O’Sullivan in a press release.  "To understand this process, we needed to record the neural activity from the brain directly...  We found that that it’s possible to amplify one speaker’s voice or the other by correctly weighting the output signal coming from HG.  Based on our recordings, it’s plausible that the STG region performs that weighting."

The research has a lot of potential applications, not only for computerized voice recognition but for guiding the design of devices to help the hearing impaired.  Traditional hearing aids amplify everything equally, so a hearing-impaired person in a noisy environment has to turn up the volume to hear what (s)he wants to listen to -- which can make the ambient background noise deafeningly loud.  If software can be developed that emulates what the STG does, it could make for a much more natural-sounding and comfortable experience.

All of which is fascinating, isn't it?  The more we learn about our own brains, the more astonishing they seem.  Abilities we take entirely for granted are accomplished by staggeringly complex arrays of neurons in that 1.3-kilogram "meat machine" sitting inside our skulls, often using mechanisms that still amaze me even after thirty-odd years of studying neuroscience.

And it leaves me wondering what we'll find out about our own nervous systems in the next thirty years.

**************************************

In keeping with Monday's post, this week's Skeptophilia book recommendation is about one of the most enigmatic figures in mathematics: the Indian prodigy Srinivasa Ramanujan.  Ramanujan was remarkable not only for his adeptness in handling numbers, but for his insight; one of his most famous moments was the discovery of "taxicab numbers" (I'll leave you to read the book to find out why they're called that), numbers that are expressible as the sum of two cubes in two different ways.

For example, 1,729 is the sum of 1 cubed and 12 cubed; it's also the sum of 9 cubed and 10 cubed.
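If you want to see for yourself why 1,729 earns its special status, here's a quick brute-force sketch in Python (my own toy code, nothing from Kanigel's book) that finds every number up to a given limit expressible as the sum of two positive cubes in at least two different ways:

from collections import defaultdict
from itertools import combinations_with_replacement

def two_cube_sums(limit):
    """Map each n <= limit to its (a, b) pairs with a**3 + b**3 == n,
    keeping only the numbers with at least two such representations."""
    max_base = round(limit ** (1 / 3))
    sums = defaultdict(list)
    for a, b in combinations_with_replacement(range(1, max_base + 1), 2):
        total = a**3 + b**3
        if total <= limit:
            sums[total].append((a, b))
    return {n: pairs for n, pairs in sorted(sums.items()) if len(pairs) >= 2}

print(two_cube_sums(20000))
# {1729: [(1, 12), (9, 10)], 4104: [(2, 16), (9, 15)], 13832: [(2, 24), (18, 20)]}

Run it and 1,729 pops out first, followed by 4,104 and 13,832; rather more work than it took Ramanujan.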

What's fascinating about Ramanujan is that when he discovered this, it just leapt out at him.  He looked at 1,729 and immediately recognized that it had this odd property.  When he shared it with a friend, he was kind of amazed that the friend didn't jump to the same realization.

"How did you know that?" the friend asked.

Ramanujan shrugged.  "It was obvious."

The Man Who Knew Infinity by Robert Kanigel is the story of Ramanujan, whose life was cut short by tuberculosis at the age of 32.  It's a brilliant, intriguing, and deeply perplexing book, a look into the mind of a savant -- someone so much better at a particular subject than most of us that his ability is hard even to conceive of.  But Kanigel doesn't just hold up Ramanujan as some kind of odd specimen; he looks at the human side of a man whose phenomenal abilities put him in a class by himself.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]






Tuesday, March 12, 2019

A consummation devoutly to be wished

Like a lot of people, I'm struggling right now with sleep loss because of the silly switch from Standard Time to Daylight Saving Time, a change I've heard compared to "cutting the top off a blanket and sewing the piece onto the bottom to make it longer."

Don't get me wrong, I like the fact that it's still light when I get home from work, but given how far north I live, that'd have happened eventually anyhow.  And it seems to me that since a lot of people like having more daylight hours after work, it'd make sense just to keep it that way and not return to Standard Time in November, further fucking up everyone's biological clock.

I mean, I have enough trouble sleeping as it is.  I've been an insomniac since my teenage years.  I never have trouble falling asleep -- my problem is staying asleep.  I'll wake up at 1:30 in the morning with my thoughts galloping full tilt, or (more often) with a piece of some song running on a tape loop through my head, like a couple of nights ago when my brain thought it'd be fun to sing the Wings song "Silly Love Songs" to me over and over.

I hated that song even before this, but now I really loathe it.

[Image: "Sleeping man" by Evgeniy Isaev (Moscow, Russia), licensed under the Creative Commons CC BY 2.0]

In any case, it was with great interest that I read some recent research from Bar-Ilan University (Israel) that has elucidated the purpose of sleep, a question that up till now has been something of a mystery.

In "Sleep Increases Chromosome Dynamics to Enable Reduction of Accumulating DNA Damage in Single Neurons," by David Zada, Tali Lerer-Goldshtein,  Irina Bronshtein, Yuval Garini, and Lior Appelbaum, which appeared last week in Nature, the authors write:
Sleep is essential to all animals with a nervous system.  Nevertheless, the core cellular function of sleep is unknown, and there is no conserved molecular marker to define sleep across phylogeny.  Time-lapse imaging of chromosomal markers in single cells of live zebrafish revealed that sleep increases chromosome dynamics in individual neurons but not in two other cell types.  Manipulation of sleep, chromosome dynamics, neuronal activity, and DNA double-strand breaks (DSBs) showed that chromosome dynamics are low and the number of DSBs accumulates during wakefulness.  In turn, sleep increases chromosome dynamics, which are necessary to reduce the amount of DSBs.  These results establish chromosome dynamics as a potential marker to define single sleeping cells, and propose that the restorative function of sleep is nuclear maintenance.
"It's like potholes in the road," said study co-author Lior Appelbaum in an interview with Science Daily.  "Roads accumulate wear and tear, especially during daytime rush hours, and it is most convenient and efficient to fix them at night, when there is light traffic."

This repair function is critical for cellular and organismal health.  If mutations and chromosomal breaks aren't fixed, they can trigger the death of the cell -- which, in the case of neurons, can wreak havoc.  You have to wonder whether some of the age-related degradation of memory, not to mention more acute cases of dementia, is correlated with a reduction in this sleep-induced genetic repair.

"We've found a causal link between sleep, chromosome dynamics, neuronal activity, and DNA damage and repair with direct physiological relevance to the entire organism," Appelbaum said.  "Sleep gives an opportunity to reduce DNA damage accumulated in the brain during wakefulness...  Despite the risk of reduced awareness to the environment, animals -- ranging from jellyfish to zebrafish to humans -- have to sleep to allow their neurons to perform efficient DNA maintenance, and this is possibly the reason why sleep has evolved and is so conserved in the animal kingdom."

What it doesn't explain is why some of us have so damn much trouble actually doing what we evolved to do.  Shutting my brain off so it can do some road maintenance is really appealing, but for some reason it just doesn't cooperate most nights.


Which explains why I'm so tired this morning.  But what's wrong with that, I'd like to know?  So here I go AGAAAIIIIINNNNN....

**************************************

This week's Skeptophilia book recommendation is an entertaining one -- Bad Astronomy by astronomer and blogger Phil Plait.  Covering everything from Moon landing "hoax" claims to astrology, Plait takes a look at how credulity and wishful thinking have given rise to loony ideas about the universe we live in, and how those ideas simply refuse to die.

Along the way, Plait makes sure to teach some good astronomy, explaining why you can't hear sounds in space, why stars twinkle but planets don't, and how we've used indirect evidence to create a persuasive explanation for how the universe began.  His lucid style is both informative and entertaining, and although you'll sometimes laugh at how goofy the human race can be, you'll come away impressed by how much we've figured out.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]