Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label cognition. Show all posts

Tuesday, December 10, 2019

Misremembering the truth

There are two distinct, but similar-sounding, cognitive biases that I've written about many times here at Skeptophilia because they are such tenacious barriers to rational thinking.

The first, confirmation bias, is our tendency to uncritically accept claims when they fit with our preconceived notions.  It's why a lot of conservative viewers of Fox News and liberal viewers of MSNBC sit there watching and nodding enthusiastically without ever stopping and saying, "... wait a moment."

The other, dart-thrower's bias, is more built-in.  It's our tendency to notice outliers (because of their obvious evolutionary significance as danger signals) and ignore, or at least underestimate, the ordinary as background noise.  The name comes from the thought experiment of being in a bar while there's a darts game going on across the room.  You'll tend to notice the game only when there's an unusual throw -- a bullseye, or perhaps impaling the bartender in the forehead -- and not even be aware of it otherwise.

Well, we used to think that dart-thrower's bias was built into our cognitive processing system and confirmation bias more "on the surface" -- and the latter therefore more culpable, conscious, and/or controllable.  Now, it appears that confirmation bias might be just as hard-wired into our brains as dart-thrower's bias is.

A paper appeared this week in Human Communication Research describing research conducted by a team led by Jason Coronel of Ohio State University.  In "Investigating the Generation and Spread of Numerical Misinformation: A Combined Eye Movement Monitoring and Social Transmission Approach," Coronel, along with Shannon Poulsen and Matthew D. Sweitzer, did a fascinating series of experiments showing that we not only tend to accept information that agrees with our previous beliefs without question, but honestly misremember information that disagrees -- and we misremember it in such a way that, in our memories, it further confirms our beliefs!

The location of memories (from Memory and Intellectual Improvement Applied to Self-Education and Juvenile Instruction, by Orson Squire Fowler, 1850) [Image is in the Public Domain]

What Coronel and his team did was to present 110 volunteers with passages containing true numerical information on social issues (such as support for same-sex marriage and rates of illegal immigration).  In some cases, the passages agreed with what (according to polls) most people believe to be true, such as that the majority of Americans support same-sex marriage.  In other cases, the passages contained information that (while true) is widely thought to be untrue -- such as the fact that illegal immigration across the Mexican border has been dropping for years and is now at its lowest rates since the mid-1990s.

Across the board, people tended to recall the information that aligned with the conventional wisdom correctly, and the information that didn't incorrectly.  Further -- and what makes this experiment even more fascinating -- is that when people read the unexpected information, data that contradicted the general opinion, eye-tracking monitors recorded that they hesitated while reading, as if they recognized that something was strange.  In the immigration passage, for example, they read that the number of immigrants had decreased from 12.8 million in 2007 to 11.7 million in 2014, and the readers' eyes bounced back and forth between the two numbers as if their brains were saying, "Wait, am I reading that right?"

So they spent longer on the passage that conflicted with what most people think -- and still tended to remember it incorrectly.  In fact, most of the people who misremembered had recalled the two figures themselves correctly -- 12.8 million and 11.7 million -- showing that they'd paid attention rather than scoffing and glossing over something they thought was incorrect.  But when questioned afterward, they remembered the numbers backwards, as if the passage had actually supported what they'd believed prior to the experiment!

If that's not bad enough, Coronel's team then ran a second experiment, in which the test subjects read the passage, then had to repeat the gist to another person, who passed it to another, and so on.  (Remember the elementary school game of "Telephone?")  Not only did the data get flipped -- usually in the first transfer -- but with each subsequent retelling, the difference between the two numbers got greater and greater (bolstering the false but popular opinion even more strongly).  In the case of the immigration statistics, the gap between the 2007 and 2014 figures not only changed direction, but by the end of the game it had widened from 1.1 million to 4.7 million.
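The pattern the second experiment uncovered -- a flip toward the popular belief, followed by exaggeration -- is easy to mimic with a toy simulation.  To be clear, this is my own illustrative model, not the authors'; the swap probability and exaggeration factor below are invented:

```python
import random

def transmit(a, b, swap_prob=0.9, drift=0.3):
    """One "telephone" step: a biased re-teller passes along two figures.

    If the pair contradicts the popular belief (immigration rising, i.e.
    the later figure should be bigger), the re-teller tends to swap them;
    either way, the remembered gap tends to grow a little.  Both parameters
    are made up for illustration.
    """
    if a > b and random.random() < swap_prob:
        a, b = b, a                      # flip to match the popular belief
    gap = abs(b - a) * (1 + drift * random.random())
    mid = (a + b) / 2
    if a <= b:
        return mid - gap / 2, mid + gap / 2
    return mid + gap / 2, mid - gap / 2

random.seed(1)
pair = (12.8, 11.7)                      # the true 2007 and 2014 figures (millions)
for _ in range(6):
    pair = transmit(*pair)
print(f"remembered as: {pair[0]:.1f} -> {pair[1]:.1f}")
# after a few retellings the order has flipped and the gap has widened
```

Even with a fairly gentle bias at each step, a handful of retellings reliably reverses the direction of the trend and inflates the difference -- which is the shape of the result Coronel's team reported.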

This gives you an idea what we're up against in trying to counter disinformation campaigns.  And it also illustrates that I was wrong in one of my preconceived notions: that people falling for confirmation bias are somehow guilty of deliberately locking themselves into an echo chamber.  Apparently, both dart-thrower's bias and confirmation bias are somehow built into the way we process information.  We become so certain we're right that our brains subconsciously reject any evidence to the contrary.

Why our brains are built this way is a matter of conjecture.  I wonder if perhaps it might be our tribal heritage at work; that conforming to the norm, and therefore remaining a member of the tribe, has a greater survival value than being the maverick who sticks to his/her guns about a true but unpopular belief.  That's pure speculation, of course.  But what it illustrates is that once again, our very brains are working against us in fighting Fake News -- which these days is positively frightening, given how many powerful individuals and groups are, in a cold and calculated fashion, disseminating false information in an attempt to mislead us, frighten us, or anger us, and so maintain their positions of power.

***********************

This week's Skeptophilia book of the week is brand new: Brian Clegg's wonderful Dark Matter and Dark Energy: The Hidden 95% of the Universe.  In this book, Clegg outlines "the biggest puzzle science has ever faced" -- the evidence for the substances that provide the majority of the gravitational force holding the nearby universe together, while simultaneously making the universe as a whole fly apart -- and which have (thus far) completely resisted all attempts to ascertain their nature.

Clegg also gives us some of the cutting-edge explanations physicists are now proposing, and the experiments being done to test them.  The science is sure to change quickly -- every week we seem to hear about new data on the dark 95% of what's around us -- but if you want the most recently crafted lens on the subject, this is it.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]





Saturday, June 3, 2017

Face card

I ran into an article in the New York Times a couple of days ago that begins with the line, "The brain has an amazing capacity for recognizing faces."

This made me snort derisively, because as I've mentioned before, I have prosopagnosia -- face blindness.  I'm not completely face blind, as the eminent writer and neurologist Oliver Sacks was -- Sacks, after all, didn't even recognize his own face in a mirror.  I'm not quite that badly off, but even so, I don't have anywhere near instantaneous facial recognition.  I compensate by being good at remembering voices, and paying attention to things like gait and stance.  Beyond that, I tend to remember people as lists of features -- he's the guy with the scar through one eyebrow, she's the one with black hair and three piercings in her left ear.  But it's a front-of-the-brain, conscious cognitive thing, not quick and subconscious like it (apparently) is with most people.

And even that strategy can fail, if someone changes hair styles, gets new glasses, or begins to dress differently.  Then I have to rely on my other strategies, as I did a couple of days ago in our local pharmacy.  The check-out clerk smiled at me, and I said hi and greeted her by name.  She was a former student who had taken my neuroscience class a couple of years ago, and she grinned at me and said, "I thought you didn't recognize people's faces."

"I don't," I said.  "You're wearing a name tag."

[image courtesy of the Wikimedia Commons]

Despite my scornful snort at the first line of the article in the Times, I was pretty interested in its content, not least because it gives me some insight into my own peculiar inability.  The article describes the research of Le Chang and Doris Y. Tsao of Caltech (published this week in Cell), who, by recording the activity of individual neurons in the face patches of monkey brains, have begun to elucidate how the brain processes faces.  Chang and Tsao write:
Primates recognize complex objects such as faces with remarkable speed and reliability.  Here, we reveal the brain’s code for facial identity.  Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches.  By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space.  Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces.  Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells.  Our work suggests that other objects could be encoded by analogous metric coordinate systems.
Put more simply, the brain seems to encode facial recognition in a fairly small number of cells -- possibly as few as 10,000 -- which fire in a distinctive pattern depending on the deviation of the face being observed, on various metrics, from an "average" or "baseline" face.  This creates what Chang and Tsao call a "face space" -- a mapping between facial features and a set of firing patterns in the facial recognition module in the brain.

Chang and Tsao got so good at discerning the "face space" in a monkey's brain that they could tell which face photograph a monkey was looking at simply by watching which neurons fired!

So what that means is that we don't have neurons devoted to particular faces; there is no "Jennifer Aniston cell," as the concept has often been called.  We simply respond to the dimensions and features of the face we're observing and map them into "face space," and that allows us to uniquely identify a nearly infinite number of different faces.
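The linear "face space" code Chang and Tsao describe can be sketched in a few lines of numpy.  Everything here is a toy stand-in -- the dimensions, cell count, and random axes are invented for illustration -- but it captures the two key claims: a population of projection-coding cells lets you decode any face, while no single cell pins down identity:

```python
import numpy as np

rng = np.random.default_rng(0)

n_dims, n_cells = 50, 200                  # toy sizes, not the paper's
axes = rng.normal(size=(n_cells, n_dims))  # each cell's preferred axis

def firing_rates(face):
    """Each cell's rate is proportional to the face's projection onto its axis."""
    return axes @ face

def decode(rates):
    """Recover the face from the population response via least squares."""
    face_hat, *_ = np.linalg.lstsq(axes, rates, rcond=None)
    return face_hat

face = rng.normal(size=n_dims)             # a face, as a point in "face space"
print(np.allclose(decode(firing_rates(face)), face))   # True: the code is invertible

# No single cell encodes identity: a drastically different face that differs
# only along directions orthogonal to cell 0's axis drives cell 0 identically.
v = rng.normal(size=n_dims)
v -= (v @ axes[0]) / (axes[0] @ axes[0]) * axes[0]     # make v perpendicular to axes[0]
other = face + 5.0 * v
print(abs(firing_rates(face)[0] - firing_rates(other)[0]) < 1e-6)  # True
```

The second half is the "engineered faces with drastically different appearance that elicited identical responses in single face cells" result from the abstract: each cell is blind to everything orthogonal to its own axis, so identity lives only in the ensemble.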

Tsao suspects that there are other types of encoding in the brain that will turn out to work the same way.  "[There is in] neuroscience a sense of pessimism that the brain is similarly a black box," she said. "Our paper provides a counterexample.  We’re recording from neurons at the highest stage of the visual system and can see that there’s no black box.  My bet is that that will be true throughout the brain."

Which makes me wonder where this whole system is going wrong in my own brain.  I certainly see, and can recall, facial features; it is not (as I thought when I was younger) that I am simply inattentive or unobservant.  But somehow, even knowing features doesn't create any kind of recognizable image for me.  For people I know well, I could list off features -- round face, crooked nose, wavy brown hair, prominent chin -- but those don't come together in my brain into any sort of visual image.  The result is the odd situation that for people I know, I can often describe them, but I can't picture them at all.

So anyhow, if at some point I pass you on the street and don't say hi, or even make eye contact and have no reaction, I'm not being unfriendly, you haven't somehow pissed me off, and I'm not daydreaming.  I honestly don't know who you are.  It'd be nice if, like my former student, everyone went around wearing name tags, but failing that, I'll just have to keep muddling along in a sea of unfamiliar faces.

Thursday, October 6, 2016

Gaming the brain

I think all of us can relate to the desire to have our brains work better.

We forget things.  We get distracted.  We let worry keep us from enjoying our days and from sleeping at night.  And that's not even counting the more serious problems that some of us have to deal with -- depression, anxiety disorders, bipolar disorder, schizophrenia, dementia... the list goes on and on.

So it's only to be expected that we're attracted to anything that promises to help us out in the Mental Faculties Department.  This has given rise to companies like Lumosity, which use a variety of brain-stimulating games to activate your neural circuitry -- and, the claim goes, trigger an overall improvement in your mental acuity.

The problem is, they don't work as advertised.  Playing a brain game improves one thing and one thing only: your ability to play that game.  That was the finding of a study published last week in the journal Psychological Science in the Public Interest, describing work by seven researchers headed by Daniel J. Simons of the University of Illinois at Urbana-Champaign.  Disturbingly, not only did Simons's team find little in the way of positive results, they found poor experimental design in the previous studies that had reported such results.  Simons et al. write:
Based on this examination, we find extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance.  We also find that many of the published intervention studies had major shortcomings in design or analysis that preclude definitive conclusions about the efficacy of training, and that none of the cited studies conformed to all of the best practices we identify as essential to drawing clear conclusions about the benefits of brain training for everyday activities.
Simons agrees that it's a discouraging result.  "It’s disappointing that the evidence isn’t stronger," Simons said in an interview in Science Around Michigan.  "It would be really nice if you could play some games and have it radically change your cognitive abilities, but the studies don’t show that on objectively measured real-world outcomes."

[image courtesy of the Wikimedia Commons]

If that weren't bad enough, a couple of weeks ago there was an announcement from a researcher that another brain-improvement strategy -- "power poses" -- also shows little effect.  This one achieved wide acclaim when one of its chief proponents, social psychologist Amy Cuddy, spoke about it on one of the most watched TED talks -- at present, it's been viewed over 36 million times.  The idea is that adopting a body pose of strength and courage affects your hormone levels (especially testosterone and cortisol), which then feeds back and positively affects your mood and anxiety levels; likewise, adopting a submissive or weak pose generates the opposite effects. 

The problem is, attempts in January to replicate Cuddy's experiments failed to generate results, and (most damning of all) one of the co-authors of the original study, Dana Carney, has stated outright that "I do not believe that 'power pose' effects are real."  She said the original study made use of the statistical fudging technique called "p-hacking," which (to oversimplify, but give you the general gist) amounts to running a variety of tests and only reporting on the ones that generated positive results.
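Here's a minimal simulation of why p-hacking works (entirely my own toy example, not from the study): draw two groups from the same distribution, so there is no real effect, run the comparison many times, and see how often chance alone crosses the usual significance threshold.  Report only those runs, and you've manufactured a "finding" out of pure noise:

```python
import random
import statistics

random.seed(42)

def null_experiment(n=30):
    """Two groups drawn from the SAME distribution: any 'effect' is pure chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se   # ~ a t statistic

trials = [null_experiment() for _ in range(1000)]
rate = sum(abs(t) > 2 for t in trials) / len(trials)   # |t| > 2 is roughly p < 0.05
print(f"{rate:.0%} of null experiments look 'significant'")
# Measure 20 different outcomes and report only the hits: the chance that at
# least one comes up "significant" is about 1 - 0.95**20, roughly 64%.
```

The individual test is behaving exactly as designed -- a ~5% false-positive rate -- but selectively reporting across many tests turns that 5% into a near-guarantee of a publishable-looking result.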

All of which is not intended to stop you from playing brain games or doing power poses.  I still think there's something to be said for thinking positively, and if you approach life playfully and optimistically you're much more likely to enjoy it and (therefore) be successful at what you do.  (As my dad used to say, I'd rather be an optimist who is wrong than a pessimist who is right.)

But as far as actual measurable results in cognition, memory, or hormone levels?  Apparently not.  Which is disappointing, but perhaps not surprising.  Our brains are tremendously complex organs, and it's always struck me as a little unlikely that deeply ingrained neural firing patterns could be so readily malleable.  As usual, the simplistic approach seems to be appealing... but wrong.

Friday, August 19, 2016

I'm sure I already told you about this...

One of the most peculiar sensations in the world is déjà vu.  I typically have the auditory version -- I am completely convinced that I have had this conversation before.  Others tend to have more visual déjà vu, having a certainty that they've been in a place where they know they've never been.

I'd heard a number of explanations of the phenomenon -- that it was a memory being triggered subliminally by another sense, or that it came from our sensory processing and cognitive processing running at different speeds, so that by the time everything was integrated it created a false memory of an experience that had already occurred.  Neither of those has ever sounded all that convincing to me.

[image courtesy of the Wikimedia Commons]

Nor, I must add, did all of the woo-woo explanations, such as the idea that déjà vu was precognition, or a visitation by a ghost, or the recollection of an experience from a previous life.

Now, cognitive neuroscientists Josephine Urquhart and Akira O'Connor of the University of St. Andrews (Scotland) have devised an experiment that gives us at least a window on what might be going on -- by creating a situation where déjà vu can be induced.

The setup is simple and elegant.  You give your test subjects a list of words to memorize, and include several that have to do with sleeping -- bed, blankets, dreams, pillow.  "Sleep" itself is not included.  After studying the list, you ask the subjects if there were any words on the list beginning with the letter "s" (there weren't).  Afterwards, you ask them if the word "sleep" was on the list.
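The logic of the task can be captured in a few lines.  The word list and the associative "familiarity" score below are toy stand-ins of my own, not the actual stimuli or any model from the paper; the point is just the collision between an objective novelty check and a strong familiarity signal:

```python
studied = ["bed", "blankets", "dreams", "pillow", "nap", "doze"]
lure = "sleep"   # semantically related to every studied word, but never shown

# Made-up association sets, standing in for semantic memory.
associations = {
    "sleep": {"bed", "blankets", "dreams", "pillow", "nap", "doze"},
    "chair": {"table", "desk", "wood"},
}

def familiarity(probe):
    """Fraction of studied words the probe is associated with (toy measure)."""
    return len(associations.get(probe, set()) & set(studied)) / len(studied)

# Monitoring check: the subject verifies that no studied word begins with "s",
# so "sleep" is objectively novel...
is_novel = not any(w.startswith("s") for w in studied)

# ...yet it scores as maximally familiar.  That familiarity/novelty clash is
# what the procedure uses to induce déjà vu.
print(is_novel, familiarity(lure))   # True 1.0
```

An unrelated probe like "chair" produces no such clash: it is both novel and unfamiliar, so there's nothing for the brain's error-checker to flag.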

They knew it couldn't have been -- after all, they had all just confirmed that no words on the list began with "s" -- but when asked the question, most of the test subjects experienced an eerie sense of déjà vu: a feeling that the word "sleep" actually was on the list, or perhaps on another similar list they'd seen before, somewhere else.  Urquhart and O'Connor write:
Déjà vu is a nebulous memory experience defined by a clash between evaluations of familiarity and novelty for the same stimulus.  We sought to generate it in the laboratory by pairing a DRM recognition task, which generates erroneous familiarity for critical words, with a monitoring task by which participants realise that some of these erroneously familiar words are in fact novel...  The key omission in [prior] déjà vu generation procedures... is the provision of information allowing the participant to make an evaluation of unfamiliarity or novelty to clash with the experimentally-generated familiarity.  In these procedures, there was no objective standard by which participants could verify that the stimuli provoking familiarity had in fact not previously been encountered.
Interestingly, when the subjects were being tested, they were simultaneously being monitored by an fMRI scanner -- and when the feelings of déjà vu were the most intense, the areas in the brain involved in memory (such as the hippocampus) were not very active.  Instead, the frontal cortex -- the part of the cerebrum responsible for decision-making -- was lighting up like mad.

O'Connor and Urquhart believe that the explanation for this is that déjà vu comes from our memory's error-checking procedure.  When we are forming memories, the frontal cortex is doing a continual spot-check to make sure that what is being placed into memory is accurate.  When an error is noted, it's brought to our attention.  Most of the time, the error is something that can be resolved quickly -- with a conclusion of "okay, that's not the way it happened."  But when the memory being analyzed is close in content to something else, especially something that the conscious brain knows can't have occurred, it generates a conflict that is what results in the sensation of déjà vu.

This is still a tentative finding -- there is a great deal we don't understand about memory and sensory processing, so concluding that the phenomenon of déjà vu is explained is probably premature.  But to my thinking, this is a hell of a lot better explanation than anything else I've heard.  O'Connor and Urquhart are going to keep exploring the phenomenon; as a mysterious sensation that is nearly universal among humans, it certainly cries out for an explanation.  So look for more studies coming down the pike.  And don't forget: you heard it here first.

Tuesday, August 2, 2016

The universality of prejudice

One of the most insidious of biases is the perception that you are not biased.

Of course everyone else has their blind spots, their misapprehensions, their unquestioned assumptions about the world.  You, on the other hand?  You see the world through these perfectly clear lenses.  As Kathryn Schulz put it in her phenomenal TED Talk "On Being Wrong," "Of course we all accept that we're fallible, that we make mistakes in the abstract sense.  But try to think of one thing, one single thing, that you're wrong about now?  You can't do it."

Social psychologists Mark Brandt (of Tilburg University in the Netherlands) and Jarret Crawford (of the College of New Jersey) published a study this week in the journal Social Psychological and Personality Science that delivers a death blow to this perception, and underscores the fact that none of us are free from prejudice.  Our sense that prejudice is the bailiwick of the unintelligent turns out to be less than a half truth.  Your level of cognitive ability doesn't predict whether or not you're prejudiced -- it only predicts the sorts of things you're likely to be prejudiced about.

Through an analysis of survey data from 5,914 people in the United States, Brandt and Crawford drew conclusions that should give all of us pause.  Their results, which seem to be robust, indicate that people of low cognitive ability (as assessed by a test of verbal ability) tend to express prejudice toward groups perceived as liberal or unconventional (such as gays and atheists) and also groups for which membership is not a choice (such as ethnic minorities).  People of high cognitive ability are not less prejudiced, they simply show the opposite pattern -- showing prejudice toward groups perceived as conservative or conventional, and for which membership is by choice (such as Christians, Republicans, the military, and big business).

"There are a variety of belief systems and personality traits that people often think protect them from expressing prejudice," Brandt explains.  "In our prior work we found that people high and low in the personality trait of openness to experience show very consistent links between seeing a group as ‘different from us’ and expressing prejudice towards that group.  The same appears to be true for cognitive ability.

"Whereas prior work by others found that people with low cognitive ability express more prejudice, we found that this is limited to only some target groups.  For other target groups the relationship was in the opposite direction.  For these groups, people with high levels of cognitive ability expressed more prejudice.  So, cognitive ability also does not seem to make people immune to expressing prejudice."

[image courtesy of the Wikimedia Commons]

It's a finding that's well worth keeping in mind.  The key seems to be not in eliminating prejudice (a goal that is probably impossible) but in placing our prejudices out in the open where we can keep an eye on them.  If I (for example) have the inclination to believe that Democrats are wishy-washy bleeding hearts who don't give a rat's ass about national security, it's important for me to keep that bias in my conscious mind -- and to listen more carefully to Democrats when they speak, because I'm more likely to let my assumptions do the thinking for me.  

Even more critical, though, is to keep biases in mind when you're listening to someone you're inclined to agree with.  If, on the other hand, you're prone to thinking that Democrat = correct, be on your guard, because that assumption of righteousness is going to blind you to what you're actually being told.  How many times have we given a pass to someone who has turned out to be spouting nonsense, simply because (s)he belongs to the same political party, religion, or ethnic group as we do?

The bottom line is, be aware of your biases, and don't be afraid to challenge them.  Keep your brain turned on.  The human mind is rife with prejudice, unquestioned assumptions, and sloppy thinking, and that's not just true of the people you disagree with.  It's all of us, all of the time.  The best thinkers aren't the ones who expunge all such mental murk from their brains; they're just the ones who are the most determined to question their own mental set rather than assuming that it must be right about everything.

Saturday, July 11, 2015

The illusion of understanding

I've written before about the Dunning-Kruger effect, the cognitive bias that explains why nearly everyone you ask will rate themselves an above-average driver.  We all have the sense of being competent -- and as studies of Dunning-Kruger have shown, we generally think we're more competent than we really are.

I just ran into a paper from about thirteen years ago that I'd never seen before, and that seems to put an even finer lens on this whole phenomenon.  It explains, I think, why people settle for simplistic explanations for phenomena -- and promptly cease to question their understanding at all.  So even though this is hardly a new study, it was new to me, and (I hope) will be new to my readers.

Called "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," the paper was written by Leonid Rozenblit and Frank Keil of Yale University and appeared in the journal Cognitive Science.  Its results illustrate, I believe, why trying to disabuse people of poor understanding of science can be such an intensely frustrating occupation.

The idea of the paper is a simple one -- to test the degree to which people trust and rely on what the authors call "lay theories:"
Intuitive or lay theories are thought to influence almost every facet of everyday cognition.  People appeal to explanatory relations to guide their inferences in categorization, diagnosis, induction, and many other cognitive tasks, and across such diverse areas as biology, physical mechanics, and psychology.  Individuals will, for example, discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do.  Theories seem to tell us what features to emphasize in learning new concepts as well as highlighting the relevant dimensions of similarity...   
The incompleteness of everyday theories should not surprise most scientists.  We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper. 
Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories.  Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand.  Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions.  They believe that they can explain the world they live in fairly well.
Rozenblit and Keil proceeded to test this phenomenon, and they did so in a clever way.  They were able to demonstrate this illusory sense that we know what's going on around us by (for example) asking volunteers to rate their understanding of how common everyday objects work -- things like zippers, piano keys, speedometers, flush toilets, cylinder locks, and helicopters.  They were then (1) asked to write out explanations of how the objects worked; (2) given explanations of how they actually do work; and (3) asked to re-rate their understanding.

Just about everyone ranked their understanding as lower after they saw the correct explanation.

You read that right.  People, across the board, think they understand things better before they actually learn about them.  On one level, that makes sense; all of us are prone to thinking things are simpler than they actually are, and can relate to being surprised at how complicated some common objects turn out to be.  (Ever seen the inside of a wind-up clock, for example?)  But what is amazing about this is how confident we are in our shallow, incomplete knowledge -- until someone sets out to knock that perception askew.

It was such a robust result that Rozenblit and Keil decided to push it a little, and see if they could make the illusion of explanatory depth go away.  They tried it with a less-educated test group (the initial test group had been Yale students).  Nope -- even people with less education still think they understand everything just fine.  They tried it with younger subjects.  Still no change.  They even told the test subjects ahead of time that they would be asked to explain how the objects worked -- thinking, perhaps, that people might be ashamed to admit to some smart-guy Yale researchers that they didn't know how their own zippers worked, and were bullshitting to save face.

The drop was less when such explicit instructions were given, but it was still there.  As Rozenblit and Keil write, "Offering an explicit warning about future testing reduced the drop from initial to subsequent ratings. Importantly, the drop was still significant—the illusion held."

So does the drop in self-rating occur with purely factual knowledge?  They tested this by doing the same protocol, but instead of asking people for explanations of mechanisms, they asked them to do a task that required nothing but pure recall -- such as naming the capitals of various countries.  Here, the drop in self-rating still occurred, but it was far smaller than with explanatory or process-based knowledge.  We are, it seems, much more likely to admit we don't know facts than to admit we don't understand processes.

The conclusion that Rozenblit and Keil reach is a troubling one:
Since it is impossible in most cases to fully grasp the causal chains that are responsible for, and exhaustively explain, the world around us, we have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load.  It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough.  The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.
Put simply, when we get to "I understand this well enough," we stop thinking.  And for most of us, that point is reached far, far too soon.

And while it really isn't that critical to understand how zippers work as long as it doesn't stop you from zipping up your pants, the illusion of explanatory depth in other areas can come back to bite us pretty hard when we start making decisions on how to vote.  If most of us truly understand far less than we think we do about such issues as the safety of GMOs and vaccines, the processes involved in climate and climate change, the scientific and ethical issues surrounding embryonic stem cells, and even issues like air and water pollution, how can we possibly make informed decisions regarding the regulations governing them?

All the more reason, I think, that we should be putting more time, money, effort, and support into education.  While education doesn't make the illusion of explanatory depth go away, at least the educated are starting from a higher baseline.  We might still overestimate our own understanding, but I'd bet that the understanding itself is higher -- and that's bound to lead to better decisions.

I'll end with a quote by John Green that I think is particularly apt, here:


Saturday, September 6, 2014

Brain on fire

When a new discovery in medical science is made, there's always the danger that gullible and/or hopeful people will misinterpret the results.  The danger is especially high when the discovery has to do with something simple and accessible, such as the finding that transcranial electrical brain stimulation leads to higher cognitive function.

[image courtesy of the Wikimedia Commons]

A year ago, some researchers at Oxford University found that a painless, non-invasive application of "electrical noise," delivered through electrodes attached to the scalp, improved attention, accuracy, and memory, and that the effect lasted for weeks or months.  The procedure is called TRNS (transcranial random noise stimulation), and shows great promise in helping individuals with cognitive impairment -- and perhaps even us ordinary folks who just want a boost in our thinking ability.

"Performance on both the calculation and rote learning tasks improved over the five days, and the former improvements were maintained until six months after training," study leader Dr. Roy Cohen Kadosh told reporters.  "Research has shown that by delivering electricity to the right part of the brain, we can change the threshold of neurons that transmit information in our brain, and by doing that we can improve cognitive abilities in different types of psychological functions...  Our neuro-imaging results suggested that TRNS increases the efficiency with which stimulated brain areas use their supplies of oxygen and nutrients...  Participants receiving TRNS showed superior long-term performance, compared to sham controls, six months later."

So far, pretty cool.  But of course, when a researcher discovers something like this, it opens the door for the greedy to take advantage of the gullible by creating their own electrical stimulation devices, and claiming that "research shows" that they'll help you to think better.

"A headset for gamers, take charge... Overclock your brain," claims one company that sells home electrical stimulation devices.  Another one states: "Can you learn 20-40% quicker, reduce pain, feel better, increase energy or reduce stress with tDCS?  Research studies say, YES!"

There's even a forum on Reddit devoted to the subject -- complete with claims that TRNS can treat everything from autism to schizophrenia.  Less publicized, though, are the accounts of people who have burned their scalps by leaving the electrodes on too long, or by using a unit that delivers a higher-than-recommended voltage.

Through all of this, there have been some voices calling for reason.  Dr. Hannah Maslen, also of Oxford, published a paper calling for regulation of these devices.  "It is becoming increasingly easy for individuals to buy brain-modulating devices online that promise to make the user’s brain work faster, or more effectively, or more creatively," Maslen writes.  "Such devices can involve passing electrical currents through one’s brain or using electromagnetic fields to penetrate the scalp and skull to make neurons fire.  Yet, when purchased outside clinical settings, these devices are unregulated, with no system in place to ensure their safety.  With the market for enhancement technologies expanding, and with devices already crossing international borders, controlling which products are approved for sale is a global issue, potentially requiring international regulatory harmonization."

Steven Novella, neurologist at Yale University, put it even more bluntly.  "Any device with medical claims that it's meant to affect our biological function should be appropriately regulated.  Regulation is the only thing that creates the motivation to spend the money and take the time to do the proper research."

Of course, I'm expecting that this will bring howls of anger from the alternative-medicine crowd, who get their jollies claiming that the medical establishment is actively trying to keep us all sick so that they can make more money.  The truth, of course, is that regulation is about protecting people from their own ignorance.  TRNS does show great promise in improving memory and cognition -- but putting those devices in the hands of people who don't know how to use them correctly is asking for trouble.

So if you're tempted by the hype, my advice is to put away your credit card and read some of the actual research.  It may be that TRNS units will eventually be available to ordinary folks, but right now they're (rightfully) in the hands of medical researchers.  Heaven knows I'd like to think more clearly; but I'm not going to cave in to that desire and end up burning a hole in my scalp.

Call me a Nervous Nellie, but I'm just going to err on the side of caution in this instance.