Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label cognitive science.

Saturday, October 3, 2020

The illusion of understanding

I've written before about the Dunning-Kruger effect, the cognitive bias that explains why just about everyone you ask will claim to be an above-average driver.  We all have the sense of being competent -- and as studies of Dunning-Kruger have shown, we generally think we're more competent than we really are.

I just ran into a paper from about eighteen years ago that I'd never seen before, one that seems to put an even finer lens on this whole phenomenon.  It explains, I think, why people settle for simplistic explanations for phenomena -- and promptly cease to question their understanding at all.  So even though this is hardly a new study, it was new to me, and (I hope) will be new to my readers.

Called "The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth," the paper was written by Leonid Rozenblit and Frank Keil of Yale University and appeared in the journal Cognitive Science.  Its results illustrate, I believe, why trying to disabuse people of poor understanding of science can be such an intensely frustrating occupation.

The idea of the paper is a simple one -- to test the degree to which people trust and rely on what the authors call "lay theories:"
Intuitive or lay theories are thought to influence almost every facet of everyday cognition.  People appeal to explanatory relations to guide their inferences in categorization, diagnosis, induction, and many other cognitive tasks, and across such diverse areas as biology, physical mechanics, and psychology.  Individuals will, for example, discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do.  Theories seem to tell us what features to emphasize in learning new concepts as well as highlighting the relevant dimensions of similarity... 
The incompleteness of everyday theories should not surprise most scientists.  We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper.  
Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories.  Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand.  Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions.  They believe that they can explain the world they live in fairly well.
Rozenblit and Keil proceeded to test this phenomenon, and they did so in a clever way.  They were able to demonstrate this illusory sense that we know what's going on around us by (for example) asking volunteers to rate their understanding of how common everyday objects work -- things like zippers, piano keys, speedometers, flush toilets, cylinder locks, and helicopters.  They were then (1) asked to write out explanations of how the objects worked; (2) given explanations of how they actually do work; and (3) asked to re-rate their understanding.

Just about everyone ranked their understanding as lower after they saw the correct explanation.
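Rozenblit and Keil's protocol amounts to a simple paired pre/post comparison.  Here's a minimal Python sketch of that analysis -- the ratings below are invented for illustration, not the paper's actual data:

```python
# Hypothetical data: each subject rates their understanding of an object
# on a 1-7 scale before writing an explanation, and again after reading
# the correct explanation.  Values are made up for illustration.
pre  = [6, 5, 7, 6, 5, 6, 4, 7]   # initial self-ratings
post = [3, 4, 4, 3, 2, 4, 3, 5]   # re-ratings after seeing the real explanation

drops = [b - a for b, a in zip(pre, post)]
mean_drop = sum(drops) / len(drops)   # 2.25 for these invented numbers

print(f"mean drop in self-rated understanding: {mean_drop:.2f}")
print(f"subjects whose rating fell: {sum(d > 0 for d in drops)}/{len(drops)}")
```

The actual study collected ratings at several points along the way; the core measure, though, is just this drop from initial to final self-rating.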

You read that right.  People, across the board, think they understand things better before they actually learn about them.  On one level, that makes sense; all of us are prone to thinking things are simpler than they actually are, and can relate to being surprised at how complicated some common objects turn out to be.  (Ever seen the inside of a wind-up clock, for example?)  But what is amazing about this is how confident we are in our shallow, incomplete knowledge -- until someone sets out to knock that perception askew.

It was such a robust result that Rozenblit and Keil decided to push it a little, and see if they could make the illusion of explanatory depth go away.  They tried it with a less-educated test group (the initial test group had been Yale students).  Nope -- even people with less education still think they understand everything just fine.  They tried it with younger subjects.  Still no change.  They even told the test subjects ahead of time that they were going to be asked to explain how the objects worked -- thinking, perhaps, that people might be ashamed to admit to some smart-guy Yale researchers that they didn't know how their own zippers worked, and were bullshitting to save face.

The drop was less when such explicit instructions were given, but it was still there.  As Rozenblit and Keil write, "Offering an explicit warning about future testing reduced the drop from initial to subsequent ratings.  Importantly, the drop was still significant—the illusion held."

So does the drop in self-rating occur with purely factual knowledge?  They tested this by doing the same protocol, but instead of asking people for explanations of mechanisms, they asked them to do a task that required nothing but pure recall, such as naming the capitals of various countries.  Here, the drop in self-rating still occurred, but it was far smaller than with explanatory or process-based knowledge.  We are, it seems, much more likely to admit we don't know facts than to admit we don't understand processes.

The conclusion that Rozenblit and Keil reach is a troubling one:
Since it is impossible in most cases to fully grasp the causal chains that are responsible for, and exhaustively explain, the world around us, we have to learn to use much sparser representations of causal relations that are good enough to give us the necessary insights: insights that go beyond associative similarity but which at the same time are not overwhelming in terms of cognitive load.  It may therefore be quite adaptive to have the illusion that we know more than we do so that we settle for what is enough.  The illusion might be an essential governor on our drive to search for explanatory underpinnings; it terminates potentially inexhaustible searches for ever-deeper understanding by satiating the drive for more knowledge once some skeletal level of causal comprehension is reached.
Put simply, when we get to "I understand this well enough," we stop thinking.  And for most of us, that point is reached far, far too soon.

And while it really isn't that critical to understand how zippers work as long as it doesn't stop you from zipping up your pants, the illusion of explanatory depth in other areas can come back to bite us pretty hard when we start making decisions on how to vote.  If most of us truly understand far less than we think we do about such issues as the safety of GMOs and vaccines, the processes involved in climate and climate change, the scientific and ethical issues surrounding embryonic stem cells, and even issues like air and water pollution, how can we possibly make informed decisions regarding the regulations governing them?

All the more reason, I think, that we should be putting more time, money, effort, and support into education.  While education doesn't make the illusion of explanatory depth go away, at least the educated are starting from a higher baseline.  We still might overestimate our own understanding, but I'd bet that the understanding itself is higher -- and that's bound to lead us to make better decisions.

I'll end with a quote by author and blogger John Green that I think is particularly apt, here:


*******************************

To the layperson, there's something odd about physicists' search for (amongst many other things) a Grand Unified Theory -- one elegant model that would unite the fundamental forces.

Why do they think that there is such a theory?  Strange as it sounds, a lot of them say it's because having one force of the four (gravitation) not accounted for by the model, and requiring its own separate equations to explain, is "messy."  Or "inelegant."  Or -- most tellingly -- "ugly."

So, put simply: why do physicists tend to think that for a theory to be true, it has to be elegant and beautiful?  Couldn't the universe just be chaotic and weird, with different facets of it obeying their own unrelated laws, with no unifying explanation to account for it all?

This is the question that physicist Sabine Hossenfelder addresses in her wonderful book Lost in Math: How Beauty Leads Physicists Astray.  She makes a bold claim: that this search for beauty and elegance in mathematical models has diverted theoretical physics into untestable, unverifiable cul-de-sacs, blinding researchers to the reality -- the experimental evidence.

Whatever you think about whether the universe should obey aesthetically pleasing rules, or whether you're okay with weirdness and messiness, Hossenfelder's book will challenge your perception of how science is done.  It's a fascinating, fun, and enlightening read for anyone interested in learning about the arcane reaches of physics.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Tuesday, May 2, 2017

Aesthetic synchrony

Probably most of you have had the fortunate experience of being in a situation where you were completely engaged in what you were doing.  This can be especially powerful when you are being given the chance to experience something novel -- listening to a lecture by a truly masterful speaker, attending a performance of music or theater, visiting a place of great natural beauty -- when you are having what writer Sir Ken Robinson (speaking of masterful lecturers) calls in his talk "Changing Education Paradigms" "an aesthetic experience, when your senses are operating at their peak, when you're present in the current moment, when you're resonating with the excitement of this thing you're experiencing, when you are fully alive."

When this happens, we often say we are "on the same wavelength" with others who are sharing the experience with us.   And now, a team led by Suzanne Dikker of New York University has shown that this idiom might literally be true.

Dikker's team had thirteen test subjects -- twelve high school students and their teacher -- wear portable electroencephalogram headsets for an entire semester of biology classes.  Naturally, some of the topics and activities were more engaging than others, and the researchers had students self-report daily on such factors as how focused they were, how much they enjoyed their teacher's presentation, how much they enjoyed the students they interacted with, and their satisfaction levels about the activities they were asked to take part in.

[image courtesy of the Wikimedia Commons]

Dikker et al. write:
The human brain has evolved for group living.  Yet we know so little about how it supports dynamic group interactions that the study of real-world social exchanges has been dubbed the "dark matter of social neuroscience."  Recently, various studies have begun to approach this question by comparing brain responses of multiple individuals during a variety of (semi-naturalistic) tasks. These experiments reveal how stimulus properties, individual differences, and contextual factors may underpin similarities and differences in neural activity across people...  Here we extend such experimentation drastically, beyond dyads and beyond laboratory walls, to identify neural markers of group engagement during dynamic real-world group interactions.  We used portable electroencephalogram (EEG) to simultaneously record brain activity from a class of 12 high school students over the course of a semester (11 classes) during regular classroom activities.  A novel analysis technique to assess group-based neural coherence demonstrates that the extent to which brain activity is synchronized across students predicts both student class engagement and social dynamics.  This suggests that brain-to-brain synchrony is a possible neural marker for dynamic social interactions, likely driven by shared attention mechanisms.  This study validates a promising new method to investigate the neuroscience of group interactions in ecologically natural settings.
Put simply, what the researchers found is that when the students reported feeling the most engaged, their brain activity actually synced with that of their classmates.  It squares with our subjective experience, doesn't it?  I know when I'm bored, irritated, or angered by something I'm being required to participate in, I tend to unhook my awareness from where I am -- including being less aware of those around me who are suffering through the same thing.
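The "group-based neural coherence" measure can be pictured as the average pairwise correlation between students' brain signals.  The sketch below is hypothetical -- simulated signals and plain Pearson correlation, not Dikker et al.'s actual pipeline, which works on EEG frequency content:

```python
import numpy as np

# Simulate 12 students whose signals share a common "engagement" component,
# then measure group synchrony as the mean pairwise correlation.
rng = np.random.default_rng(0)
n_students, n_samples = 12, 1000

shared = rng.standard_normal(n_samples)            # common component during engagement
signals = np.array([0.6 * shared + 0.8 * rng.standard_normal(n_samples)
                    for _ in range(n_students)])   # each student's simulated trace

corr = np.corrcoef(signals)                        # 12 x 12 correlation matrix
off_diag = corr[~np.eye(n_students, dtype=bool)]   # drop the self-correlations
group_synchrony = off_diag.mean()

print(f"mean pairwise correlation (group synchrony): {group_synchrony:.2f}")
```

In the study's framing, sessions where students reported high engagement would show a higher value of this kind of statistic than sessions where they were bored and "disengaged."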

It's no wonder we call this kind of response "disengaging," is it?

So apparently misery doesn't love company; what loves company is engagement, appreciation, and a sense of belonging.  "The central hub seems to be attention," Dikker says.  "But whatever determines how attentive you are can stem from various sources from personality to state of mind.  So the picture that seems to emerge is that it's not just that we pay attention to the world around us; it's also what our social personalities are, and who we're with."

All the more reason we teachers should focus as much on getting our students hooked on learning as we do on the actual content of the course.  My experience is that if you can get students to "buy in" -- if (in my case) they come away thinking biology is cool, fun, and interesting -- it doesn't matter so much if they can't remember what ribosomes do.  They can fill in the facts later, these days with a thirty-second lookup on Wikipedia.

What can't be looked up is being engaged to the point that you care what ribosomes do.

Unfortunately, in the educational world we've tended to go the other direction.  The flavor of the month is micromanagement from the top down, a set syllabus full of factlets that each student must know, an end product that can fit on a bubble sheet, "quantifiable outcomes" that generate data that the b-b stackers in the Department of Education can use to see if our teachers are teaching and our students learning.  A pity that, as usual, the people who run the business of educating children are ignoring what the research says -- that the most fundamental piece of the puzzle is student engagement.

If you have that, everything else will follow.

Monday, September 15, 2014

Hearing through your skin

I first ran into David Eagleman when a student of mine loaned me his phenomenal book Incognito: The Secret Lives of the Brain.

Even considering that I have a decent background in neuroscience, this book was an eye-opener.  Eagleman, a researcher at Baylor College of Medicine, is not only phenomenally knowledgeable in his field but also a fine writer (and needless to say, those two don't always go together).  His insights about how our own brains work were fascinating, revealing, and often astonishing, and for anyone with an interest in cognitive science, it's a must-read.  (The link above will bring you to the book's Amazon page, should you wish to buy it, which all of you should.)

I've since watched a number of Eagleman's videos, and always come away with the feeling, "This guy is going to turn our understanding of the mind upside down."  And just yesterday, I found out about a Kickstarter project that he's undertaking that certainly makes some strides in that direction.

It's widely known that the brain can use a variety of inputs to get sensory data, substituting another when one of them isn't working.  Back in 2009, some scientists at Wicab, Inc. developed a device called the BrainPort that gave blind people the ability to get visual information about their surroundings, through a horseshoe-shaped output device that sits on the tongue.  A camera acts as a sensor, and transmits visual data into the electrode array on the output device, which then stimulates the surface of the tongue.  After a short training period, test subjects could maneuver around obstacles in a room.
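The kind of mapping a device like the BrainPort performs can be sketched in a few lines: downsample each camera frame to a coarse grid of stimulation intensities, one per electrode.  The grid size and block-averaging scheme here are assumptions for illustration, not the device's actual specifications:

```python
import numpy as np

# Fake grayscale camera frame, pixel values in [0, 1).
frame = np.random.default_rng(1).random((240, 320))

grid_h, grid_w = 20, 20                              # assumed electrode-array resolution
bh, bw = frame.shape[0] // grid_h, frame.shape[1] // grid_w

# Average each block of pixels down to one electrode's stimulation level.
electrodes = frame[:grid_h * bh, :grid_w * bw] \
    .reshape(grid_h, bh, grid_w, bw).mean(axis=(1, 3))

print(electrodes.shape)   # (20, 20)
```

Each cell of the resulting grid would then drive one electrode on the tongue array, so brighter regions of the scene become stronger stimulation.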

And the coolest part is that the scientists found that the device was somehow stimulating the visual cortex of the brain -- the brain figured out that it was receiving visual data, even though the information was coming through the tongue.  And the test subjects were sensing visual images of their surroundings, even though nothing whatsoever was coming through their eyes.

So Eagleman had an idea.  Could you use a tactile sense to replace any other sense?  He started with trying to substitute tactile stimulation for hearing -- because, after all, they both work more or less the same way.  Touch and hearing both function because of mechanoreceptors, which are nerves that fire due to vibration or deflection.  (Taste, which is a chemoreceptor, and sight, an electromagnetic receptor, are much further apart in how they function.)



The result is the VEST -- a vest equipped with hundreds of tiny motors and paired with a smartphone.  The phone picks up sounds, and the transducer array turns them into a pattern of vibrations on your upper body.  And just as with the BrainPort, a short training period is all that's needed before you can, effectively, hear with your skin.
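One plausible way to do that sound-to-vibration mapping -- a sketch under assumptions, since the device's actual signal processing isn't described here -- is to split each audio frame's spectrum into one frequency band per motor and drive each motor by its band's energy:

```python
import numpy as np

rng = np.random.default_rng(2)
sample_rate, n_motors = 16000, 32       # assumed values for illustration
audio = rng.standard_normal(1024)       # fake 64 ms audio frame

spectrum = np.abs(np.fft.rfft(audio))   # magnitude spectrum of the frame
bands = np.array_split(spectrum, n_motors)   # one frequency band per motor
energies = np.array([b.mean() for b in bands])

# Normalize to 0-255 PWM-style duty cycles for the vibration motors.
duty = (255 * energies / energies.max()).astype(int)

print(duty.shape, duty.max())   # (32,) 255
```

Run frame after frame, this turns speech into a constantly shifting vibration pattern across the torso -- exactly the kind of structured input the brain could plausibly learn to decode, as it does with the BrainPort.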

Trials have already allowed deaf individuals to understand words at a far higher rate than chance guessing; and you can only imagine that the skill, like any other, would improve with time.  Eagleman writes:
We hypothesize that our device will work for deaf individuals, and even be good enough to provide a new perception of hearing. This itself has a number of societal benefits: such a device would cost orders of magnitude less than cochlear implants (hundreds-to-thousands as opposed to tens-of-thousands), be discrete, and give the wearer the freedom to not be attached to it all the time. The cost effectiveness of the device would also make it realistic to distribute it widely in developing countries.
More exciting than this, however, is what this proof of principle might enable: the ability to feed all sorts of new and profound sensory information into our brains.
I find this sort of thing absolutely fascinating.  The brain, far from being the static and rigid device we used to believe it was, has amazing plasticity.  Given new sources of information, it responds by integrating those into the data set it uses to allow us to explore the world.  And even though the VEST is currently being considered primarily for restoration of a sense to individuals who have lost one, I (like Eagleman) can't help but wonder about its use in sensory enhancement.

What sorts of things are we missing, through our imperfect sensory apparatus, that such a device might allow us to see?

Consider giving Eagleman's Kickstarter your attention -- he's the sort of innovative genius who could well change the world.  Just what he's done thus far is phenomenal, moving us into possibilities that heretofore were confined to science fiction.

And man, do I want to try one of those vests.  I hear just fine, but still.  How cool would that be?

Saturday, March 29, 2014

The stone hand illusion

One of the reasons I trust science is that I have so little trust in my own brain's ability to assess correctly the nature of reality.

Those may sound like contradictions, but they really aren't.  Science is a method that allows us to evaluate hard data -- measurements by devices that are designed to have no particular biases.  By relying on measurements from machines, we are bypassing our faulty sensory equipment, which can lead us astray in all sorts of ways.  In Neil deGrasse Tyson's words, "[Our brains] are poor data-taking devices... that's why we have machines that don't care what side of the bed they woke up on that morning, that don't care what they said to their spouse that day, that don't care whether they had their morning caffeine.  They'll get the data right regardless."

But we still believe that we're seeing what's real, don't we?  "I saw it with my own eyes" is still considered the sine qua non for establishing what reality is.  Eyewitness testimony is still the strongest evidence in courts of law.  Because how could it be otherwise?  Maybe we miss minor things, but how could we get it so far wrong?

A scientist in Italy just knocked another gaping hole in our confidence that our brain can correctly interpret the sensory information it's given -- this time with an actual hammer.

Some of you may have heard of the "rubber hand illusion," created in an experiment back in 1998 by Matthew Botvinick and Jonathan Cohen.  In this experiment, a rubber hand is placed in view of a person whose actual hand is shielded from view by a curtain.  The rubber hand is stroked with a feather at the same time as the person's real (but out-of-sight) hand receives a similar stroke -- and within minutes, the person becomes strangely convinced that the rubber hand is his own.

The Italian experiment, which was just written up this week in Discover Online, substitutes an auditory stimulus for the visual one -- with an even more startling result.

Irene Senna, professor of psychology at Milano-Bicocca University in Milan, rigged up a similar scenario to Botvinick and Cohen's.  A subject sits with one hand through a screen.  On the back of the subject's hand is a small piece of foil which connects an electrical lead to a computer.  The subject sees a hammer swinging toward her hand -- but the hammer stops just short of smashing her hand, and only touches the foil gently (but, of course, she can't see this).  The touch of the hammer sends a signal to the computer -- which then produces a hammer-on-marble chink sound.

And within minutes, the subject feels like her hand has turned to stone.

[image courtesy of the Wikimedia Commons]

What is impressive about this illusion is that the feeling persists even after the experiment ends and the screen is removed -- and even though the test subjects knew what was going on.  Subjects reported afterwards that their hands felt colder, stiffer, heavier, and less sensitive, and that they had difficulty bending their wrists.

To me, the coolest thing about this is that our knowledge centers, the logical and rational prefrontal cortex and associated areas, are completely overcome by the sensory-processing centers when presented with this scenario.  We can know something isn't real, and simultaneously cannot shake the brain's decision that it is real.  None of the test subjects was crazy; they all knew that their hands weren't made of stone.  But presented with sensory information that contradicted that knowledge, they couldn't help but come to the wrong conclusion.

And this once again illustrates why I trust science, and am suspicious of eyewitness reports of UFOs, Bigfoot, ghosts, and the like.  Our brains are simply too easy to fool, especially when emotions (particularly fear) run high.  We can be convinced that what we're seeing or hearing is the real deal, to the point that we are unwilling to admit the possibility of a different explanation.

But as Senna's elegant little experiment shows, we just can't rely on what our senses tell us.  Data from scientific measuring devices will always be better than pure sensory information.  To quote Tyson again:  "We think that the eyewitness testimony of an authority -- someone wearing a badge, or a pilot, or whatever -- is somehow better than the testimony of an average person.  But no.  I'm sorry... but it's all bad."

Friday, March 7, 2014

Type tests, weird verbiage, and Pod'Lair

It seems like lately, self-inquiry tests are all the rage.

They range from the banal ("What Harry Potter character are you?"  "What rock star are you?"  "What Joss Whedon character are you?") to the tried and true (the Myers-Briggs Type Indicator is still really popular) to the absurd (the various sorts of astrology).  And on the face of it, there's nothing wrong with the urge to find out more about what makes you tick.  After all, the legend "Gnothi Seauton" (Know Yourself) was inscribed at the Temple of Apollo at Delphi over 2,500 years ago, and those Greek philosophers were no slouches in the wisdom department.

[image courtesy of photographer Thomas Hawk and the Wikimedia Commons]

Still, some of them seem to be making unduly heavy weather out of the whole thing, and I ran into an example of this just the other day.  Called "Pod'Lair," for no reason I could find, it is described as follows:
Pod'Lair methodology reads a person's innate nature, what we call their Mojo, with an accuracy never before possible, which allows humans to know themselves in truly unprecedented ways, ending the debate on whether or not people have qualia and what it involves...

Once you understand the basics of Pod'Lair theory, and you've begun to see the Mojo phenomenon for yourself, it improves your understanding of and interaction with every facet of your life, including: education, career, relationships, community, politics, spirituality...basically all of existence.
Well, naturally, I was curious about what my Mojo was, even though it's really hard for me to take anything with the name "Mojo" particularly seriously.  Finding out, however, required sending in a ten-minute video of myself, which I wasn't going to do.  The whole thing apparently hinges on subtle facial-movement cues that are supposedly indicators of personality types, a bit like Bandler & Grinder's neuro-linguistic programming (which honesty compels me to mention has also been flagged as having many of the characteristics of pseudoscience).  So I went to the "About Us" page, where I read passages like the following:
The Mojo Dojo Pathway is the Universal Pathway for the Language of Mojo. This pathway is focused on Mojo Reading of yourself and others, in order to understand how Mojos interact with one another in Social Alchemy. This is the objective study of Mojo, as it applies to the relationships within the Human Matrix.
Well, I think I'm at least above average at reading comprehension, but while reading a lot of the stuff on this site I wore a perplexed expression, head tilted a little, rather like my dog does when I try to explain something complex and difficult to him, like why he shouldn't try to hump the cat.  Unfortunately, unlike my dog, I wasn't able simply to wag my tail and forget about it all.  Some sort of perverse drive kept me working my way through this website:
It is essential to know how to rein in your top two Powers. Modulation causes stress on the system, which is Keening. The individual Mojos begin to have shut-down mechanisms designed for self-protection and energy conservation. These are healthy to a point, but over the long term they can shut the system down in a way that is damaging, temporarily or permanently, which is known as Stress Lock.
No, really, I shouldn't read any more, I really think that's...
You can generate energy from within, but as you generate that energy, it encounters the Bubble of your home and responds to it. Much like a creature in the womb reaches out consciously to get nutrients, it needs to be a conducive womb for the creature to get what it needs. This sounds simpler than it is because in many ways humans have stepped away from their Bubble being an essential part of their harmonious existence, having been told what to do by Bubbles that are already in place.
I mean, I have other things to do this morning, and it's not necessary that I...
Spirit Forms refers to the Unconscious Genius that every human has. The unconscious portions of the psyche often present themselves as autonomous entities that when dialogued with improves a human's understanding and performance in any endeavor, be it artistic, scientific, athletic, etc. The Language of Spirit Forms includes the Pathways of Spirit Ambassador (Universal Pathway) and Temple of Spirits (Personal Pathway).
Merciful heavens, please stop...
Humanity is within Gaia, Gaia is within the Cosmos, the Cosmos is within Natural Law, and this all came to be where we are now. To attempt to tell the Human Collective, Gaia, Cosmos, and everything above it what to do is the height of arrogance.
OKAY.  Thank you very much.  So anyway, after I spent way more time trying to read this stuff than I should have, and came away with the understanding that Humans Are Heroic Love And Cosmic Energy, or something, I did a little digging and found that evidently some cognitive psychologists think there might actually be some legitimacy to the whole thing (read one interesting thread here, where Pod'Lair is considered seriously alongside the MBTI and neuro-linguistic programming theory).

What strikes me, though, is the question of how a skeptic, with a reasonable background in human neurology, could decide if there's anything to this at all from the outside -- the writing is so dense, and (frankly) so mixed up with woo-woo verbiage, that it's impossible for me to tell.  Even one indicator that the whole thing had been tested against other sorts of psychological assessments, and found to have value, would have made a difference.  Instead, under "Evidence," we're just given some vague hand-waving arguments coupled with a much longer section about why Jung, Maslow, MBTI, typology, and astrology (!) are all wrong, and that's supposed to be enough to go on.

Oh, and we're also given descriptions of the 32 basic Mojo types, including "Xyy'nai," which "engage the dynamics of human communities through interpersonal connection, social awareness, and shepherding, creating an attentive and diplomatic character." We are also told that example "Xyy'nais" are Barack Obama and Miley Cyrus.

Because those two clearly have so much in common.

Now, mind you, it's not that I think there's anything wrong with pursuing self-knowledge.  Far from it.  It's more that I have the sense that any test that purports to divide all of humanity into a small number of classes based upon artificial distinctions is doomed to failure.  And I also wonder whether any of these type tests -- be it the MBTI, Pod'Lair, or "What Doctor Who Character Are You?" -- tells us anything about ourselves that we couldn't have figured out with an hour's honest self-reflection.

But being an inquisitive sort, I am tempted to send in a video.  I'd like to see what they'd make of my rather unfortunate face.  And to anyone who goes to the Pod'Lair site (which I linked above), and decides to participate -- do come back here and post the results.  Like I said before: there's nothing like actual results to support a conjecture.  And even if the evaluation of its accuracy would have to come from one's impression of oneself, it'd be interesting to see whether the whole thing has any basis in reality.