Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, February 19, 2022

Remembrance of things past

Like many People Of A Certain Age, I'm finding that my memory isn't what it used to be.

I walk into a room, and then say, "Why did I come in here?"  I'll think, "I don't need a grocery list, I'm just going for a few things," and come back with half of them.  We just had our dogs in for their annual checkups and shots, and there were a few things for each of them we wanted to ask the vet about.  My wife and I dutifully sat down and made a list -- and both of us forgot to put something on the list that we'd talked about only the previous day.

It's shown up, too, in more academic pursuits.  For my birthday last year my wife got me an online course through Udemy in beginning Japanese, a language I've always wanted to learn.  My dad had been stationed in Japan in the 1950s, and he learned enough of the language to get by; I grew up around the Japanese art and music my dad brought back with him, and became a Japanophile for life.  So I was thrilled to have the opportunity to study the country's unique and beautiful language.  The course starts out with a brief pronunciation guide, then launches into the hiragana -- one of three scripts used in written Japanese.  Each of the 46 characters stands for either a phoneme or a syllable, and some of them look quite a bit alike, so it's a lot to remember.  I have flash cards I made for all 46, and there are some I consistently miss, every single time I go through them.

When I flip the card over, my response is always, "Damn!  Of course!  Now I remember it!"  I recognize the character immediately, and can often even remember the mnemonic the teacher suggested to use in recalling it.  I'm getting there -- of the 46, there are about ten that I still struggle with -- but I know that twenty years ago, I'd have them all down cold by now.

Kids playing a memory game [Image is in the Public Domain]

Understandably, there's a nasty little thought in the back of my mind about senility and dementia.  My mother's sister had Alzheimer's -- to my knowledge, the only person in my extended family to suffer from that horrific and debilitating disease -- and I watched her slow slide from a smart, funny woman who could wipe the floor with me at Scrabble, did crossword puzzles in ink, and read voraciously, to a hollow, unresponsive shell.  I can think of no more terrifying fate. 

A new piece of research in Trends in Cognitive Sciences has to some extent put my mind at ease.  In "Cluttered Memory Representations Shape Cognition in Old Age," psychologists Tarek Amer (of Columbia University), Jordana Wynn (of Harvard University), and Lynn Hasher (of the University of Toronto) found that the forgetfulness a lot of us experience as we age isn't a simple loss of information; it's a loss of access to information that's still there, triggered by the clutter of memories from the past.

The authors write:
Wisdom and knowledge, cognitive functions that surely depend on being able to access and use memory, grow into old age.  Yet, the literature on memory shows that intentional, episodic memory declines with age.  How are we to account for this paradox?  To do so, we need to understand three aspects of memory differences associated with aging, two of which have received extensive investigation: age differences in memory encoding and in retrieval.  A third aspect, differences in the contents of memory representations, has received relatively little empirical attention.  Here, we argue that this aspect is central to a full understanding of age differences in memory and memory-related cognitive functions.  We propose that, relative to younger adults, healthy older adults (typically between 60 and 85 years of age) process and store too much information, the result of reductions in cognitive control or inhibitory mechanisms.  When efficient, these mechanisms enable a focus on target or goal-relevant information to the exclusion (or suppression) of irrelevant information.  Due to poor control (or reduced efficiency), the mnemonic representations of older adults can include: (i) recently activated but no-longer-relevant information; (ii) task-unrelated thoughts and/or prior knowledge elicited by the target information; and/or (iii) task-irrelevant information cued by the immediate environment.  This information is then automatically bound together with target information, creating cluttered memory representations that contain more information than do those of younger adults.

It's like trying to find something in a cluttered, disorganized attic.  Not only is it hard to locate what you're looking for, you get distracted by the other things you run across.  "Wow, it's been years since I've seen this!  I didn't even know this was up here!... wait, what am I looking for?"

I've noticed this exact problem in the kitchen.  I'm the chief cook in our family, and I love to make complex dinners with lots of ingredients.  I've found that unless I want to make a dozen trips to the fridge or cabinets to retrieve three items, I need to focus on one thing at a time.  Get a green pepper from the vegetable crisper.  Find the bottle of cooking sherry.  Go get the bottle of tabasco sauce from the table.  If I try to keep all three in my mind at once, I'm sure to return to the stove and think, "Okay, what the hell do I need, again?"

I wonder if this mental clutter is at the heart of my struggle with memorizing the hiragana characters in Japanese.  I've done at least a cursory study of about a dozen languages -- I'm truly fluent in only a couple, but my master's degree in historical linguistics required me to learn at least the rudiments of the languages whose history I was studying.  Could my difficulty in connecting the Japanese characters to the syllables they represent be because my Language Module is clogged with Old Norse and Welsh and Scottish Gaelic and Icelandic, and those all get in the way?

In any case, it's kind of a relief that I'm (probably) not suffering from early dementia.  It also gives me an excuse the next time my wife gets annoyed at me for forgetting something.  "I'm sorry, dear," I'll say.  "I'd have remembered it, but my brain is full.  But at least I remembered that the character yo looks like a yo-yo hanging from someone's finger!"

Nah, I doubt that'll work, and the fact that I remembered one of the Japanese characters instead of stopping by the store to pick up milk and eggs will only make it worse.  When I want to be sure not to forget something, I guess I'll have to keep making a list.

The only problem then is that I need to remember where I put the list.

***************************************

People made fun of Donald Rumsfeld for his statement that there are "known unknowns" -- things we know we don't know -- but a far larger number of "unknown unknowns," which are all the things we aren't even aware that we don't know.

While he certainly could have phrased it a little more clearly -- and understand that I'm not in any way defending Donald Rumsfeld's other actions and statements -- he was right in this case.  It's profoundly humbling to find out how much we don't know, even about subjects in which we consider ourselves experts.  One of the most important things we can do is keep in mind not only that we might have things wrong, and that additional evidence may completely overturn what we thought we knew, but also that there are some things so far out of our ken that we may not even know they exist.

These ideas -- the perimeter of human knowledge, and the importance of being able to learn, relearn, change directions, and accept new information -- are the topic of psychologist Adam Grant's book Think Again: The Power of Knowing What You Don't Know.  In it, he explores not only how we are all riding around with blinders on, but how to take steps toward removing them, starting with not surrounding yourself with an echo chamber of like-minded people who might not even recognize that they have things wrong.  We should hold our own beliefs up to the light of scrutiny.  As Grant puts it, we should approach issues like scientists looking for the truth, not like a campaigning politician trying to convince an audience.

It's a book that challenges us to move past our stance of "clearly I'm right about this" to the more reasoned approach of "let me see if the evidence supports this."  In this era of media spin, fake news, and propaganda, it's a critical message -- and Think Again should be on everyone's to-read list.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Tuesday, February 19, 2019

The power of phonemes

Language is defined as arbitrary symbolic communication.

"Symbolic" because spoken sounds or written character strings stand for concepts, actions, or objects; "arbitrary" because those sounds or characters have no logical connection to what they represent.  The word "dog" is no more inherently doggy than the French word (chien) or Swahili word (mbwa).  The exceptions, of course, are onomatopoeic words like "bang," "pop," "splat," and so on.

That's the simple version, anyhow.  Reality is always a lot messier.  There are words that are sort-of-onomatopoeic; "scream" sounds a lot screamier than "yell" does, even though they mean approximately the same thing.  And it's the intersection between sound and meaning that is the subject of the research of cognitive psychologist Arthur Glenberg of Arizona State University.

In an article in The Conversation, Glenberg provides some interesting evidence that even in ordinary words, the sound/meaning correspondence may not be as arbitrary as it seems at first.  It's been known for a while that hearing spoken language elicits response from the parts of the brain that would be activated if what was heard was reality; in Glenberg's example, hearing the sentence "The lovers held hands as they walked along the moonlit tropical beach" causes a response not only in the emotional centers of the brain, but in the visual centers and (most strikingly) in the part of the motor center that coordinates walking.  When hearing language, then, our brains on some level become what we hear.

Glenberg wondered if it might work the other way -- if altering the sensorimotor systems might affect how we interpret language.  Turns out it does.  Working with David Havas, Karol Gutowski, Mark Lucarelli, and Richard Davidson of the University of Wisconsin-Madison, Glenberg showed that individuals who had received Botox injections into their foreheads (which temporarily paralyzes the muscles used in frowning) were less able to perceive the emotional content of written language that would have ordinarily elicited a frown of anger.

Then there's the kiki-booba experiment -- the modern version of a test Wolfgang Köhler ran all the way back in 1929, using the made-up words "takete" and "baluba" -- which showed that at least in some cases, the sound/meaning correspondence isn't arbitrary at all.  Speakers of a variety of languages were shown the following diagram:

They were told that in a certain obscure language, one of these shapes is called "kiki" and the other is called "booba," and then were asked to guess which is which.  Just about everyone -- regardless of the language they speak -- thinks the left-hand one is "kiki" and the right-hand one is "booba."  The "sharpness" of "kiki" seems to fit more naturally with a spiky shape, and the "smoothness" of "booba" with a rounded one.

So Glenberg decided to go a step further.  Working with Michael McBeath and Christine S. P. Yu, Glenberg gave native English speakers a list of ninety word pairs where the only difference was that one had the phoneme /i/ and the other the phoneme /ʌ/, such as gleam/glum, seek/suck, seen/sun, and so on.  They were then asked which of each pair they thought was more positive.  Participants picked the /i/ word 2/3 of the time -- far more than you'd expect if the relationship between sound and meaning was truly arbitrary.
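To put a number on "far more than you'd expect": here's a quick back-of-the-envelope check of my own (a toy simulation, not anything from the study itself), treating each of the ninety pairs as a fair coin flip for a single judge.  Picking the /i/ word in at least two-thirds of the pairs purely by chance turns out to be vanishingly unlikely.

```python
import random

# Toy check (not the authors' analysis): if sound/meaning were truly arbitrary,
# each of the 90 pairs would be a 50/50 guess.  How often would a judge pick
# the /i/ word in at least two-thirds (60) of 90 pairs just by luck?
TRIALS = 100_000
PAIRS = 90
THRESHOLD = 60  # two-thirds of 90

hits = sum(
    1
    for _ in range(TRIALS)
    if sum(random.random() < 0.5 for _ in range(PAIRS)) >= THRESHOLD
)
print(f"Chance of >= {THRESHOLD}/{PAIRS} by luck: about {hits / TRIALS:.4%}")
# Prints something on the order of 0.1% -- so a consistent two-thirds
# preference isn't noise.
```

The study's own statistics are more sophisticated than this, of course, but the point stands: a consistent two-thirds preference across ninety pairs is nowhere near chance.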

"We propose that this relation arose because saying 'eee' activates the same muscles and neural systems as used when smiling – or saying 'cheese!'" Glenberg writes.  "In fact, mechanically inducing a smile – as by holding a pencil in your teeth without using your lips – lightens your mood.  Our new research shows that saying words that use the smile muscles can have a similar effect.

"We tested this idea by having people chew gum while judging the words.  Chewing gum blocks the systematic activation of the smile muscles.  Sure enough, while chewing gum, the judged difference between the 'eee' and 'uh' words was only half as strong."

Glenberg then speculates about the effect on our outlook when we hear hateful speech -- if the constant barrage of fear-talk we're currently hearing from politicians actually changes the way we think whether or not we believe what we're hearing.  "The language that you hear gives you a vocabulary for discussing the world, and that vocabulary, by producing simulations, gives you habits of mind," he writes.  "Just as reading a scary book can make you afraid to go in the ocean because you simulate (exceedingly rare) shark attacks, encountering language about other groups of people (and their exceedingly rare criminal behavior) can lead to a skewed view of reality...  Because simulation creates a sense of being in a situation, it motivates the same actions as the situation itself.  Simulating fear and anger literally makes you fearful and angry and promotes aggression.  Simulating compassion and empathy literally makes you act kindly.  We all have the obligation to think critically and to speak words that become humane actions."

To which I can only say: amen.  I've been actively trying to stay away from social media lately, especially Twitter -- considering the current governmental shitstorm in the United States, Twitter has become a non-stop parade of vitriol from both sides.  I know it's toxic to my own mood.  It's hard to break the addiction, though.  I keep checking back, hoping that there'll be some positive development, which (thus far) there hasn't been.  The result is that the ugliness saps my energy, makes everything around me look gray and hopeless.

All of it brings home a quote by Ken Keyes, which seems like a good place to end: "A loving person lives in a loving world.  A hostile person lives in a hostile world.  Everyone you meet is your mirror."  This seems to be exactly true -- all the way down to the words we choose to speak.

***************************

You can't get on social media without running into those "What Star Trek character are you?" and "Click on the color you like best and find out about your personality!" tests, which purport to give you insight into yourself and your unconscious or subconscious traits.  While few of us look at these as any more than the games they are, there's one personality test that a great many people, including a lot of counselors and psychologists, take seriously: the Myers-Briggs Type Indicator, which boils you down to where you fall on four scales -- extrovert/introvert, sensing/intuition, thinking/feeling, and judging/perceiving.

In The Personality Brokers, author Merve Emre looks not only at the test but how it originated.  It's a fascinating and twisty story of marketing, competing interests, praise, and scathing criticism that led to the mother/daughter team of Katharine Briggs and Isabel Myers developing what is now the most familiar personality inventory in the world.

Emre doesn't shy away from the criticisms, but she is fair and even-handed in her approach.  The Personality Brokers is a fantastic read, especially for anyone interested in psychology, the brain, and the complexity of the human personality.






Thursday, January 3, 2019

Political change blindness

Like many of my fellow Americans, I had hoped to get through the holiday season without conflict, but was thwarted by the acrimonious times we live in.  As these things go, the conflict I was in was pretty mild.  No dishes were thrown, no fists pounded on tables, no ugly epithets hurled, and we all parted still more or less as friends.

Even so, it wasn't what I would call pleasant.  As someone who despises conflict, and doesn't enjoy either politics or argument for their own sakes, I would very much have liked to avoid it.  One of the things that struck me -- this always strikes me in these situations -- was the sheer immovability of the participants.  No one budged one iota from their positions, not after an hour of heated discussion.  To be fair, neither did I.  But the fact that no one changed even the smallest bit of their opinion brought home how utterly pointless it all was.

We're all rock-solid-sure of our views, pretty much all the time, aren't we?  Well, maybe not as much as we think, to judge from new research out of Lund University (Sweden), by cognitive scientists Lars Hall, Petter Johansson, Andreas Lind, Philip Pärnamets, and Thomas Strandberg.  In their paper that came out recently in the Journal of Experimental Psychology, "False Beliefs and Confabulation Can Lead to Lasting Changes in Political Attitudes," they showed that a simple trick -- a bit of sleight-of-hand -- can lead people to defend a view they originally argued against.

[Image licensed under the Creative Commons: Anger During a Protest by David Shankbone, CC BY-SA 3.0]

What they did was striking in its simplicity.  First, they showed test subjects pairs of photographs of human faces, asking them to decide which of each pair was the more attractive.  The researchers then surreptitiously swapped the chosen (higher-rated) photograph for the rejected one, and went back through the choices, saying, "This is the one you chose.  Do you still feel that way, or do you want to reconsider your decision?"

Two-thirds of the subjects never caught on.

Well, you might be thinking, that's just about physical attraction, which can change pretty fluidly, and isn't that critical anyhow.  But the researchers went a step further -- pulling the same kind of trick, but this time with political statements (for example, "It is more important for a society to promote the welfare of the citizens than to protect their personal integrity").  In the review-your-decision phase of the test, one of the statements each subject had agreed with was quietly replaced with its diametric opposite.

Once again, two-thirds never noticed the change, and they were just as articulate in defending their new position -- the one that, minutes earlier, had been the opposite of what they believed -- as they were in defending opinions they'd held all along.

The real kicker: when they were retested a week and then three weeks later, most of them stuck with the view they hadn't originally held but had been led to justify.

And didn't even know they'd changed.

I don't know about you, but I find this kind of hopeful.  It shows that human perception and memory are even more unreliable than I'd realized, which is rather humbling.  But it does mean that we can shift our views.  I suspect the most powerful aspect was that the subjects were asked to defend a position they'd initially disagreed with; because they didn't recognize it as contrary to their own beliefs, they considered the other side's views without the emotional part of the brain slamming the door first.  This squares with an experience I had long ago.  A teacher in a high school English class had us all write position papers, and then debate them -- but we had to research and defend the opposite stance.  If your paper was about strengthening gun control laws, you had to argue in the debate on the side of loosening them.

It was an intensely uncomfortable experience, but it did make me aware of the fact that my opponents' views did have support, that they weren't simply unfounded and unjustified opinions.  I don't think the exercise changed my particular viewpoint, but it did make me appreciate that there was another defensible side.

So maybe political discourse isn't as hopeless as it seems.  It remains to be seen, however, how to engage this mental plasticity without the emotional brain screaming it down -- which it did with the argument I was in over the holidays.  But if we could, we might find that the acrimony largely vanishes -- and that our opinions may not be as far apart as they'd seemed.

****************************************

This week's Skeptophilia book recommendation is one of personal significance to me -- Michael Pollan's latest book, How to Change Your Mind.  Pollan's phenomenal writing in tours de force like The Omnivore's Dilemma and The Botany of Desire shines through here, where he takes on a controversial topic -- the use of psychedelic drugs to treat depression and anxiety.

Hallucinogens like DMT, LSD, and psilocybin have long been classified as Schedule I drugs -- chemicals that are off-limits even for research except through a rigorous and time-consuming approval process that seldom results in a thumbs-up.  As a result, most researchers in mood disorders haven't even considered them, looking instead at more conventional antidepressants and anxiolytics.  It's only recently that there's been renewed interest, when it was found that a single administration of a drug like ketamine, under controlled conditions, was enough to alleviate intractable depression, not just for hours or days but for months.

Pollan looks at the subject from all angles -- the history of psychedelics and why they've been taboo for so long, the psychopharmacology of the substances themselves, and the people whose lives have been changed by them.  It's a fascinating read -- and I hope it generates a sea change in our attitudes toward chemicals that could help literally millions of people deal with disorders that can rob their lives of pleasure, satisfaction, and motivation.

[If you purchase the book from Amazon using the image/link below, part of the proceeds goes to supporting Skeptophilia!]




Saturday, August 20, 2016

Memory offload

A couple of years ago, I had a student who had what seemed to me a weird approach to figuring things out.  When presented with a question he didn't know the answer to, his immediate response was to pull out his school-issued iPad and Google it.  Often, he didn't even give his brain a chance to wrestle with the question; if the answer wasn't immediately obvious, out came the electronics.

This became an even bigger obstacle when we were studying genetics.  Genetics is, more than anything else at the introductory-biology level, about learning a process.  There are a few important terms -- recessive, dominant, phenotype, allele, and so on -- but the point is to learn a systematic way of thinking about how genes work.

But given a problem -- a set of data that (for example) would allow you to determine whether the gene for Huntington's disease is recessive or dominant -- he would simply look it up.
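(Here, for what it's worth, is the kind of reasoning he was skipping -- a toy sketch of my own, with made-up family data: the telltale sign of a recessive trait is an affected child born to two unaffected parents, whereas a dominant allele like the one behind Huntington's disease essentially never skips a generation that way.)

```python
# Toy sketch (mine, with hypothetical data): deciding whether a trait looks
# dominant or recessive from pedigree-style observations.
# Each family is (father_affected, mother_affected, child_affected).
families = [
    (True,  False, True),
    (False, False, False),
    (True,  False, False),
    (False, True,  True),
]

def looks_recessive(families):
    """A recessive trait can show up in a child of two unaffected parents."""
    return any(child and not father and not mother
               for father, mother, child in families)

if looks_recessive(families):
    print("Affected children with two unaffected parents -> consistent with recessive")
else:
    print("Every affected child has an affected parent -> consistent with dominant")
```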

"What have you learned by doing that?" I asked him, trying to keep the frustration out of my voice.

"I got the right answer," he said.

"But the answer isn't the point!"  Okay, at that point my frustration was pretty clear.

I think the issue I had with this student comes from two sources.  One is the education system's unfortunate emphasis on Getting The Right Answer -- that if you have The Right Answer on your paper, it doesn't matter how you got it, or whether you really understand how to get there.  But the other is our increasing reliance on what amounts to external memory -- usually in the form of the internet.  When we don't know something, the ease and accessibility of answers online makes us default to that, rather than taking the time to search our own memories for the answer.

[image courtesy of the Wikimedia Commons]

That latter phenomenon was the subject of a study that was published this week in the journal Memory.  Called "Cognitive Offloading: How the Internet is Increasingly Taking Over Human Memory," the study, by cognitive psychologists Benjamin Storm, Sean Stone, and Aaron Benjamin, looked at how people approach the recall of information, and found that once someone has started relying on the internet, it becomes the go-to source, superseding one's own memory:
The results revealed that participants who previously used the Internet to gain information were significantly more likely to revert to Google for subsequent questions than those who relied on memory.  Participants also spent less time consulting their own memory before reaching for the Internet; they were not only more likely to do it again, they were likely to do it much more quickly.  Remarkably, 30% of participants who previously consulted the Internet failed to even attempt to answer a single simple question from memory.
This certainly mirrors my experience with my students.  Not all of them are as hooked to their electronics as the young man in my earlier anecdote, but it is becoming more and more common for students to bypass thinking altogether and jump straight to Google.

"Memory is changing," lead author Storm said.  "Our research shows that as we use the Internet to support and extend our memory we become more reliant on it.  Whereas before we might have tried to recall something on our own, now we don't bother.  As more information becomes available via smartphones and other devices, we become progressively more reliant on it in our daily lives."

What concerns me is something the researchers say was outside the scope of their study: what effect this might have on our own cognitive processes.  It's one thing if the internet becomes our default while our memories stay intact, ready to step in should the Almighty Google not be available.  It's entirely another if our continual reliance on external "offloaded" memory ultimately weakens our own ability to process, store, and recall.  It's not as far-fetched as it sounds; there have been studies suggesting that mental activity can stave off or slow down dementia, so the "if you don't use it, you lose it" aphorism may apply just as much to our brains as it does to our muscles.

In any case, I'm becoming more and more adamant about students putting away the electronics.  They don't question the benefits of doing calisthenics in P.E. (although they complain about it); it's equally important to do the mental calisthenics of processing and recalling without leaning on the crutch of the internet.  And from the research of Storm et al., it's sounding like the automatic jump to "let's Google it" is a habit a lot of us need to break.

Tuesday, August 9, 2016

Linguistic brain atlas

Well, folks, I'm going to be away for a little while again... and I'll be out of wifi and cellphone range (for those of you who know my general attitude about technology, you can probably imagine what a respite this will be for me).  I'll be back with a new post on Monday, August 15.  See you in a few days!

*****************************************

Science is amazing.

I know, I know, I say that every other day.  But there are times when I read the science news and am completely overwhelmed by how cool it all is, and am frankly astonished by our ability to parse the way the universe works.

The most recent research that provoked that reaction is a paper that appeared in Nature this week entitled, "Natural Speech Reveals the Semantic Maps that Tile Human Cerebral Cortex," by Alexander G. Huth, Wendy A. de Heer, Thomas L. Griffiths, Frédéric E. Theunissen, and Jack L. Gallant.  And what this research has done is something I honestly didn't think was possible -- to create a "brain atlas" that maps how words are organized in the cerebrum.

[image courtesy of the Wikimedia Commons]

The scientists did this by having subjects in an fMRI machine listen to The Moth Radio Hour, a compelling storytelling program that the researchers thought would be riveting enough to keep people's interest and their minds from wandering.  And while they were listening, the fMRI mapped out which words and groups of words triggered responses in tens of thousands of spots all over the cerebral cortex.

"Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words," said study author Gallant, a neuroscientist at the University of California, Berkeley.  As science writer Ian Sample of The Guardian put it:
The atlas shows how words and related terms exercise the same regions of the brain.  For example, on the left-hand side of the brain, above the ear, is one of the tiny regions that represents the word "victim."  The same region responds to "killed," "convicted," "murdered," and "confessed."  On the brain's right-hand side, near the top of the head, is one of the brain spots activated by family terms: "wife," "husband," "children," "parents."
Further, as many words have more than one definition, the researchers were able to map how context influences meaning and changes the site of brain activation.  The word "top," for example, can mean a child's toy, a woman's shirt, or can be a relational word that describes position.

The study's authors write:
We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals.  We then use a novel generative model to create a detailed semantic atlas.  Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area.  This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.
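The "data-driven methods" the authors mention are, at their core, regression models: describe each word the listener hears by a set of semantic features, model each spot on the cortex as a weighted combination of those features, and then label each spot by whichever features best predict its activity.  Here's a deliberately tiny sketch of that idea -- my own toy illustration with fake data and made-up feature names, not the authors' actual pipeline, which used far richer word representations and tens of thousands of real voxels.

```python
import numpy as np

# Toy illustration of a voxel-wise "encoding model" (not the authors' code):
# every heard word gets a handful of hypothetical semantic features, and each
# simulated voxel's response is modeled as a weighted sum of those features.
rng = np.random.default_rng(0)

features = ["social", "violence", "place"]             # made-up semantic domains
words = {                                               # fake feature vectors
    "wife":     [1.0, 0.0, 0.1],
    "murdered": [0.2, 1.0, 0.0],
    "beach":    [0.0, 0.1, 1.0],
    "parents":  [0.9, 0.0, 0.2],
}

X = np.array(list(words.values()))                      # words x features (4 x 3)
true_weights = rng.normal(size=(3, 5))                  # features x voxels (simulated "brain")
Y = X @ true_weights + 0.05 * rng.normal(size=(4, 5))   # noisy simulated voxel responses

# Ridge regression per voxel: W = (X'X + lambda*I)^(-1) X'Y
lam = 0.1
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# The "atlas": label each voxel with the semantic domain it weights most heavily
for voxel in range(W.shape[1]):
    domain = features[int(np.argmax(np.abs(W[:, voxel])))]
    print(f"voxel {voxel}: responds most strongly to '{domain}' words")
```

Scale that toy up to tens of thousands of words' worth of features and the whole cortical surface, and you have the flavor of what Huth and colleagues actually did.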
The research is groundbreaking.  Lorraine Tyler, cognitive neuroscientist and head of the Centre for Speech, Language and the Brain at Cambridge University, described it as "a tour de force" -- a phrase scientists don't use lightly.  There is already talk of using the research to allow people who are unable to speak for reasons of illness or injury, but whose other cognitive processes are undamaged, to communicate with speech-production software via a brain/computer interface.  What other applications might come up are mind-bending even to consider.  Uri Hasson, a neuroscientist at Princeton, said, "There are so many implications... we are barely touching the surface."

So once again, it's science for the win.  It's heartening to think, in this age where I'm often afraid to open up the newspaper for fear of finding out what new and unusual ways we've come up with to be horrible to one another, that we are capable of elegant and beautiful research that elucidates how our own minds work.  As Carl Sagan put it, "We are a way for the cosmos to know itself."
