Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Tuesday, January 23, 2024

Never seen it before

Ever heard of the opposite of déjà vu -- jamais vu?

This may sound like it's the setup for some sort of abstruse bilingual joke, but it's not.  Déjà vu ("already seen" in French) is, as you undoubtedly know, the sensation that something you're experiencing has happened exactly that way before even though you're certain it can't have (a phenomenon, by the way, which has yet to be fully explained, although a suggestive study out of Colorado State University five years ago gave us some interesting clues about it).  Jamais vu ("never seen") is indeed the opposite: the eerie sense that something completely familiar is unfamiliar, uncertain, or simply incorrect.

One of the most common forms of jamais vu is an experience a lot of us have had: looking at a word and convincing ourselves it's misspelled.  It can happen even with simple, ridiculously common words.  I remember being a teenager, working on a school assignment, and staring at the word "were" for what seemed like ages because it suddenly looked wrong.  The same thing can happen with music -- skilled musicians can reach a point in a piece they've practiced over and over, and suddenly it feels unfamiliar.  Less common, but even more unsettling, are reports of people looking at the faces of family and friends and having the overwhelming sensation that they've never seen them before.

The emphasis here is on "looks" and "feels" and "sensation."  This seems to be not a cognitive issue but a sensory-emotional one; whenever I've had jamais vu over the spelling or definition of a word and looked it up, what I'd written almost always turned out to be correct, even though it felt wrong.  The people who had the sense that their loved ones' faces were somehow unfamiliar still knew their names and relationships, so their cognitive understanding of who those people were was undiminished; it was the "gut feeling" that was all wrong.

[Image courtesy of Michel Royon / Wikimedia Commons, Brain memory, CC0 1.0]

The reason the subject comes up is that a team led by Chris J. A. Moulin of the Université Grenoble Alpes has taken a preliminary look at the strange phenomenon of jamais vu, and their results were the subject of a paper in the journal Memory.  Their research started with a simple question: can jamais vu be induced?  The answer was yes, and by a simple protocol -- repeat something often enough, and it starts to look strange.

The researchers took familiar words like "door" and less familiar ones like "sward," and asked volunteers to write them repeatedly until they wanted to stop.  They were told they could stop for whatever reason they wanted -- tired hand, boredom, feeling peculiar, whatever -- but to be aware of why they stopped.  It turned out that by far the most common reason for stopping was "feeling strange," cited as the cause by seventy percent of the volunteers.  The effect was more pronounced with common words than uncommon ones, as if we already expect uncommon words to look a little odd, so the strangeness doesn't register.

It even happened with the most common word in the English language -- "the."  It only took 27 repetitions, on average, for people to halt.  One volunteer said, "[Words] lose their meaning the more you look at them."  Another, even more interestingly, said, "It doesn't seem right.  It almost looks like it's not really a word, but someone's tricked me into thinking it is."

The researchers believe that jamais vu isn't just some kind of psychological fluke.  It may serve a purpose in jolting us when our cognitive processes are going onto autopilot -- as they can when we're asked to do a repetitive task too many times.  That feeling of strangeness brings us back to a state of high alertness, where we're paying attention to what we're doing, even if the downside is that it sometimes makes us think we've made mistakes when we haven't.

"Jamais vu is a signal to you that something has become too automatic, too fluent, too repetitive," the authors write.  "It helps us 'snap out' of our current processing, and the feeling of unreality is in fact a reality check.  It makes sense that this has to happen.  Our cognitive systems must stay flexible, allowing us to direct our attention to wherever is needed rather than getting lost in repetitive tasks for too long."

So a sense of peculiarity when we're doing ordinary stuff might actually have an adaptive benefit.  Good to know, because it's really unsettling when it happens.

But for what it's worth, I still don't think "were" should be spelled like that.

****************************************



Saturday, December 9, 2023

The honey hunters

One of the things I learned from 32 years of teaching biology is that many non-human animals are way smarter than we give them credit for -- along with its corollary: that we humans are not as far separated from the rest of the natural world as many of us would like to think.

A charming piece of research in Science this week illustrates this point brilliantly.  It's about a species of African bird, the Greater Honeyguide (its scientific name, which I swear I'm not making up, is Indicator indicator).  It's found in open woodland in most of sub-Saharan Africa, and has a very specialized diet -- it lives on bee eggs, larvae, and wax (it's one of the few known animals that can digest wax).

Illustration of a Greater Honeyguide by Nicolas Huet (1838) [Image is in the Public Domain]

Because of its diet, local residents have developed a mutualistic relationship with honeyguides -- the relationship that gives the birds their common name.  People living in the region listen for the bird's call and then follow it to the bees' nest it has found.  The people tear open the nest and take the honey -- and the bird gets the larvae and the wax.  Many cultures within the honeyguides' range have developed specific calls to attract the birds when they're ready to go on a honey hunt.

The study, led by ecologist Claire Spottiswoode of the University of Cambridge, looked at the fact that honeyguides seem to learn the specific calls used by the people they live near.  Initially, it was uncertain if the people had figured out what the birds responded to, or if the reverse was true and the birds had learned what noises the people made.  So she and her team decided to test it; they used recordings of individuals from two cultures that are known to use honeyguides, the Hadza of Tanzania and the Yao of Malawi and Mozambique.  The Hadza employ a complex series of whistles to summon their helpers, while the Yao make a "brrr-huh" sound.

Both signals work just fine, but only in particular regions.  When a recording of the Hadza signal is played in Malawi, or a recording of the Yao signal is played in Tanzania, the birds don't respond.  The birds have evidently learned to recognize the specific calls of their partners in the region where they live -- and don't "speak the language" used elsewhere.

Spottiswoode's team also found two places where the symbiotic relationship is falling apart.  In more urban areas, where commercial sugar is widely available, fewer people engage in honey hunting, so the birds have decided they're better off working as free agents.  Even more interesting, in some areas of Mozambique, the Yao discovered that if they destroy the wax and the rest of the hive, the honeyguides stay hungry and go looking for other nests.  But... the birds are learning that their human partners are stiffing them, and they're becoming less likely to respond when called -- so the human honey hunters are having less overall success.

So even birds can recognize when they're getting a raw deal, and put a stop to it.

The more we find out about the other life forms with which we share the planet, the more commonality we find.  Everything in the natural world exists on a continuum, from our physiology and our genetics to characteristics many thought of as solely human traits, like emotion, empathy, and intelligence.

So be careful when you throw around terms like "bird-brain" -- birds aren't as far off from us as you might like to believe.

****************************************



Tuesday, December 28, 2021

Cyborg games

I hate computer games.

Now, don't get all up in arms.  I'm not saying you can't love them and want to spend all available waking hours playing them.  This has nothing to do with moralizing about productive use of time.  For me, computer games are the opposite of relaxing and entertaining, particularly the ones where speed is required.  Even simple ones like Tetris get me so wound up I want to scream.  I still recall vividly my one and only time playing Angry Birds, because I got way angrier than the birds were.  The third time I flew my Bird head-first into a steel pipe, I just about had to be physically restrained from throwing my computer out of the window.

I realize this is an admission of a mild psychiatric disorder.  It's just a game, nothing to take seriously, certainly nothing to get agitated about, and so forth ad nauseam.  But it's a purely spontaneous reaction that I seem to have zero control over.  The result is that if I had to choose between spending an hour playing Super Mario Brothers and having my prostate examined by Edward Scissorhands, I'd have to think about it.

All of this comes up because a friend sent me a preprint of a new scientific paper in which some researchers apparently taught an "organoid" -- a small, organ-like structure made of cultured brain cells -- how to play Pong.


Here's how the authors, a team led by Brett Kagan of Cortical Labs of Melbourne, Australia, describe what they did:
Integrating neurons into digital systems to leverage their innate intelligence may enable performance infeasible with silicon alone, along with providing insight into the cellular origin of intelligence.  We developed DishBrain, a system which exhibits natural intelligence by harnessing the inherent adaptive computation of neurons in a structured environment.  In vitro neural networks from human or rodent origins, are integrated with in silico computing via high-density multielectrode array.  Through electrophysiological stimulation and recording, cultures were embedded in a simulated game-world, mimicking the arcade game ‘Pong’.  Applying a previously untestable theory of active inference via the Free Energy Principle, we found that learning was apparent within five minutes of real-time gameplay, not observed in control conditions.  Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time.  Cultures display the ability to self-organise in a goal-directed manner in response to sparse sensory information about the consequences of their actions.

"We think it's fair to call them cyborg brains," Kagan said, in an interview with New Scientist.

What's a little humbling is that these organoids probably play Pong better than I do.  And I doubt that after playing Pong for five minutes, they want to smash their Petri dish against the wall, which is how I'd react.

It does make me wonder where all this is going, however.  We have a clump of cultured brain cells integrated into electronic circuitry (thus the appellation "cyborg brains") that can learn, and get progressively better at, a game.  Okay, it may seem like a silly accomplishment; an organoid playing Pong, so what?  But keep in mind that this is only a proof-of-concept.  If the process works -- and it certainly seems like it did -- there's no reason they can't ramp up the sophistication of the task until they have something that is truly a complex synthesis of organic brain and electronic brain.

Just as long as we don't take the research too far.  Fellow Doctor Who fans know exactly where I'm going with this.


In this case, maybe the outcome would be that the Cybermen would do nothing worse than forcing humans to play hours and hours of Pong with them.  And I guess that's better than their wanting to assimilate us all.  

Well, for most of us, at least.  Once again, given the choice, I'd have to think about it.

My question, though, is what'd be next?  Daleks playing Laser Tag?  The Silence playing charades?  Weeping Angels playing hide-and-go-seek?  Seems like the possibilities are endless.

In any case, if it passes peer review, it's a pretty stupendous achievement, and it'll be interesting to see where the research leads.  We're probably still a long way from anything useful, but as I've learned from years of watching science news, sometimes those leaps can come when you least expect them -- and span chasms you thought would never be crossed.

 **********************************

Neil deGrasse Tyson has become deservedly famous for his efforts to bring the latest findings of astronomers and astrophysicists to laypeople.  Not only has he given hundreds of public talks on everything from the Big Bang to UFOs, but a couple of years ago he also launched (and hosted) an updated reboot of Carl Sagan's wildly successful 1980 series Cosmos.

He has also communicated his vision through his writing, and this week's Skeptophilia book-of-the-week is his 2019 Letters From an Astrophysicist.  A public figure like Tyson gets inundated with correspondence, and his drive to teach and inspire has impelled him to answer many of these letters personally (however arduous that may seem to those of us who struggle to keep up with a dozen emails!).  In Letters, he has selected 101 of his most intriguing pieces of correspondence, along with his answers to each -- in the process creating a book that is a testimony to his intelligence, his sense of humor, his passion as a scientist, and his commitment to inquiry.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]



Tuesday, November 17, 2020

Mental maps

Picture a place you know well.  Your house, your apartment, a park, a church, a school.  You can probably imagine it, remember what it's like to wander around in it, maybe even visualize it to a high level of detail.

Now, let's change the perspective to one you probably have never taken.  Would you be able to draw a map of the layout -- as seen from above?  An aerial view?

Here's a harder task.  In a large room, there are various obstacles, all fairly big and obvious.  Tables, chairs, sofas, the usual things you might find in a living room or den.  You're standing in one corner, and from that perspective are allowed to study it for as long as you like.

Once you're done, could you walk from that corner to the diagonally opposite one without running into anything -- while blindfolded?

Both of these tasks require the use of a part of your brain called the hippocampus.  The name of the structure comes from the Greek word ἱππόκαμπος -- literally, "seahorse" -- because of its shape.  The hippocampus has a role in memory formation, conflict avoidance... and spatial navigation.

Like the other structures in the brain, the hippocampus seems to be better developed in some people than in others.  My wife, for example, has something I can only describe as an internal GPS.  To my knowledge, she has never been lost.  When we took a trip to Spain and Portugal a few years ago, we rented a car in Madrid and she studied a map -- once.  After that, she navigated us all over the Iberian Peninsula with only very infrequent checks to make sure we were taking the correct turns, which, because of her navigational skills, we always were.

I, on the other hand, get lost walking around a tree.

[Image licensed under the Creative Commons Edward Betts, Bloomsbury - map 1, CC BY-SA 2.0]

The topic comes up because of a paper last week in Cell that showed something absolutely fascinating.  It's called "Targeted Activation of Hippocampal Place Cells Drives Memory-Guided Spatial Behavior," and was written by a team led by Nick T. M. Robinson of University College London.  But to understand what they did, you have to know about something called optogenetics.

Back in 2002, a pair of geneticists, Boris Zemelman and Gero Miesenböck, developed an amazing technique.  They genetically modified mammalian nerve tissue to express a protein called rhodopsin, which is one of the light-sensitive chemicals in the retina of your eye.  By hitching the rhodopsin to ion channels in the neural membrane -- the gateways that control whether a neuron fires -- they created neurons that could literally be turned on and off using a beam of light.

Because the brain is encased in bone, animals that express this gene don't respond any time the lights are on; you have to shine light directly on the neurons that contain rhodopsin.  This involves inserting fiber optics into the brain of the animal -- but once you do that, you have a set of neurons that fire when you shine a light down the fibers.  Result: remote-control mice.
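
If it helps to see the idea in code rather than in neurons, here's a toy sketch of my own -- the function name, the millisecond time step, and every constant are made up for illustration, not taken from any of the papers discussed here -- of what "light-gated" means: a bare-bones leaky integrate-and-fire neuron that only receives driving current while the hypothetical fiber-optic light is on.

def simulate(light_schedule, dt=1.0, tau=20.0, threshold=1.0, drive=0.08):
    # light_schedule: one True/False per millisecond -- is the fiber optic on?
    v, spike_times = 0.0, []
    for t, light_on in enumerate(light_schedule):
        current = drive if light_on else 0.0   # rhodopsin-gated channels pass current only in light
        v += dt * (-v / tau + current)         # leaky integration of membrane voltage
        if v >= threshold:                     # fire and reset
            spike_times.append(t)
            v = 0.0
    return spike_times

# 50 ms light on, 50 ms off, 50 ms on again: spikes appear only in the lit windows.
schedule = [True] * 50 + [False] * 50 + [True] * 50
print(simulate(schedule))

Cartoonish, but it captures the trick: the light, not the surrounding circuitry, decides when this cell fires.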

Okay, if you think that's cool, wait till you hear what Robinson et al. did.

So you create some transgenic mice that express rhodopsin in the hippocampus.  Fit them out with fiber optics.  Then let the mice learn how to run a maze for a reward, in this case sugar water in a feeder bottle.  Record which hippocampal neurons are firing when they learn -- and especially when they recall -- the layout of the maze.

Then take the same mice, put them in a different maze.  But switch the lights on in their brain to activate the neurons you saw firing when they were recalling the map of the first maze.

The result is that the mice picture the first maze, and try to run that pattern even though they can see they're now in a different maze.  The light activation switched on their memory of the first maze's layout -- a memory that then overrode all the other sensory information they had access to.

It's as if you moved from Tokyo to London, and then tried to use your knowledge of the roads of Tokyo to find your way from St. Paul's Cathedral to the Victoria & Albert Museum.

This is pretty astonishing from a number of standpoints.  First, the idea that you can switch a memory on and off like that is somewhere between fascinating and freaky.  Second, that the neural firing pattern is so specific -- that pattern corresponds to that map, and no other.  And third, that the activation of the map made the mice doubt the information coming from their own eyes.  

So once again, we have evidence of how plastic our brains are, and how easy they are to fool.  What you're experiencing right now is being expressed in your brain as a series of neural firings; in a way, the neural firing pattern is the experience.  If you change the pattern artificially, you experience something different.

More disturbing still is that our sense of self is also deeply tied to our neural links (some would say that our sense of self is nothing more than neural links; to me, the jury's still out on where consciousness comes from, so I'm hesitant to go that far).  So not only what you perceive, but who you are can change if you alter the pattern of neural activation.

We're remarkable, complex, amazing, and fragile beasts, aren't we?

So that's today's contribution from the Not Science Fiction department.  I'm wondering if I might be able to get one of those fiber optics things to activate my hippocampus.  Sounds pretty extreme, but I am really tired of getting lost all the time.  There are trees everywhere around here.

*****************************************

This week's Skeptophilia book-of-the-week is one that has raised a controversy in the scientific world: Ancient Bones: Unearthing the Astonishing New Story of How We Became Human, by Madeleine Böhme, Rüdiger Braun, and Florian Breier.

It tells the story of a stupendous discovery -- twelve-million-year-old hominin fossils, of a new species christened Danuvius guggenmosi.  The astonishing thing about these fossils is where they were found.  Not in Africa, where previous models had confined all early hominins, but in Germany.

The discovery of Danuvius complicated our own ancestry, and raised a deep and difficult-to-answer question: when and how did we become human?  It's clear the answer isn't as simple as we thought when the first hominin fossils were uncovered in Olduvai Gorge, when it was believed that if you took all of our millennia of migrations all over the globe and ran them backwards, they would all converge on the East African Rift Valley.  That neat solution has come into serious question, and the truth seems to be that, like most evolutionary lineages, hominins included multiple branches that moved around, interbred for a while, then went their separate ways, either to thrive or to die out.  The real story is considerably more complicated and fascinating than we'd first thought, and Danuvius has added another layer to that complexity, bringing up as many questions as it answers.

Ancient Bones is a fascinating read for anyone interested in anthropology, paleontology, or evolutionary biology.  It is sure to be the basis of scientific discussion for the foreseeable future, and to spur more searches for our relatives -- including in places where we didn't think they'd gone.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Wednesday, June 3, 2020

Keeping count

I'm fortunate to have been raised in a bilingual home.  My mother's first language was French, and my dad (son of a father of French and a mother of Scottish descent) was also fluent, although much more comfortable in English.  The result is that my parents spoke French in front of me (and also with our older relatives) when they didn't want me to understand, which was a hell of an incentive to learn listening comprehension, although -- as I found out later -- a bit of a problem when you're actually called upon to speak it yourself.

My Uncle Sidney, my mother's brother, didn't help matters much, because he was extremely fluent in the art of the French swear word.  He taught me a good many really creative expressions when I was still quite young, but I found out pretty quickly that when Uncle Sidney said, "Go ask your mother what ____ means," it was better to remain in ignorance than to incite my prim and prudish mom's ire.

Eventually, despite the impediments, I learned to speak French fairly well.  I distinctly recall, though, how baffled I was when I first learned the French counting system.

Even though I grew up in a Francophone household, it struck me as weird right from the get-go.  One through ten, no problem.  Like English, eleven and twelve have their own special names (onze and douze).  But... so do thirteen through sixteen.  Then seventeen, eighteen, and nineteen translate, respectively, to ten-seven, ten-eight, and ten-nine.

Things don't really go off the rails until you hit seventy.  Sixty is soixante; seventy is soixante-dix (sixty-ten).  Then we reach eighty -- quatre-vingt -- literally, "four-twenty."

For what it's worth, ninety-seven is quatre-vingt dix-sept -- four-twenty ten-seven.

I read French pretty well, but when I hit a number with more than two digits, I still have to stop and do some mental arithmetic to figure it out.
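
If you want that mental arithmetic spelled out, here's a minimal Python sketch of my own (not from any study) that decomposes the numbers from sixty through ninety-nine into the building blocks French actually uses.  It deliberately ignores hyphenation and the "et un" special cases, so it's a cartoon of the system rather than a proper number-speller.

UNITS = ["", "un", "deux", "trois", "quatre", "cinq", "six", "sept",
         "huit", "neuf", "dix", "onze", "douze", "treize", "quatorze",
         "quinze", "seize", "dix-sept", "dix-huit", "dix-neuf"]

def french_60_to_99(n):
    # Decompose n into the bases French actually uses: soixante (60) for
    # 60-79, quatre-vingt (4 x 20) for 80-99, plus one of the 0-19 words.
    if not 60 <= n <= 99:
        raise ValueError("this sketch only covers 60-99")
    base, word = (60, "soixante") if n < 80 else (80, "quatre-vingt")
    return f"{word} {UNITS[n - base]}".strip()

for n in (71, 80, 92, 97):
    print(n, "->", french_60_to_99(n))
# 97 -> quatre-vingt dix-sept: four-twenty ten-seven

Note that there's no word for seventy, eighty, or ninety in its own right -- everything above sixty-nine is built by adding a zero-to-nineteen word onto sixty or onto four-twenties.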

Turns out I'm not alone.  A study by Iro Xenidou-Dervou of Vrije Universiteit Amsterdam and her colleagues found that even when you control for other factors, the language a child speaks (and thus the counting system the child learns) has an effect on how easily that child learns arithmetic.  The more closely the counting system corresponds to a simple, regular base-ten system, the better the child is at learning math.

On the extremely logical side, we have Chinese.  In Mandarin, ninety-two is jiǔ shí èr -- "nine-ten-two."  We've already looked at French (where it would be "four-twenty-twelve").  But for my money, the winner in the what-the-fuck-were-you-thinking department is Danish, where ninety-two is tooghalvfems -- "two and halvfems," halvfems (ninety) being an abbreviation of the older Danish halvfemsindstyve, or "four and a half times twenty."

And don't even get me started on Roman numerals. [Image licensed under the Creative Commons Дмитрий Окольников, Roman numerals!, CC BY-SA 4.0]

"The fact that they were the same in every other aspect, apart from the condition where two digits showed up, shows you that it's the language that is making the difference," said study lead author Xenidou-Dervou, in an interview with BBC.  "The effects are small, and yet this is numeracy at its most basic, just estimating a number on a line.  As adults, we're doing very complicated tasks in our daily lives, and so even small difficulties caused by the number naming system could potentially be an additive hurdle to everyday mathematical skills."

All of this brings back a subject that's fascinated me since my days as a graduate student in linguistics: the Sapir-Whorf hypothesis.  This is the idea that the language we grow up with profoundly influences our brain wiring -- so not only does our cognitive development influence our language learning, but our language learning also influences our cognitive development.  I found a particularly cool example of this in the brilliant book The Last Speakers, by K. David Harrison, an attempt to chronicle some of the world's most endangered languages.  When Harrison was traveling with a tribe in Siberia, he was intrigued to find that they had no words for "right," "left," "behind," and "in front of."  Everything was described in terms of the cardinal directions.  So right now, my computer isn't in front of me; it's south of me.  (They also have direction-related words meaning "upstream" and "downstream.")

Anyhow, Harrison was trying to talk to one of the tribal elders about why that was, and all he got from him for a time was frank bafflement.  It was as if he couldn't even quite understand what Harrison was driving at.  Then, all of a sudden, he burst into gales of laughter.  "Really?" he said to Harrison.  "That's how you see the world?  So that tree is in one place, but if you turn around, it's now in a different place?  Everything in the world is relative to the position of your body, so when you move, the entire universe shifts around you?  What an arrogant people you must be!"

I know that these days Sapir-Whorf is rather out of vogue with linguists, but studies like the one by Xenidou-Dervou et al. make me realize how deeply woven together our cognition and our language are.  We create our world with our words, and the words we learn shape the reality we are able to see.

Including whether we say "ninety-two" or -- in old-system Welsh -- dau ar ddeg a phedwar ugain.

Literally, "two on ten and four twenties."

************************************

This week's Skeptophilia book recommendation is a fun one -- George Zaidan's Ingredients: The Strange Chemistry of What We Put In Us and On Us.  Springboarding off the loony recommendations that have been rampant in the last few years -- fad diets, alarmist warnings about everything from vaccines to sunscreen, the pros and cons of processed food, substances that seem to be good for us one week and bad for us the next -- Zaidan goes through the reality behind the hype, taking apart the claims in a way that is both factually accurate and laugh-out-loud funny.

And high time.  Bogus health claims, fueled by such sites as Natural News, are potentially dangerous.  Zaidan's book holds a lens up to the chemicals we ingest, inhale, and put on our skin -- and will help you sort the fact from the fiction.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]




Tuesday, January 26, 2016

Memory boost

There's one incorrect claim I find coming up in my classes more than any other, and that's the old idea that "humans only use 10% of their brain."  Or 5%.  Or 2%.  Often bolstered by the additional claim that Einstein is the one who said it.  Or Stephen Hawking.  Or Nikola Tesla.

Or maybe all three of 'em at once, I dunno.

The problem is, there's no truth to any of it, and no evidence that the claim originated with anyone remotely famous.  That at present we understand only 10% of what the brain is doing -- that I can believe.  That we're using less than 100% of our brain at any given time -- of course.

But the idea that evolution provided us with these gigantic processing units -- which, according to a 2002 study by Marcus Raichle and Debra Gusnard, consume 20% of our oxygen and caloric intake -- and that we then only ever access 10% of their power?  Nope, not buying it.  Such a waste of resources would be a significant evolutionary disadvantage, and would have weeded out the low-brain-use individuals long ago.  (Which gives me hope that we might actually escape ending up with a human population straight out of the movie Idiocracy.)

And speaking of movies, the 2014 cinematic flop Lucy didn't help matters, featuring as it does a woman who gets poisoned with a synthetic drug that ramps up her brain from its former 10% usage rate to... *gasp*... 100% -- leading to her gaining telekinesis and the ability to "disappear within the space/time continuum."

Whatever the fuck that means.

All urban legends and goofy movies aside, the actual memory capacity of the brain is still the subject of contention in the field of neuroscience.  And for us dilettante science geeks, it's a matter of considerable curiosity.  I know I have often wondered how I can manage to remember the scientific names of obscure plants, the names of distant ancestors, and melodies I heard fifteen years ago, but I routinely have to return to rooms two or three times because I keep forgetting what I went there for.

So I found it exciting to read about a study published last week in eLife, by Terry Sejnowski (of the Salk Institute for Biological Studies), Kristen Harris (of the University of Texas/Austin), et al., entitled "Nanoconnectomic Upper Bound on the Variability of Synaptic Plasticity."  Put more simply, what the team found was that human memory capacity is ten times greater than previously estimated.

In computer terms, our storage ability amounts to one petabyte.  And put even more simply for non-computer types, this translates roughly into "a shitload of storage."
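
Just to see how you get into petabyte territory, here's a back-of-the-envelope calculation in Python.  The inputs are my assumptions, not the paper's own arithmetic: somewhere between 10^14 and 10^15 synapses in a human brain, and the widely reported figure of roughly 4.7 bits of storable information per synapse (log base 2 of about 26 distinguishable synapse sizes).

import math

bits_per_synapse = math.log2(26)              # ~4.7 bits per synapse; assumed, see above
for n_synapses in (1e14, 1e15):               # assumed range for total synapse count
    petabytes = n_synapses * bits_per_synapse / 8 / 1e15
    print(f"{n_synapses:.0e} synapses -> {petabytes:.2f} PB")
# 1e+14 synapses -> 0.06 PB
# 1e+15 synapses -> 0.59 PB -- i.e. right in the petabyte ballpark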

"This is a real bombshell in the field of neuroscience," Sejnowski said. "We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power.  Our new measurements of the brain's memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web."

The discovery hinges on the fact that our synapses come in a whole hierarchy of sizes, and the brain ramps individual synapses up or down that scale as needed -- a key part of our neuroplasticity, our ability to learn.

"We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses," said team member Tom Bartol.  "One answer is in the constant adjustment of synapses, averaging out their success and failure rates over time... For the smallest synapses, about 1,500 events cause a change in their size/ability and for the largest synapses, only a couple hundred signaling events cause a change.  This means that every 2 or 20 minutes, your synapses are going up or down to the next size.  The synapses are adjusting themselves according to the signals they receive."

"The implications of what we found are far-reaching," Sejnowski added. "Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us."

And the most mind-blowing thing of all is that all of this precision and storage capacity runs on a power of about 20 watts -- less than most light bulbs.
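
That 20-watt figure is easy to sanity-check yourself.  Assuming a typical 2,000-kilocalorie daily diet and the roughly 20% share of caloric intake cited above (both round numbers of mine, not from the study):

kcal_per_day = 2000                  # assumed typical daily intake
brain_share = 0.20                   # ~20% of caloric intake, as cited above
joules_per_day = kcal_per_day * brain_share * 4184   # 1 kcal = 4184 joules
watts = joules_per_day / (24 * 60 * 60)              # joules per second
print(f"{watts:.1f} W")              # ~19.4 W -- about 20 watts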

Consider the possibility of applying what scientists have learned about the brain to modeling neural nets in computers.  It brings us one step closer to something neuroscientists have speculated about for years -- the possibility of emulating the human mind in a machine.

"This trick of the brain absolutely points to a way to design better computers," Sejnowski said.  "Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains."

Which is thrilling and a little scary, considering what happened when HAL 9000 in 2001: A Space Odyssey basically went batshit crazy halfway through the movie.


That's a risk that I, for one, am willing to take, even if it means that I might end up getting turned into a Giant Space Baby.

But I digress.

In any case, the whole thing is pretty exciting, and it's reassuring to know that the memory capacity of my brain is way bigger than I thought it was.  Although it still leaves open the question of why, with a petabyte of storage, I still can't remember where I put my cellphone.