Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.
Showing posts with label jargon.

Saturday, April 10, 2021

Bullshitometry

Having spent 32 years as a high school teacher, I developed a pretty sensitive bullshit detector.

It was a necessary skill.  Kids who have not taken the time to understand the topic being studied are notorious for bullshitting answers on essay questions, often padding their writing with vague but sciency-sounding words.  An example is the following, which is verbatim (near as I can recall) from an essay on how photosynthesis is, and is not, the reverse of aerobic cellular respiration:
From analyzing photosynthesis and the process of aerobic cellular respiration, you can see that certain features are reversed between the two reactions and certain things are not.  Aerobic respiration has the Krebs Cycle and photosynthesis has the Calvin Cycle, which are also opposites in some senses and not in others.  Therefore, the steps are not the same.  So if you ran them in reverse, those would not be the same, either.
I returned this essay with one comment: "What does this even mean?"  The student in question at least had the gumption to admit he'd gotten caught.  He grinned sheepishly and said, "You figured out that I had no idea what I was talking about, then?"  I said, "Yup."  He said, "Guess I better study next time."

I said, "Yup."

Developing a sensitive nose for bullshit isn't critical only for teachers; there's a lot of it out there, and not just in academic circles.  Writer Scott Berkun addressed this in his wonderful piece, "How to Detect Bullshit," which gives some concrete suggestions about how to figure out what is USDA grade-A prime beef, and what is the cow's other, less pleasant output.  Among the best are the simple questions "How do you know that?", "Who else has this opinion?", and "What is the counter-argument?"

You say your research will revolutionize the field?

Says who?  Based on what evidence?

He also says to be very careful whenever anyone says, "Studies show," because usually if studies did show what the writer claims, (s)he'd be specific about what those studies were.  Vague statements like "studies show" are often a red flag that the claim doesn't have much in its favor.

Remember Donald Trump's "People are telling me" and "I've heard from reliable sources" and "A person came up to me at my last rally and said"?

Those mean, "I just now pulled this claim out of my ass."

Using ten-dollar buzzwords is also a good way to cover up the fact that you're sailing close to the wind.  Berkun recommends asking, "Can you explain this in simpler terms?"  If the speaker can't give you a good idea of what (s)he's talking about without resorting to jargon, the fancy verbiage is fairly likely to be there to mislead.

This is the idea behind BlaBlaMeter, a website I discovered a while back, into which you can cut-and-paste text and get a score (from 0 to 1.0) for how much bullshit it contains.  I'm not sure what the algorithm does besides detecting vague filler words, but it's a clever idea.  It'd certainly be nice to have a rigorous way to detect when you're being bamboozled with words.
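I have no idea what BlaBlaMeter actually does under the hood, but the basic idea -- scoring text by how dense it is with vague fillers and buzzwords -- is simple enough to sketch.  Here's a minimal toy version in Python; the word list and the scaling factor are entirely my own invention, not BlaBlaMeter's:

import re

# A purely illustrative list of vague fillers and buzzwords -- NOT
# BlaBlaMeter's actual lexicon, which (as far as I know) isn't public.
FILLER_WORDS = {
    "paradigm", "synergy", "holistic", "leverage", "robust", "dynamic",
    "frequency", "energy", "basically", "essentially", "obviously",
    "arguably", "various", "numerous", "significant", "impactful",
}

def bullshit_score(text: str) -> float:
    """Return a rough 0.0-to-1.0 score: the fraction of words that are
    vague fillers, scaled so that heavy filler use saturates at 1.0."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    fillers = sum(1 for w in words if w in FILLER_WORDS)
    # If one word in ten is a filler, call it maximally suspicious.
    return min(1.0, 10 * fillers / len(words))

# Prints 1.0 for this deliberately awful sentence.
print(bullshit_score("Essentially, our holistic paradigm leverages "
                     "synergy across various robust energy frequencies."))

A real scorer would obviously need a far bigger lexicon, and probably something smarter than a straight word-count ratio, but even a toy like this flags the kind of prose my student was producing.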



The importance of being able to detect fancy-sounding nonsense was highlighted by the acceptance of a paper for the International Conference on Atomic and Nuclear Physics -- when it turned out that the paper had been created by hitting iOS Autocomplete over and over.  The paper, written (sort of) by Christoph Bartneck, associate professor at the Human Interface Technology Laboratory at the University of Canterbury in New Zealand, was titled "Atomic Energy Will Have Been Made Available to a Single Source" (the title was also generated by autocomplete), and contained passages such as:
The atoms of a better universe will have the right for the same as you are the way we shall have to be a great place for a great time to enjoy the day you are a wonderful person to your great time to take the fun and take a great time and enjoy the great day you will be a wonderful time for your parents and kids.
Which, of course, makes no sense at all.  In this case, I wonder if the reviewers simply didn't bother to read the paper -- or read a few sample sentences and found that they (unlike the above) made reasonable sense, and said, "Looks fine to me."

I'd like to think that, even given my lack of expertise in atomic and nuclear physics, I'd have figured out that what I was looking at was ridiculous.

On a more serious note, there's a much more pressing reason we all need to arm ourselves against bullshit: so much of what's on the internet is outright false.  A team of political fact-checkers hired by Buzzfeed News sifted through claims on politically partisan Facebook pages, and found that on average, a third of the claims made by partisan sites were outright false.  And lest you think one side was better than the other, the study found that both right and left were making a great many unsubstantiated, misleading, or wrong claims.  And we're not talking about fringe-y wingnut sites here; these were sites whose reposts you see on a daily basis if you're on Facebook -- Occupy Democrats, Breitbart, AlterNet, Fox News, The Blaze, The Other 98%, NewsMax, Addicting Info, Right Wing News, and U.S. Uncut.

What this means is that when you see posts from these sites, there is (overall) about a 2/3 chance that what you're seeing is true.  So if you frequent those pages -- or, more importantly, if you're in the habit of clicking "share" on every story that you find mildly appealing -- you damn well better be able to figure out which third is wrong.

The upshot of it is, we all need better bullshit filters.  Given that we are bombarded daily by hundreds of claims from the well-substantiated to the outrageous, it behooves us to find a way to determine which is which.

And, if you're curious, a 275-word passage from this Skeptophilia post was rated by BlaBlaMeter as having a bullshit rating of 0.13, which I find reassuring.  Not bad, considering the topic I was discussing.

**************************************

This week's Skeptophilia book-of-the-week is a bit of a departure from the usual science fare: podcaster and author Rose Eveleth's amazing Flash Forward: An Illustrated Guide to Possible (and Not-So-Possible) Tomorrows.

Eveleth looks at what might happen if twelve things that are currently in the realm of science fiction became real -- a pill that obviates the need for sleep, for example, or a robot that can make art.  She then extrapolates from those to look at how they might change our world, and to consider the ramifications (good and bad) of our suddenly having access to science or technology we currently only dream about.

Eveleth's book is highly entertaining not only for its content, but because it's in graphic novel format -- a number of extremely talented artists, including Matt Lubchansky, Sophie Goldstein, Ben Passmore, and Julia Gfrörer, illustrate her twelve new worlds, literally drawing what we might be facing in the future.  Her conclusions, and the artists' illustrations of them, are brilliant, funny, shocking, and most of all, memorable.

I love her visions even if I'm not sure I'd want to live in some of them.  The book certainly brings home the old adage of "Be careful what you wish for, you may get it."  But as long as they're in the realm of speculative fiction, they're great fun... especially in the hands of Eveleth and her wonderful illustrators.

[Note: if you purchase this book from the image/link below, part of the proceeds goes to support Skeptophilia!]



Friday, September 25, 2020

Neurobabble

Confirming something that people like Deepak Chopra and Dr. Oz figured out years ago, researchers at Villanova University and the University of Oregon have shown that all you have to do to convince people is throw some fancy-sounding pseudoscientific jargon into your argument.

The specific area that Diego Fernandez-Duque, Jessica Evans, Colton Christian, and Sara D. Hodges researched was neurobabble, in particular the likelihood of increasing people's confidence in the correctness of an argument if some bogus brain-based explanation was included. Fernandez-Duque et al. write:
Does the presence of irrelevant neuroscience information make explanations of psychological phenomena more appealing?  Do fMRI pictures further increase that allure?  To help answer these questions, 385 college students in four experiments read brief descriptions of psychological phenomena, each one accompanied by an explanation of varying quality (good vs. circular) and followed by superfluous information of various types.  Ancillary measures assessed participants' analytical thinking, beliefs on dualism and free will, and admiration for different sciences.  In Experiment 1, superfluous neuroscience information increased the judged quality of the argument for both good and bad explanations, whereas accompanying fMRI pictures had no impact above and beyond the neuroscience text, suggesting a bias that is conceptual rather than pictorial.  Superfluous neuroscience information was more alluring than social science information (Experiment 2) and more alluring than information from prestigious “hard sciences” (Experiments 3 and 4).  Analytical thinking did not protect against the neuroscience bias, nor did a belief in dualism or free will.  We conclude that the “allure of neuroscience” bias is conceptual, specific to neuroscience, and not easily accounted for by the prestige of the discipline.  
So this may explain why people so consistently fall for pseudoscience as long as it's couched in seemingly technical terminology.  For example, look at the following, an excerpt from an article in which Deepak Chopra is hawking his latest creation, a meditation-inducing device called "DreamWeaver":
About two years ago I got interested in the idea that you could feed light pulses through the brain with your eyes closed and sound and music at a certain frequency.  Your brain waves would dial into it and then you could dial the instrument down so that you would decrease the brain wave frequency from what it is normally in the waking state.  And then you could slowly dial down the brainwave frequency to what it would be in the dream state, which is called theta, and then you even dial further down into delta.
What the hell does "your brain waves would dial into it" mean?   And I would like to suggest to Fernandez-Duque et al. that their next experiment should have to do with people immediately believing claims if they involve the word "frequency."

[Image licensed under the Creative Commons NascarEd, Sleep Stage N3, CC BY-SA 3.0]

Then we have the following twofer -- an excerpt of an article by Deepak Chopra that appeared on Dr. Oz's website:
Try to eat one of these three foods once a day to protect against Alzheimer’s and memory issues.  
Wheat Germ - The embryo of a wheat plant, wheat germ is loaded with B-complex vitamins that can reduce levels of homocysteine, an amino acid linked to stroke, Alzheimer’s disease and dementia.  Sprinkle wheat germ on cereal and yogurt in the morning, or enjoy it on salads or popcorn with a little butter. 
Black Currents [sic] - These dark berries are jam-packed with antioxidants that help nourish the brain cells surrounding the hippocampus.  The darker in color, the more antioxidants black currents [sic] contain.  These fruits are available fresh when in season, or can be purchased dried or frozen year-round. 
Acorn Squash - This beautiful gold-colored veggie contains high amounts of folic acid, a B-vitamin that improves memory as well as the speed at which the brain processes information.
Whenever I read this sort of thing, I'm not inclined to believe it; I'm more inclined to scream, "Source?"  For example, I looked up the whole black currant claim, and the first few sources waxed rhapsodic about black currants' ability to enhance our brain function.  But then I noticed that said sources were all from the Black Currant Foundation (I didn't even know that existed, did you?) and the website blackcurrant.co.nz.  Scrolling down a bit, I found a post on WebMD that was considerably less enthusiastic, saying that it "may be useful in Alzheimer's" (with no mention of exactly how, nor any citations to support the claim) but that it also can lower blood pressure and slow down blood clotting.

So I suppose that the only way to protect yourself against this kind of nonsense is to learn some actual science, and be willing to read some peer-reviewed papers on the subject -- which includes training yourself to recognize which sources are peer-reviewed and which are not.

But doing all this research myself leaves me feeling like I need some breakfast.  Maybe a wheat germ, black currant, and acorn squash stir-fry.  Can't have too many antioxidants, you know, when your hippocampus is having some frequency problems.

**********************************

Author Mary Roach has a knack for picking intriguing topics.  She's written books on death (Stiff), the afterlife (Spook), sex (Bonk), and war (Grunt), each one brimming with well-researched facts, interviews with experts, and her signature sparkling humor.

In this week's Skeptophilia book-of-the-week, Packing for Mars: The Curious Science of Life in Space, Roach takes us away from the sleek, idealized world of Star Trek and Star Wars, and looks at what it would really be like to take a long voyage from our own planet.  Along the way she looks at the psychological effects of being in a small spacecraft with a few other people for months or years, not to mention such practical concerns as zero-g toilets, how to keep your muscles from atrophying, and whether it would actually be fun to engage in weightless sex.

Roach's books are all wonderful, and Packing for Mars is no exception.  If, like me, you've always had a secret desire to be an astronaut, this book will give you an idea of what you'd be in for on a long interplanetary voyage.

[Note: if you purchase this book using the image/link below, part of the proceeds goes to support Skeptophilia!]


Saturday, October 22, 2016

Bullshitometry

As a teacher, I've developed a pretty sensitive bullshit detector.

It's a necessary skill.  Kids who have not taken the time to understand the topic being studied are notorious for bullshitting answers on essay questions, often padding their writing with vague but sciency-sounding words.  An example is the following, which is verbatim (near as I can recall) from an essay on how photosynthesis is, and is not, the reverse of aerobic cellular respiration:
From analyzing photosynthesis and the process of aerobic cellular respiration, you can see that certain features are reversed between the two reactions and certain things are not.  Aerobic respiration has the Krebs Cycle and photosynthesis has the Calvin Cycle, which are also opposites in some senses and not in others.  Therefore, the steps are not the same.  So if you ran them in reverse, those would not be the same, either.
I returned this essay with one comment:  "What does this even mean?"  The student in question at least had the gumption to admit he'd gotten caught.  He grinned sheepishly and said, "You figured out that I had no idea what I was talking about, then?"  I said, "Yup."  He said, "Guess I better study next time."

I said, "Yup."

Developing a sensitive nose for bullshit isn't critical only for teachers; there's a lot of it out there, and not just in academic circles.  Writer Scott Berkun addressed this in his wonderful piece, "How to Detect Bullshit," which gives some concrete suggestions about how to figure out what is USDA grade-A prime beef, and what is the cow's other, less pleasant output.  Among the best are the simple questions "How do you know that?", "Who else has this opinion?", and "What is the counter-argument?"

You say your research will revolutionize the field?

Says who?  Based on what evidence?

He also says to be very careful whenever anyone says, "Studies show," because usually if studies did show what the writer claims, (s)he'd be specific about what those studies were.  Vague statements like "studies show" are often a red flag that the claim doesn't have much in its favor.

Using ten-dollar buzzwords is also a good way to cover up the fact that you're sailing pretty close to the wind.  Berkun recommends asking, "Can you explain this in simpler terms?"  If the speaker can't give you a good idea of what (s)he's talking about without resorting to jargon, the fancy verbiage is fairly likely to be there to mislead.

This is the idea behind BlaBlaMeter, a website I found out about from a student of mine, into which you can cut-and-paste text and get a score (from 0 to 1.0) for how much bullshit it contains.  I'm not sure what the algorithm does besides detecting vague filler words, but it's a clever idea.  It'd certainly be nice to have a rigorous way to detect when you're being bamboozled with words.


The importance of being able to detect fancy-sounding nonsense was highlighted just this week by the acceptance of a paper for the International Conference on Atomic and Nuclear Physics -- when it turned out that the paper had been created by hitting iOS Autocomplete over and over.  The paper, written (sort of) by Christoph Bartneck, associate professor at the Human Interface Technology Laboratory at the University of Canterbury in New Zealand, was titled "Atomic Energy Will Have Been Made Available to a Single Source" (the title was also generated by autocomplete), and contained passages such as:
The atoms of a better universe will have the right for the same as you are the way we shall have to be a great place for a great time to enjoy the day you are a wonderful person to your great time to take the fun and take a great time and enjoy the great day you will be a wonderful time for your parents and kids.
Which, of course, makes no sense at all.  In this case, I wonder if the reviewers simply didn't bother to read the paper -- or read a few sample sentences and found that they (unlike the above) made reasonable sense, and said, "Looks fine to me."

I'd like to think that, even given my lack of expertise in atomic and nuclear physics, I'd have figured out that what I was looking at was ridiculous.

On a more serious note, there's a much more pressing reason we all need to arm ourselves against bullshit: so much of what's on the internet is outright false.  A team of political fact-checkers hired by Buzzfeed News sifted through claims on politically partisan Facebook pages, and found that on average, a third of the claims made by partisan sites were outright false.  And lest you think one side was better than the other, the study found that both right and left were making a great many unsubstantiated, misleading, or wrong claims.  And we're not talking about fringe-y wingnut sites here; these were sites whose reposts you see on a daily basis if you're on Facebook -- Occupy Democrats, Eagle Rising, Freedom Daily, The Other 98%, Addicting Info, Right Wing News, and U.S. Uncut.

What this means is that when you see posts from these sites, there is (overall) about a 2/3 chance that what you're seeing is true.  So if you frequent those pages -- or, more importantly, if you're in the habit of clicking "share" on every story that you find mildly appealing -- you damn well better be able to figure out which third is wrong.

The upshot of it is, we all need better bullshit filters.  Given that we are bombarded daily by hundreds of claims from the well-substantiated to the outrageous, it behooves us to find a way to determine which is which.

And, if you're curious, a 275-word passage from this Skeptophilia post was rated by BlaBlaMeter as having a bullshit rating of 0.13, which I find reassuring.  Not bad, considering the topic I was discussing.

Monday, May 11, 2015

Neurobabble

Confirming something that people like Deepak Chopra and Dr. Oz figured out years ago, researchers at Villanova University and the University of Oregon have shown that all you have to do to convince people is throw some fancy-sounding pseudoscientific jargon into your argument.

The specific area that Diego Fernandez-Duque, Jessica Evans, Colton Christian, and Sara D. Hodges researched was neurobabble, in particular the likelihood of increasing people's confidence in the correctness of an argument if some bogus brain-based explanation was included.  Fernandez-Duque et al. write:
Does the presence of irrelevant neuroscience information make explanations of psychological phenomena more appealing?  Do fMRI pictures further increase that allure?  To help answer these questions, 385 college students in four experiments read brief descriptions of psychological phenomena, each one accompanied by an explanation of varying quality (good vs. circular) and followed by superfluous information of various types.  Ancillary measures assessed participants' analytical thinking, beliefs on dualism and free will, and admiration for different sciences.  In Experiment 1, superfluous neuroscience information increased the judged quality of the argument for both good and bad explanations, whereas accompanying fMRI pictures had no impact above and beyond the neuroscience text, suggesting a bias that is conceptual rather than pictorial.  Superfluous neuroscience information was more alluring than social science information (Experiment 2) and more alluring than information from prestigious “hard sciences” (Experiments 3 and 4).  Analytical thinking did not protect against the neuroscience bias, nor did a belief in dualism or free will.  We conclude that the “allure of neuroscience” bias is conceptual, specific to neuroscience, and not easily accounted for by the prestige of the discipline.  It may stem from the lay belief that the brain is the best explanans for mental phenomena.
So this may explain why people so consistently fall for pseudoscience as long as it's couched in technical terminology.  For example, look at the following, an excerpt from an article in which Deepak Chopra is hawking his latest creation, a meditation-inducing device called "DreamWeaver":
About two years ago I got interested in the idea that you could feed light pulses through the brain with your eyes closed and sound and music at a certain frequency.  Your brain waves would dial into it and then you could dial the instrument down so that you would decrease the brain wave frequency from what it is normally in the waking state.  And then you could slowly dial down the brainwave frequency to what it would be in the dream state, which is called theta, and then you even dial further down into delta.
What the hell does "your brain waves would dial into it" mean?  And I would like to suggest to Fernandez-Duque et al. that their next experiment should have to do with people immediately believing claims if they involve the word "frequency."

[image courtesy of the Wikimedia Commons]

Then we have the following twofer -- an excerpt of an article by Deepak Chopra that appeared on Dr. Oz's website:
Try to eat one of these three foods once a day to protect against Alzheimer’s and memory issues. 
Wheat Germ - The embryo of a wheat plant, wheat germ is loaded with B-complex vitamins that can reduce levels of homocysteine, an amino acid linked to stroke, Alzheimer’s disease and dementia. Sprinkle wheat germ on cereal and yogurt in the morning, or enjoy it on salads or popcorn with a little butter. 
Black Currents [sic] - These dark berries are jam-packed with antioxidants that help nourish the brain cells surrounding the hippocampus. The darker in color, the more antioxidants black currents [sic] contain. These fruits are available fresh when in season, or can be purchased dried or frozen year-round. 
Acorn Squash - This beautiful gold-colored veggie contains high amounts of folic acid, a B-vitamin that improves memory as well as the speed at which the brain processes information.
Whenever I read this sort of thing, I'm not inclined to believe it; I'm more inclined to shout, "Source?"  For example, I looked up the whole black currant claim, and the first few sources waxed rhapsodic about black currants' ability to enhance our brain function.  But then I noticed that said sources were all from the Black Currant Foundation (I didn't even know that existed, did you?) and the website blackcurrant.co.nz.  Scrolling down a bit, I found a post on WebMD that was considerably less enthusiastic, saying that it "may be useful in Alzheimer's" (with no mention of exactly how, nor any citations to support the claim) but that it also can lower blood pressure and slow down blood clotting.

So I suppose that the only way to protect yourself against this kind of nonsense is to learn some actual science, and be willing to do some research -- which includes training yourself to recognize what a credible source looks like.

But doing all this research myself leaves me feeling like I need some breakfast.  Maybe a wheat germ, black currant, and acorn squash stir-fry.  Can't have too many antioxidants, you know, when your hippocampus is having some frequency problems.

Wednesday, February 26, 2014

Academic gibberish

About three years ago, I wrote a post on the problem with scientific jargon.  The gist of my argument was that while specialist vocabulary is critical in the sciences, its purpose should be to enhance clarity of speech and writing, and if it does not accomplish that, it is pointless.  Much of woo-wooism, in fact, comes about because of mushy definitions of words like "energy" and "field" and "frequency"; the best scientific communication uses language precisely, leaving little room for ambiguity and misunderstanding.

That doesn't mean that learning scientific language isn't difficult, of course.  I've made the point more than once that the woo-woo misuse of terminology springs from basic intellectual laziness.  The problem is, though, that because the language itself requires hard work to learn, the use of scientific vocabulary and academic syntax can cross the line from being precise and clear into deliberate obscurantism, a Freemason-like Guarding of the Secret Rituals.  There is a significant incentive, it seems, to use scientific jargon as obfuscation, to prevent the uninitiated from understanding what is going on.

[image courtesy of the Wikimedia Commons]

The scientific world just got a demonstration of that unfortunate tendency with the announcement yesterday that 120 academic papers have been withdrawn by publishers, after computer scientist Cyril Labbé of Joseph Fourier University (Grenoble, France) demonstrated that they hadn't, in fact, been written by the people listed on the author line...

... they were, in fact, computer-generated gibberish.

Labbé developed software that was specifically written to detect papers produced by SciGen, a random academic paper generator produced by some waggish types at MIT.  The creators of SciGen set out to prove that meaningless jargon strings would still make it into publication -- and succeeded beyond their wildest dreams.  “I wasn’t aware of the scale of the problem, but I knew it definitely happens.  We do get occasional emails from good citizens letting us know where SciGen papers show up,” says Jeremy Stribling, who co-wrote SciGen when he was at MIT.
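If you're wondering how a program can churn out text that sounds plausible while meaning nothing, the trick at the heart of generators like SciGen is a context-free grammar: sentence templates full of slots, each slot filled in (recursively) with randomly chosen jargon.  Here's a toy sketch in Python; the grammar rules below are my own invented examples, nothing like SciGen's actual, much larger grammar:

import random

# A toy context-free grammar in the spirit of SciGen.  Each key expands to
# one of its templates, and <angle-bracketed> slots expand recursively.
GRAMMAR = {
    "<sentence>": [
        "We argue that <thing> and <thing> are <relation>.",
        "Though <thing> might seem <adjective>, <thing> follows a <adjective> distribution.",
        "To address this obstacle, we confirm that <thing> can be made <adjective>.",
    ],
    "<thing>": ["the location-identity split", "consistent hashing",
                "the Turing machine", "symmetric encryption", "active networks"],
    "<relation>": ["largely incompatible", "entirely compatible",
                   "mostly surmounted by <thing>"],
    "<adjective>": ["interposable", "stable", "autonomous", "Zipf-like"],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into jargon-laden text."""
    if symbol not in GRAMMAR:
        return symbol
    template = random.choice(GRAMMAR[symbol])
    # Fill each slot in the chosen template with its own expansion.
    for slot in GRAMMAR:
        while slot in template:
            template = template.replace(slot, expand(slot), 1)
    return template

for _ in range(3):
    print(expand("<sentence>"))

Run it a few times and you get sentence after sentence with impeccable syntax and zero content -- which is exactly why this stuff is so easy to skim past without noticing that it means nothing.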

The result has left a lot of folks in the academic world red-faced.  Monika Stickel, director of corporate communications at IEEE, a major publisher of academic papers, said that the publisher "took immediate action to remove the papers" and has "refined our processes to prevent papers not meeting our standards from being published in the future."

More troubling, of course, is how they got past the publishers in the first place, because I think this goes deeper than substandard (worthless, actually) papers slipping by careless readers.  Myself, I have to wonder if anyone can actually read some of the technical papers that are currently out there, and understand them well enough to determine if they make sense or not.  Now, up front I have to say that despite my scientific background, I am a generalist through and through (some would say "dilettante," to which I say: guilty as charged, your honor).  I can usually read papers on population genetics and cladistics with a decent level of understanding; but even papers in the seemingly-related field of molecular genetics zoom past me so fast they barely ruffle my hair.

Are we approaching an era when scientists are becoming so specialized, and so sunk in jargon, that their likelihood of reaching anyone who is not a specialist in exactly the same field is nearly zero?

It would be sad if this were so, but I fear that it is.  Take a look, for example, at the following little quiz I've put together for your enjoyment.  Below are eight quotes, some from legitimate academic journals and some generated using SciGen.  See if you can determine which are which.
  1. On the other hand, DNS might not be the panacea that cyberinformaticians expected. Though conventional wisdom states that this quandary is mostly surmounted by the construction of the Turing machine that would allow for further study into the location-identity split, we believe that a different solution is necessary.
  2. Based on ISD empirical literature, is suggested that structures like ISDM might be invoked in the ISD context by stakeholders in learning or knowledge acquisition, conflict, negotiation, communication, influence, control, coordination, and persuasion. Although the structuration perspective does not insist on the content or properties of ISDM like the previous strand of research, it provides the view of ISDM as a means of change.
  3. McKeown uses intersecting multiple hierarchies in the domain knowledge base to represent the different perspectives a user might have. This partitioning of the knowledge base allows the system to distinguish between different types of information that support a particular fact. When selecting what to say the system can choose information that supports the point the system is trying to make, and that agrees with the perspective of the user.
  4. For starters, we use pervasive epistemologies to verify that consistent hashing and RAID can interfere to realize this objective. On a similar note, we argue that though linked lists and XML are often incompatible, the acclaimed relational algorithm for the visualization of the Internet by Kristen Nygaard et al. follows a Zipf-like distribution.
  5. Interaction machines are models of computation that extend TMs with interaction to capture the behavior of concurrent systems, promising to bridge the fields of computation theory and concurrency theory.
  6. Unlike previous published work that covered each area individually (antenna-array design, signal processing, and communications algorithms and network throughput) for smart antennas, this paper presents a comprehensive effort on smart antennas that examines and integrates antenna-array design, the development of signal processing algorithms (for angle of arrival estimation and adaptive beamforming), strategies for combating fading, and the impact on the network throughput.
  7. The roadmap of the paper is as follows. We motivate the need for the location-identity split. Continuing with this rationale, we place our work in context with the existing work in this area. Third, to address this obstacle, we confirm that despite the fact that architecture can be made interposable, stable, and autonomous, symmetric encryption and access points are continuously incompatible.
  8. Lastly, we discuss experiments (1) and (4) enumerated above. Error bars have been elided, since most of our data points fell outside of 36 standard deviations from observed means. On a similar note, note that active networks have more jagged seek time curves than do autogenerated neural networks.
Ready for the answers?

#1:  SciGen.
#2:  Daniela Mihailescu and Marius Mihailescu, "Exploring the Nature of Information Systems Development Methodology: A Synthesized View Based on a Literature Review," Journal of Service Science and Management, June 2010.
#3:  Robert Kass and Tom Finin, "Modeling the User in Natural Language Systems," Computational Linguistics, September 1988.
#4:  SciGen.
#5:  Dina Goldin and Peter Wegner, "The Interactive Nature of Computing: Refuting the Strong Church-Turing Thesis," Kluwer Academic Publishers, May 2007.
#6:  Salvatore Bellofiore et al., "Smart Antenna System Analysis, Integration, and Performance on Mobile Ad-Hoc Networks (MANETs)," IEEE Transactions on Antennas and Propagation, May 2002.
#7:  SciGen.
#8:  SciGen.

How'd you do?  If you're like most of us, I suspect that telling them apart was guesswork at best.

Now, to reiterate: it's not that I'm saying that scientific terminology per se is detrimental to understanding.  As I say to my students, having a uniform, standard, and precise vocabulary is critical.  Put a different way, we all have to speak the same language.  But this doesn't excuse murky writing and convoluted syntax, which often seem to me to be there as much to keep non-scientists from figuring out what the hell the author is trying to say as to provide rigor.

And the Labbé study illustrates pretty clearly that impenetrable academic prose is not just a stumbling block for relative laypeople like myself.  That 120 computer-generated SciGen papers slipped past the eyes of the scientists themselves points to a more pervasive, and troubling, problem.

Maybe it's time to revisit the topic of academic writing, from the standpoint of seeing that it accomplishes what it was originally intended to accomplish: informing, teaching, and enhancing knowledge and understanding -- not, as it seems to have become these days, simply serving as a coded message so well encrypted that sometimes not even the members of the Inner Circle can elucidate its meaning.