Author note: I admit that in a few previous posts I may have put a bit more emphasis on having fun than on being strictly informative. However, I swear to you on my autographed copy of Tactics of Scientific Research that everything in the present post is 100% factual.
Of all of the specialized skills that a novice researcher must master, perhaps none is as frustrating and ill-defined as the literature search. You can’t do science without knowing what other scientists have accomplished, and to achieve this you need to find their publications. In my youth, this required combing through the tables of contents and reference lists of dusty paper journals for relevant sources. When you found something useful, you often had to write actual paper letters to authors requesting paper reprints. The process could take months.
Today we live in more enlightened times: Our scientific literature has been digitized! This ought to make things easier, and to some extent it does. I doubt that people without pre-internet memories can appreciate the supreme, nearly erotic, joy that this survivor of the paper-journal past derives from finding and downloading an article in seconds.
And yet most people who do frequent literature searches know well this painful scenario: You are certain that the articles you need exist, but you can’t come up with the search terms that lead you to them. Others have written on the skill set needed to conduct a proper electronic search, and those skills most definitely get refined with experience.
In my teaching of research-related classes, I often see the consequences of a poorly formed search repertoire: undergraduates who are new to literature searches, driven to distraction by search-term woes. One student came to believe that the search engine was controlled by some demon or gremlin who hated him and maliciously withheld useful results. As improbable as this hypothesis may sound, read on before drawing any conclusions.
Some observers have suggested that literature searches are hard partly because our electronic systems make searching hard. This proposition is difficult to evaluate, however, because, let’s face it, does anybody really understand how search algorithms work? Google once poked fun at the opacity of search engine technology with a blog post claiming that its searches employ the “PigeonRank” system, which “build[s] on the breakthrough work of B.F. Skinner” and employs hard-working pigeons whose break rooms are “stocked with an assortment of delectable seeds and grains and feature the finest in European statuary for roosting.”
“PigeonRank’s success relies primarily on the superior trainability of the domestic pigeon (Columba livia) and its unique capacity to recognize objects regardless of spatial orientation. The common gray pigeon can easily distinguish among items displaying only the minutest differences, an ability that enables it to select relevant web sites from among thousands of similar pages…. When a search query is submitted to Google, it is routed to a data coop where monitors flash result pages at blazing speeds. When a relevant result is observed by one of the pigeons in the cluster, it strikes a rubber-coated steel bar with its beak, which assigns the page a PigeonRank value of one. For each peck, the PigeonRank increases. Those pages receiving the most pecks are returned at the top of the user’s results page with the other results displayed in pecking order.”
PigeonRank is described as evidence-based (Google provided data on the efficiency of its operator-avians, like a graph of the relationship between wingspan and beak speed) and effective (“While some unscrupulous websites have tried to boost their ranking by including images on their pages of bread crumbs, bird seed and parrots posing seductively in resplendent plumage, Google’s PigeonRank technology cannot be deceived by these techniques.”).
This is genius stuff, and yet also disturbing in a way because, if real, PigeonRank wouldn’t be all that far from my student’s assumption that search technology has agency (see this recent related speculation!). Fortunately, though I can’t tell computer code from a codfish, given the publication date of April 1, 2002, I’m fairly sure this post is a lark. But that doesn’t mean something troubling isn’t going on under the hood of search engines, as a recent experience illustrates.
One day I stumbled across an article reporting humor norms for several thousand English words, based on people rating them on a scale ranging from very funny (booby, toot, rump) to very unfunny (blood, sorrow, arsenic). And I can’t remember why, but I wondered what would happen if I entered some of the funniest and unfunniest words as search terms in publisher web sites for behavior analysis journals. I’m not sure what I expected to come from this, but it sure wasn’t the hundreds of hits I got. For some reason, most of the hits were returned for those cornerstones of our literature, the staid and serious Journal of Applied Behavior Analysis and Journal of the Experimental Analysis of Behavior (and especially for JABA, which inspired me to investigate whether JABA may in fact be the funniest journal in behavior analysis; but I digress).
Below are a few of the empirically verified funniest English words and some JEAB/JABA articles that Wiley Publishers’ search engine insisted represent them. I swear I am not making this up.
| Funny word | Example article |
| --- | --- |
| booty | Hill et al. (2019, JABA). Using equivalence-based instruction to teach piano skills to children. |
| tit | Yi & Rachlin (2013, JEAB). Contingencies of reinforcement in a five-person Prisoner’s Dilemma. |
| booby | Lubow (1974, JEAB). Higher-order concept formation in the pigeon. |
| pecker | Lattal & Doepke (2013, JABA). Correspondence as conditional stimulus control: Insights from experiments with pigeons. |
| bebop | Sundberg et al. (2018, JEAB). Covert mediation in arbitrary matching to sample. |
| ass | Malott (2018, BAP). A model for training science-based practitioners in applied behavior analysis. |
| oink | Ingvarsson et al. (2013, JABA). An evaluation of intraverbal training to generate socially appropriate responses to novel questions. |
| crotch | Hineline (1968, JEAB). A rapid retractable response lever. |
| whimsy | Miller (1991, JABA). Avoiding the countercontrol of Applied Behavior Analysis. |
| toot | Macenski & Meisch (2013, JEAB). Ratio size and cocaine concentration effects on oral cocaine-reinforced behavior. |
| snort | Saylor et al. (2013, JABA). Effects of three types of noncontingent auditory stimulation on vocal stereotypy in children with autism. |
| rump | Ferguson & Rosales-Ruiz (2013, JABA). Loading the problem loader: The effects of target training and shaping on trailer-loading behavior of horses. |
And here are some of the empirically verified unfunniest English words and some articles supposedly representing them. For some reason, almost all of the “unfunny” hits came from JABA (raising the possibility that JABA is both the funniest and unfunniest journal in behavior analysis).
| Unfunny word | Example article |
| --- | --- |
| demon | Loos et al. (1977, JABA). A multi-element analysis of the effect of teacher aides in an “open” style classroom. |
| disgust | Dougher (2013, JABA). This is not B.F. Skinner’s behavior analysis: A review of Hayes, Strosahl, and Wilson’s Acceptance and Commitment Therapy. |
| hatred | Thyer (2013, JABA). On the possible influence of Bertrand Russell on B.F. Skinner’s approach to education. |
| vengeance | Ghezzi (2013, JABA). In memoriam: Sidney W. Bijou. |
| bankrupt | Agras (1973, JABA). Toward the certification of behavior therapists? |
| blood | Winett et al. (1991, JABA). Altering shoppers’ supermarket purchases to fit nutritional guidelines: An interactive information system. |
| horror | Sajwaj (1977, JABA). Issues and implications of establishing guidelines for the use of behavioral techniques. |
| stab | Birnie-Selwyn & Guerin (2013, JABA). Teaching children to spell: Decreasing consonant cluster errors by eliminating selective stimulus control. |
| arsenic | Rodriguez (2022, JABA). Facilitating the emergence of intraverbal tacts in children with autism spectrum disorder: A preliminary analysis. |
| sorrow | O’Leary (1993, JABA). An editor’s recollections: 1978-1980. |
| kill | Mayer et al. (1986, JABA). Promoting low-fat entree choices in a public cafeteria. |
| divorce | Novak & Pelaez (2021, JABA). Frances Degen Horowitz’s influence on behavioral systems theory. |
| pain | Jones & McCaughey (1992, JABA). Gentle teaching and applied behavior analysis: A critical review. |
It is impossible to say what is going on here. Some of the article matches feel almost whimsical, as if the search engine is good-naturedly winking at you (e.g., pecker = Correspondence as conditional stimulus control: Insights from experiments with pigeons; kill = Promoting low-fat entree choices in a public cafeteria). Others seem random, as if the search engine’s intent is simply to exasperate (e.g., arsenic = Facilitating the emergence of intraverbal tacts — huh?). In none of these cases, of course, does the search result bear any substantive relationship to the article’s content. A gremlin is as good an explanation as any.
So the next time you’re parked at your computer, fuming because you know useful articles are out there but can’t find them, remember that it might not be your fault. Consider the possibility that your search engine is, in fact, as my student deduced, malicious or capricious or personality disordered … or even demonic. Maybe PigeonRank is the better technology after all.
Postscript
I emphasize again that I am not making these search results up. However, be aware that about a year before penning this post I released a preprint describing my findings, and for purposes of sleuthing out the dark forces operating in search engines, that may have been a tactical error. I recently repeated my searches and the results were nothing like what’s described here. Just accurate and useful and supremely boring. The logical conclusion is that the gremlin, now aware that I was onto him, went deep underground, or perhaps developed even-more-insidious ways of messing with our searches.