Everyone who publishes in and/or helps to run behavior analysis journals should check out a Journal of Applied Behavior Analysis article by Cengher and LeBlanc (2023), which provides a detailed look at the operation of the peer review system. This should be required reading for authors because, to strategically prepare an article for submission to a journal, you need to consider how the review team will approach it. It should be required reading for members of review teams because, if you’re asked to review, then you need to know the “cultural traditions” that apply to your job.
To complement the Cengher and LeBlanc article, I’d like to share some thoughts derived from my own roughly 40 years spent in the trenches of submitting, reviewing, and editing manuscripts. I’ve reviewed for most of the behavior analysis journals and served in editorial roles for a half-dozen of them. During that time I’ve thought a lot about what peer review is for and how best to accomplish its goals. Maybe the most important lesson I’ve derived is that Cengher and LeBlanc’s conventional take on the purpose of peer review (to “facilitate quality control, legitimize scientific research, and self-regulate scientific communities”) is only a partial truth. This gatekeeper function is only part of the bigger agenda of crowdsourcing progress in science. First and foremost, journals exist so readers can be better informed than they would be if scores of scientists worked alone without communicating. Although of course we don’t want to distribute junk science, the crowdsourcing of progress succeeds only when reviewers and editors ensure that useful articles are published in a form that exerts suitable audience control.
With that in mind, below I list some principles that I think apply to reviewer and editor roles. There is a lot of thematic overlap between my comments and those of Cengher and LeBlanc. Theirs, to be sure, are richer with technical and procedural details about how the peer review system works; mine are more focused on how to work within that system.
This first of three installments presents some rules specific to performing the role of reviewer. On to my observations! Take ’em, leave ’em, wrap fish with ’em. It’s totally up to you. If you have feedback — especially if you think I’ve missed (or misconstrued) key aspects of the manuscript-evaluator role — I’d love to hear from you (email@example.com).
Rules for Reviewers
- Your review should not be longer than the manuscript. Your job is to help the action editor make an editorial decision. In most cases, this means identifying those few critical aspects of the manuscript that most impact its “publishability,” and if at all possible suggesting how the author can deal with them constructively. Reviewers get into trouble sometimes because, implicitly, it feels like peer review is a test of them. Nobody wants to be That Reviewer who missed some fatal flaw that others recognized easily. And implicitly you’ve been asked to review because you have special expertise, and you feel the need to live up to that expectation. There’s a tendency, therefore, to scratch these itches by throwing everything including the kitchen sink into a review. But your job is to help the action editor and author with the manuscript, and the longer your review, the more it’s about you (“Look, I caught things other reviewers didn’t!” or “Look how much smarter and more rigorous I am than this author!”) and not about the manuscript per se.
- Your job is not to protect the discipline or the journal from recalcitrant authors. Too often the tone of reviews conveys the sense that the sacred journal is under assault from a horde of idiot authors who must be repelled at all costs. This is the gatekeeper function run amok. The reality is that journals don’t exist without authors — if you doubt me, try working for one to which too few people choose to submit. There’s nothing like inadequate manuscript flow to demonstrate that authors are the prime movers of journals. Within the “crowdsourcing of science” framework, therefore, it could just as easily be said that you work for authors, not for the journal.
- Your job is not to defend a status quo. It’s to move the literature forward and make the journal a vital instrument in its evolution. Therefore the novel, the offbeat, the patently weird study is potentially just as valuable as the one that’s Brick #100 in a very well-established wall. Don’t get me wrong: a lot of science needs to have that another-brick-in-the-wall feel to it, but studies of that sort are also instruments of a collective confirmation bias in which it’s easy to feel as if we already know everything worth knowing. Unsurprising articles do not make people think. Methods and findings that defy traditional expectations have the potential to stir up the intellectual muck in constructive ways. By the way, it doesn’t matter if every thought-provoking article turns out, with the benefit of twenty years of hindsight, to be “correct.” What matters is the contemporary heuristic function of placing theory under the critical microscope and of stimulating new studies that either show us new paths or, in creative ways, eventually verify that the path we were already on is a good one. Therefore, in service of a scientific community that suffers when it stagnates, a key purpose of peer review is to reward authors who take responsible intellectual risks.
- The Critical Question every reviewer must answer about every manuscript is, “Who might do what differently, with respect to what problem, as a function of reading this article?” In other words, how does the article serve the journal’s audience? Does it offer a new conceptual framework through which readers can make sense of their ongoing work in new ways? Does it answer questions people have been wrestling with? Does it suggest new hypotheses and illuminate ways of testing those? In general, will it have a heuristic-catalytic influence on future work? These are the same questions that every submission must answer about itself, and unfortunately some authors think of manuscripts as displaying their thoughts rather than as influencing an audience. If a manuscript doesn’t directly answer my Critical Question, there are two options. Option #1 is for you to make recommendations about how the manuscript could improve re: audience utility. But if you can’t come up with anything, this is a fair sign that Option #2 is on the table: Recommend rejection on the grounds that the manuscript (no matter how erudite it may be) offers no practical value to the audience. In the end, the primary function of a review is to explore and maximize the connections between authors who create manuscripts and readers who consume them. A valuable article cannot exist without a potential audience effect.
- Not every valid thought you have about the article goes into the review. The insights you offer should be those that most directly serve to link authors and readers. Insights that disrupt the flow of the manuscript are best left out. Every experienced reader of journal articles knows what results if you do not. There is, for instance, an unmistakable choppy or disjointed feel to journal-article Discussion sections that have been cobbled together from unrelated “must-include” points that reviewers simply couldn’t restrain themselves from raising. Because the parts of such a section do not add up to a coherent whole, they are often confusing and uninformative, and thereby interfere with the goal of creating good audience control.
- Your job is not to get the author to create the manuscript YOU would have written on this topic. This is really a corollary of Rule #5. One honest reason why reviewers often “say too much” is that they get excited about a manuscript in which they’ve seen value. An overwhelming flurry of comments and suggestions is, in effect, a demonstration of the manuscript’s promising audience control (it succeeded in evoking lots of reader verbal behavior!). The trick is to distinguish between which reviewer reactions are exciting to the reviewer and which are critical for the author to incorporate in order to communicate effectively with readers generally.
- Try to find diamonds in the rough. There is a class of authors who are more brilliant than the rest of us when it comes to scientific methods and insights, but who, for various reasons, are not good at communicating about the importance of what they do. When I was an associate editor for a particular journal, I handled maybe a dozen manuscripts from one author that all shared one property: They were awesome in ways that the author basically never mentioned! The author wrote model Methods and Results sections and sandwiched them between the most uninteresting and uninformative Introduction and Discussion sections imaginable. Some of these were ground-breaking papers that spoke to an intense theoretical debate that raged at the time, but you’d never have known that from reading the manuscripts. Consequently, reviewers often had a “So what?” sort of response to these submissions. Had I simply followed reviewer advice, a lot of those papers wouldn’t have been published. My point is that, as a reviewer, sometimes you can find cause for excitement in a manuscript that the author did not address. This is the one instance when it’s worth, in effect, asking an author to write the manuscript YOU would have written on the topic: When really good work likely would not see the light of day otherwise. The needed changes could be in the realm of communication, as per my example, but they could also take the form of new analyses or, in extreme cases, additional studies that are required to complete an empirical story line with the potential to move the discipline forward.
- Actively fight your biases. The madder and more frustrated that a manuscript makes you feel, the harder you should work to find a way to accept it. Our instincts as reviewers run in the opposite direction. In my younger days I wrote a lot of reviews that followed this developmental trajectory: Early in my read something in the manuscript would strike me as foolish or misguided or sloppy or ill-considered, and then I’d spend the rest of my reviewing time compiling “evidence” for why the manuscript was awful. This is confirmation bias again: I decided first that the manuscript wasn’t worthy, and then I set about to prove that. I must tell you that, as an action editor, I’ve seen this scenario play out far more times than I wish were true, and it’s one way we end up with reviews that rival the manuscript in length. But here’s my point: What angers or frustrates you about a manuscript could be exactly the thing that makes it thought-provoking (after all, the manuscript clearly is controlling a lot of YOUR behavior!). Sure, some manuscripts are just bad, but some get under our skin because they challenge accepted wisdom, or break rules we’re accustomed to following. Those are potential diamonds in the rough, and your job is to help the author find a way to present the work in ways that can earn it a fair hearing in the court of audience evaluation.