Recognize “gotcha” peer reviews? This editor can

Neil Herndon

Ever read a review where the editor or reviewer seems to be specifically looking for reasons to reject a paper? Neil Herndon, the editor-in-chief of the Journal of Marketing Channels, based at the South China University of Technology in Guangzhou, has. In a recent editorial, Herndon calls this type of review “gotcha” peer reviewing and presents an alternative.

Retraction Watch: What is “gotcha” reviewing?  What is its purpose and who is practicing it for the most part?

Neil Herndon: Gotcha Reviewing occurs when the editors and reviewers at a journal emphasize finding what is wrong with a paper in order to provide a basis for rejection. They think first of rejection rather than of how to improve the paper, provided there are no “fatal flaws” in the research, that is, flaws that cannot be fixed. I associate this practice with top-tier journals, which use it as a screening method to reduce editor and reviewer workload, due in part to their tremendously large number of submissions. Unfortunately, a lot of really fine research with interesting ideas gets lost in the process.

Gotcha reviewing also allows a journal to quickly, and inappropriately, screen out more junior researchers (among other groups), along with interesting and valuable research findings that might expand our understanding of marketing. I believe, as do many others, but cannot prove, that paper acceptance in top-tier journals may revolve around more than the quality of the research and its presentation: gender bias, regional bias, seniority bias, and affiliation bias are all suspected by authors, reviewers, and editors, according to a global study published in 2015 by Journal of Marketing Channels publisher Taylor & Francis.

RW: What is “developmental” reviewing?  What are its key distinctions from the “gotcha” practice?

NH: Developmental Reviewing, which I believe is the gold standard for journals truly interested in helping guide quality research, occurs when the emphasis is on providing authors with additional insights and advice to improve the quality of their work in a double-blind, helpful way, rather than on thinking first of rejection (as is done in gotcha reviewing). This provides what we hope is welcome support, especially for new researchers in a publish-or-perish world.

RW: I would imagine most journals say that they practice, or would like to practice, “developmental” reviewing. How many are actually doing so, in your opinion?

NH: If a journal truly wants reviews to be helpful and collegial, then an editorial attitude and editorial review board culture of developmental reviewing will go a long way towards making that possible. If a journal truly wants to publish papers from all segments of the marketing community studying areas within that journal’s aims and scope, developmental reviewing will help open its pages to quality work in an unbiased way.  

I can’t guess how many journals take a developmental approach, but I can say that my own experiences and those of my colleagues and research partners suggest that not many do. I also find that some editors and reviewers are unkind, unhelpful, and certainly not collegial in their approach to writing reviews, a situation that can discourage junior faculty new to the publish-or-perish world of academia.

RW: Do you think all journals should practice “developmental” reviewing all the time or do you think there’s also room for “gotcha” in some instances?

NH: I believe that everyone who does the work of conducting research and submitting a paper to a journal deserves a fair and unbiased evaluation of their work: this is the core of developmental reviewing. The core of gotcha reviewing is to reject as many papers as possible without regard for their possibilities: this is an unfortunate and unfair approach.

That said, all too often there are papers that I desk reject because they are outside the aims and scope of the Journal of Marketing Channels, are of such low quality that developing them into an article is not possible, have fatal flaws in their methodology (especially in data collection and research design) that cannot be repaired with a different approach or statistical analysis, or, unfortunately, have been previously published or otherwise involve unethical research practices such as plagiarism.

RW: What tips would you give journals and reviewers to make their reviews more “developmental” in nature?

NH: First of all, a reviewer has to invest the time to really carefully read a paper, deeply understand its theoretical base and methodology, meticulously examine its hypotheses, statistics, and results, and see if the discussion insights, theoretical implications, managerial implications, public policy implications (if any), and future research suggestions are all supported and fit together as a unit. Only then should the reviewer begin to consider any criticism of the work and make suggestions for its improvement.  

All too often editors and reviewers decide to reject a paper based on the first few pages of text, sometimes just from reading the abstract, and then look for gotcha rejection opportunities. Developmental suggestions must come from a deep understanding of the researcher’s intentions and approach, with the reviewer making comments as though he or she were a coauthor sitting at the desk with the author(s). Only then can the reviews be truly developmental.


7 thoughts on “Recognize “gotcha” peer reviews? This editor can”

  1. I once had an essay rejected by a ‘top tier’ journal with this comment from a reviewer: ‘I don’t know what is wrong with this paper, but something must be. Reject it.’ The editor then refused to elaborate on this, though they accepted the advice. (I believe the reason had to be that my argument wasn’t sufficiently in line with the ideological orientation of the journal. To test this out, I later sent an essay with a pro-line argument, and in it went!)

  2. I would be surprised if many of us who act as reviewers for journals recognise this as truly reflecting scientific publishing. I certainly do not. My experience is that requests for reviewing are much more frequent now than previously, and that many of the papers I am asked to review are deeply flawed. Actually, change that “many” to “most”: I astonish myself with how many recommendations to ‘reject’ I now give compared with even five years ago.

    Developmental reviewing does take time and effort, but it can be rewarding if it improves a paper or helps it make an important point that it had overlooked. It is also rewarding for the reviewer.

    The change, to my mind, is in the number of papers whose authors simply don’t know the literature and have followed cook-book methods without understanding the topic. This may be because the field that interests me, systematic reviews and meta-analysis, is seen as a quick and relatively painless way of getting a paper published.

    Anyway, my point is that authors are responsible for doing research to a standard, and if they don’t like that standard, they should tell us why. It’s not just down to journals, editors, or reviewers.

  3. I fundamentally disagree with many of the opinions expressed by Herndon. First, he rails against “gotcha” reviews as being due to editors thinking about rejection first. Beyond solutions such as massive increases in the volume published at top journals, or decreases in submissions (neither of which is likely any time soon), there’s really not much to be done about low acceptance rates. Even at the mid-rank journals I review for (impact factors 4-10), acceptance rates are often in the single-digit percent range. It’s rather like the problem of an HR assistant with a pile of 100 CVs to scan: small nuances such as spelling mistakes make a difference. Reading everything in full (to develop “a deep understanding of the researcher’s intentions and approach”) takes time and money, both of which are in short supply in academia. The solution is not simply to tell reviewers to read more thoroughly!

    On the topic of “developmental reviewing”, this is problematic at several levels. He claims journals should have “an interest in helping guide quality research”. Wrong. It is not any editor’s role (especially that of professional editors who are not scientists, or who spent only minimal time at the bench several years ago) to guide science. How presumptuous. The NIH specifically axed the A2 grant application cycle several years ago, citing the iterative review process as a problem: they didn’t want the study section reviewers to be writing the grant FOR the applicants.

    There are several pitfalls in developmental reviewing: It assumes the authors are actually capable of being mentored (many are not, or refuse to listen). It assumes the additional work proposed by the reviewer is offered in good faith (often it is not). It assumes that the journal has any business whatsoever in the process of science. It assumes scientists have unlimited time and resources to simply do the bidding of reviewers. Finally, it sows discord when a reviewer suggests something, an author does it, and then the reviewer suggests something else (moving the goal-posts). For these reasons, many journals have specifically moved to stop reviewers from suggesting more experiments, and others are limiting reviews to a single cycle. “Developmental reviewing” sounds to me like a pathway to prolonged agony, taking multiple cycles across months or years to get a paper published.

    1. 1) For better or for worse (mostly worse, if you ask me), journals are stakeholders in what scientists do. Thus, they do have an interest in helping guide quality research simply because it would make them more money in the long run.

      2) The description of “developmental reviewing” in this post makes it seem as if some of the responsibility for mentoring early-career scientists should be transferred to journal editors and reviewers rather than to PIs, teachers, and peers. A good PI, or graduate mentor, or whatever you want to call them, should be doing the bulk of what Herndon says developmental reviewers should do.

      3) “Developmental reviewing,” as I interpret it, is simply a euphemism for “constructive criticism,” with the emphasis on “constructive.”

  4. I am in complete agreement that developmental peer reviewing should be upheld by all top-tier journals. Submissions should be evaluated for their research quality rather than on the basis of the reputation of the authors. However, from the journals’ perspective, is it viable for journals flooded with submissions to play the role of mentors? While it is unfortunate, the fact is that reviewers who are hard-pressed for time and editors who are under pressure to maintain the JIF might not be inclined to act as guides to authors. The lack of incentives for peer reviewing, the extreme competition in academia, the pressure of maintaining a journal’s reputation, and other such factors combine to produce this “gotcha” behavior. Thus, resolving it would require changes on several fronts, and reviewers and editors by themselves may not be able to fix it.

  5. Thank you Neil, I agree wholeheartedly that developmental reviewing is the “gold standard”, or perhaps “the ideal we should all try to attain”. But I do not see a clear dividing line between “gotcha” and “developmental”. It is fuzzy, and perhaps in best practice the two are not readily separable. After many years of editorial and review work, firstly with ‘Australasian Journal of Educational Technology’ and more recently with ‘Issues in Educational Research’, I’m very aware of a routine in my writing of reviews, which at times has been as intense as 4-5 per week. I look for the “gotcha”, then I look for the “developmental” opportunities revealed by the “gotcha”. After that, I compose the advice to the authors, usually in a well-practised routine: first acknowledge the strengths, then outline the weaknesses, and end with the “developmental”, the ‘how to progress the research’ encouragement. Of course, that is time-consuming (we need to grow the number of retired academics, who have the expertise and the time to give), so I want to mention a “driver” that I hope will become very important. Academic publishing has to become more inclusive towards non-Western contexts, ESL authors, and developing-country topics. We have to perceive the “gotcha” targets (mainly non-Western contexts, ESL …) as “developmental” targets, and do our best to grow inclusivity.

  6. Very interesting article! However, I think that a high number of gotcha reviews cannot be avoided without first asking why they exist at all. Therefore, I would like to try to collect some reasons for gotcha reviews:

    The time that scientists have is decreasing, the number of journal submissions is increasing, and the number of reviewers is more or less constant. This is just a bare claim, but I think it is true; let us assume it. Then scientists have to do a lot of reviewing, which they would like to get done quickly because of the little time they have, which they also need for their own research. So, what can I do to get a review done quickly? Well, I could accept the paper right away. But this might be bad for my reputation if I did it too often, so better not. Clearly, I am not really passionate about this thing that I want to get off my desk, right? I know: to accept the paper, I would have to go through it completely, because otherwise I could overlook crucial mistakes. And since this takes the most time, my aim is to find flaws right at the beginning of the process.

    It also depends on whose work I have to review. Is it a known colleague? In that case, I trust their work more than others’ and accept it after a not very thorough reading. This is the nicest case, because I feel good afterwards. Maybe the paper is even interesting to me and influences my own research. But in most cases the authors are people I don’t know. Well, people are curious (and scientists especially should be), so I would like to find out where they come from. OK, I will google them. Often, they come from countries where scientists are required to produce a certain number of accepted papers each year, and I have an idea of which countries these are. If the authors happen to come from there, they get a minus point and I already lean towards a gotcha review. Bad English in the paper also produces minus points. So, I start reading the first few pages with the intention of finding mistakes or flaws. Usually I find them and reject the paper immediately. Rarely do I have to read more than four pages. But it happens. In those cases, I get nervous, because I really have to engage with and work on the paper (which I didn’t want from the start). Then I start to think about rejecting it because the results aren’t interesting enough…

    I have to add that I am a mathematician, and I believe it is harder to reject a mathematics paper than a paper in, say, geology. However, it would be a good start if journals began to keep the authors’ identities hidden from reviewers. But I am afraid that the time factor would remain.
