Is it time for a journal Review Quality Index?

It’s time to review the reviews.

That’s the central message of a new paper in Trends in Ecology & Evolution, “Errors in science: The role of reviewers,” by Tamás Székely, Oliver Krüger, and E. Tobias Krause. The authors propose that the process of manuscript reviewing needs to be evaluated and improved by the scientific publishing community.

We certainly agree in principle, and have suggested a Transparency Index that has some of the same goals. We asked Adam Etkin, who founded Peer Review Evaluation (PRE) “to assist members of the scholarly publishing community who are committed to preserving an ethical, rigorous peer review process,” what he thought. PRE has created PRE-val and PRE-score to “validate what level of peer review was conducted prior to the publication of scholarly works.” Etkin told us:

I’m in agreement that there is a rapidly growing need for some sort of “quality control scheme” that the authors propose and that this is something that would benefit not only the scientific community, but those outside of that community as well.

The authors of the new paper propose four scenarios for a Review Quality Index. Here they are, along with comments from Etkin; a toy sketch of how such an index might be computed follows the list:

  • “[J]ournals might select a random set of submitted manuscripts, and upload these manuscripts along with the reviewers’ statements and the editorial decision (reject or accept) to an open repository.” Peers would then rate the reviews, resulting in a Review Quality Index. Etkin: “This approach seems inherently prone to potential bias and I’m doubtful the uptake by volunteers would be large enough as to be effective.”
  • “[A]ccompanying their decision letters, the journal editor can also send a link to an online query where authors can rate the reviewers’ statements with respect to correctness, objectiveness, and fairness.” Those ratings would be used to calculate the Review Quality Index. Etkin: “Again, this seems very likely to result in author bias. Authors of rejected papers are certainly more likely to respond negatively, regardless of whether or not the reviewer comments were valid or not. [The authors] state as much.”
  • Each journal’s editorial board “might select an external panel of senior experts who evaluate a random subset of review statements of both published and rejected manuscripts.” That process, too, would result in a Review Quality Index. Etkin: “This idea suffers from the same criticisms of the current review system. If the assumption is that the current system does not work, why would the members of this external panel be any more effective than the existing editors/reviewers?” Plus, as the authors note, “it might be difficult to find enough senior experts as their crucial participation would mean an additional workload that would further strain the already time-consuming peer review system.”
  • “Finally, journals might provide the details of articles submitted, but rejected, to an external web-based repository. This external repository could then track the fate of rejected manuscripts, and monitor whether they are subsequently published by another journal. Using the citation of these initially rejected versus accepted manuscripts, it would be possible to compare the reliability of decision-making process. Based on these data, a Review Quality Index might be calculated.” Etkin: “Of all the ideas proposed I find this one the most interesting.  However I’m again uncertain how the research community, especially authors, would respond to this.”
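To make the arithmetic concrete, here is a minimal sketch assuming the second scenario’s rating scheme: each review receives 1–5 author ratings for correctness, objectiveness, and fairness, and a journal’s index is the rescaled mean. The equal weighting and 0–100 scale are our own assumptions; the paper prescribes no formula.

```python
# Toy sketch of a Review Quality Index (RQI). The equal weighting and
# 0-100 rescaling are invented for illustration; the paper specifies
# no formula.
from statistics import mean

def review_score(rating: dict) -> float:
    """Average the three 1-5 author ratings for one review."""
    return mean([rating["correctness"], rating["objectiveness"], rating["fairness"]])

def review_quality_index(ratings: list) -> float:
    """Journal-level RQI: mean review score rescaled to 0-100."""
    raw = mean(review_score(r) for r in ratings)  # 1..5
    return round((raw - 1) / 4 * 100, 1)          # 0..100

ratings = [
    {"correctness": 4, "objectiveness": 5, "fairness": 4},
    {"correctness": 2, "objectiveness": 3, "fairness": 3},
]
print(review_quality_index(ratings))  # 62.5
```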

Etkin agrees with the authors’ conclusions:

It is now time to improve the system for the benefit of all involved.

Hat tip: Rolf Degen

22 thoughts on “Is it time for a journal Review Quality Index?”

  1. Proposals like these show that many people do not understand the purpose and utility of peer review.

    Peer review is not a certification process. All it is (top to bottom) is advice given by an individual to a journal editor as to whether a paper is worth publishing.

    Peer review is not really as important as many people take it to be. Every working scientist needs to evaluate a paper themselves, if they intend to use the reported information. I would like to think that no one assumes that because something is published in a journal it is necessarily true.

    Peer review is also not a formal part of the scientific method. It is further work that determines the validity of a theory or experiment. Validity is not determined by the opinion of an expert.

    1. I agree with Dan. I think peer review is “necessary but not sufficient.” When perusing the literature or studying a paper, scientists (should) do exactly what we are trained to do – think.

      I don’t think post-pub-only review is really a solution, as some have suggested. While I’m all in favor of allowing post-pub review in the form of comments on papers on the web (and letters/editorials in print), I’m very much against post-pub-only. If we switch to open, post-pub reviews only, then anyone will be able to publish any garbage they want, and I’m quite confident that folks in the media will cite it in pop-sci articles. This will do nothing to aid global scientific literacy.

      Pre-pub peer review ensures that there is at least a minimum standard for publication. While expert reviewers may not always catch fraud, we can at least assess experimental approaches, appropriateness of conclusions, and plausibility of a hypothesis or experimental outcome. A reasonable counter-argument is that, with the proliferation of low-ranked pay-to-publish journals, anyone can already publish any garbage they want. However, with our current system, we have impact factors, and journals have reputations that (while far from foolproof) can generally be relied upon.

      A better solution would be to have both pre- and post-pub review, which we essentially now have with the introduction of comments on PubMed. In short, I guess I favor the way things are now. Now we just need to encourage people to be brave enough to actually use the PubMed comments, which show real names…

    2. Helping an editor make a decision is, indeed, all peer review does for a lot of people, but not everyone reduces peer review to this.
      IF peer review is part of a researcher’s work, then there is a lot more to it (holding referees accountable, recognizing good reviewing, accounting for a researcher’s contribution to the scientific community, etc.).
      Many contend that peer review is dead and that post-publication review can take its place.
      I don’t see this happening (I don’t like the idea of choosing what to read based on… what?), but this is a complex issue, and debate will continue.

      Yes, there are people who don’t understand, usually called “the others” or “them”.
      There are people who can grasp the essence of things and summarize it in very short phrases.
      And then there are people who discuss issues with less authoritarianism.

  2. Waste of time, and rather a non sequitur. Open, public, post-publication peer review is the best filter for what to read.

    1. I’d like to see journals at least have to report how many outside reviewers they engaged on an article, editorial, or letter [not including the editor(s) and other in-house copy editors, etc.].

      Repositories like MEDLINE should require this “R” number in order to index them, even if the journals don’t publish these numbers themselves. In MEDLINE, PubMed, etc., the R number could be given a dedicated field [e.g., Reviewer Info], just as is already done for [Author Info] directly underneath the authors’ names.

      Two reviewers are clearly not enough for a research article, but zero or one may be sufficient for a letter or editorial.
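      To make the proposal concrete, here is a minimal sketch assuming a MEDLINE-style flat-file record; the “RI” tag and its contents are invented for illustration, since no such reviewer-count field exists today.

      ```python
      # Toy sketch: a hypothetical reviewer-count ("R number") field in a
      # MEDLINE-style record. The "RI" tag is invented; MEDLINE has no
      # such field today.

      def parse_medline(record: str) -> dict:
          """Parse a MEDLINE-style flat record into a tag -> values dict."""
          fields = {}
          for line in record.strip().splitlines():
              tag, _, value = line.partition("- ")
              fields.setdefault(tag.strip(), []).append(value.strip())
          return fields

      SAMPLE = """\
      TI  - An example article
      AU  - Doe J
      AD  - Example University
      RI  - 3 external reviewers; 1 handling editor"""

      record = parse_medline(SAMPLE)
      print(record["RI"][0])  # -> 3 external reviewers; 1 handling editor
      ```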

      1. I know most don’t like it, but traditional peer review is dead; insisting on it is a waste of time. The quicker we move on to self-publishing moderated by open post-publication peer review (yes, where your errors are plain to see and you alone are responsible), the less time and money we lose.

  3. Please let me recommend the following Editorial, issued last year by François Diederich, and forwarded to me by a good colleague. Many interesting issues are addressed, like the declining quality of reviews received by editors:

    Are We Refereeing Ourselves to Death? The Peer-Review System at Its Limit
    Angew. Chem. Int. Ed. (2013), 52, 13828–13829
    http://dx.doi.org/10.1002/anie.201308804 (free access).

    François Diederich would probably not endorse the idea of a Review Quality Index, which would make things more and more complex. His main conclusion is: “Tasks and assessments that can be done within organizations on the basis of their competence and self-critical judgment should not be moved outside.”

    The evaluation of the process of manuscript reviewing suggested by Tamás Székely et al. seems to be an “evaluation of the evaluation.” So why not continue with the nested procedure and propose an “evaluation of the evaluation of the evaluation,” followed by an “evaluation of the evaluation of the evaluation of the evaluation”… and so on ad infinitum?
    Moreover, I suspect a Review Quality Index would be grossly misused, as journal Impact Factors were.

  4. Recently, I learned that a conscientious journal co-editor had to ask several potential reviewers to review a critical comment. It took a very long time to find one reviewer who was able to do it; that reviewer, in a very long review, confirmed all the points raised and recommended publishing without changes, but strongly recommended revising the authors’ insufficient reply. Maybe the journal was not very good at selecting the first reviewers of the original article, but it chose well in finding the last reviewer for my critical comment.

    My impression is that all journals in a field have to compete to select the best articles; sometimes they are not lucky, but it is up to the scientists, and their institutions, not to buy just any journal for their libraries.

  5. In my opinion, we need a new journal quality index (taking a retraction index into consideration), a journal review quality index, and much more. We need transparency in the system. Locally, the most transparent place is your laboratory. There, almost everyone knows about reproducible results and not-so-reproducible results. Unfortunately, some speak out only later.

    To increase transparency, I think we need a system like PubPeer. The creation of PubPeer is one of the greatest ideas since sliced bread. I guess there is not a single person who has not been surprised by errors discovered in papers and then discussed on PubPeer. This kind of system should be put more strongly in place. Instead of a paper simply being submitted to a journal, there should be a PubPeer-like stage so that papers can be checked by a bigger crowd, say for a few weeks. This would allow more transparency in the publication process.

    1. All of the suggestions made above are simply Band-Aids. I smell that this “venture” is just another marketing ploy to devise another system that is useless, that adds more noise to an already noisy system. And some company, or corporation, is going to benefit. I am extremely critical of Thomson Reuters’ impact factor, ORCID, LinkedIn, and all these other artificial fabrications that add noise to science and don’t actually have any real tangible value (much like the overvalued worth of those rocks made of carbon, i.e., diamonds). The Transparency Index and Peer Review Evaluation are just new, useless parameters that will be used by some as a tool to discredit others. There is only ONE tool that (unfortunately) can still provide quality control, and that’s peer review. However, not the useless system we have in place currently, which typically employs, for medium-range impact factor journals, one to three reviewers at most, but one which operates as a multi-step process (a toy sketch of the gated flow follows the list):

      STEP 0: the publisher must check the submitted manuscript for plagiarism, before, during and after the other steps listed below. This is NOT the responsibility of the peer reviewers. If plagiarism is then detected in a published paper, responsibility should be 100% that of publishers. No excuses that the software wasn’t sophisticated enough (enough of excuses by publishers, already!).
      STEP 1: pre-peer review in which the authors are responsible for having their plagiarism-free paper checked by at least three peers of their choice. Their identities are known, their qualifications are vetted and approved by the journal/publisher, and their identities are revealed in STEP 2.
      STEP 2: after addressing the pre-peer review requests, the paper is then passed on to an additional three peers selected by the editorial board. The whole process MUST be double-blind during peer review (i.e., the authors do not know the identity of the peer reviewers and vice versa).
      STEP 4: assuming that the manuscript has been accepted by 6 peers (3 known and 3 unknown, 3 open and 3 double-blind), then upon publication the names of all the peer reviewers MUST be published, including the name of the handling editor. There can be NO ROOM for anonymity, because anonymity breeds opacity. Anonymity serves only for reporting academic fraud.
      STEP 5: there must always be post-publication peer review as a supplement to the other steps listed above. PPPR is NOT an alternative, it is a supplement.
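      A minimal sketch, purely as an illustration, of how this gated flow might look in code; the similarity threshold, field names, and messages are all invented, and this is not the commenter’s actual system:

      ```python
      # Toy sketch of the gated, multi-step flow proposed above. The 20%
      # similarity threshold and all field names are invented.

      def review_pipeline(ms: dict) -> str:
          # STEP 0: publisher-side plagiarism screen, not the reviewers' job.
          if ms["similarity_score"] > 0.20:
              return "rejected: failed plagiarism screen"
          # STEP 1: three named, author-chosen peers from independent labs.
          if len(ms["author_chosen_approvals"]) < 3:
              return "held: needs three named pre-reviewers"
          # STEP 2: three editor-chosen peers, double-blind.
          if len(ms["editor_chosen_approvals"]) < 3:
              return "held: needs three double-blind reviewers"
          # STEP 4 (the proposal skips STEP 3): publish and disclose all
          # reviewer names plus the handling editor's.
          names = (ms["author_chosen_approvals"]
                   + ms["editor_chosen_approvals"]
                   + [ms["handling_editor"]])
          # STEP 5: post-publication review continues as a supplement.
          return "published; names disclosed: " + ", ".join(names)

      print(review_pipeline({
          "similarity_score": 0.05,
          "author_chosen_approvals": ["Peer A", "Peer B", "Peer C"],
          "editor_chosen_approvals": ["Peer D", "Peer E", "Peer F"],
          "handling_editor": "Editor G",
      }))
      ```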

      Current “best” models (in my opinion) of STEPS 1+4 are F1000Research and Frontiers (the transparency and accountability are good, but their publishing fees suck).

      Finally, throw out all these metrics. They are all rubbish: useless marketing tools that fake quality control. Simplify and fortify what we already have. The traditional peer review system is not a bad thing, but it needs extensive re-working and PATIENCE (impatience foments the breeding of errors).

      1. Sure, this requires brainstorming.
        Considering the above situation:
        Step 0: there are still errors left even after automated, software-based checking!! Do we want this?
        Step 1: it is confusing. I may have already discussed the paper with other PI colleagues…
        Step 2: this is the normal process followed, but it is not error-free.
        Step 3: ? (missing)
        Step 4: transparency
        Step 5: open to the public

        So at the end of the day we want the work to be shared with the public, so why don’t we start there? A system like PubPeer is in place. It needs to work in association with the journals, which should acknowledge the comments of the crowd.

      2. …it seems a paper would take ages of intricate nit-picking to get published, until the final opinion is rendered by post-publication peer review. It is already a challenge nowadays to find 1-2 (usually lazy) reviewers; imagine nine, whose names will be revealed at the end, with potential political costs. If I had to answer nine reviewers and correct the ms accordingly, I would just stick with the flawed traditional system.

        “qualifications are vetted and approved by the journal/publisher” — in principle this is how it should always be, but most of those involved do not care.

        The reviews themselves should be made public. With nine reviewers, the reviews would be longer than the paper, few would read them, and publishers would try to cut costs here.

        1. We have already seen the result of PUSH and PUBLISH. Impatience and laziness have already led to wasted time, money, jobs, PhDs, etc. As others have already said, it is time to rethink our peer review system. I don’t want to read fancy results if they are fake and irreproducible.
          We need brainstorming to change this system, and it is not going to be easy. We need to get to the bottom of the real problem:
          Is the journal a problem?
          Are the reviewers a problem?
          Is there plagiarism in the paper?
          Are there fake results?
          Are there irreproducible results?
          Irreproducibility is, for me, the biggest issue. How can we take care of this before publication?

          1. N and CR, some counter-comments about my 4/5-step proposals.
            1) Step 0: I have claimed many times before that Google Scholar plus checking the databases of most major publishers can already cut a lot of the risk. iThenticate is not perfect, and different journals use different cut-off values, so the problem is also the inconsistency of the publishers. I am critical of COPE because their paying members all use different values of “acceptable” plagiarism. Thus, the problem lies with the publishers. Trust me, scientists are pretty conformist once strict and logical rules are in place. It is the disorganization of the publishers that is messing us around. Finally, we should demand of commercial companies like iParadigms LLC, which owns iThenticate, Turnitin, etc., that they recognize a moral responsibility toward education, science, and society to make their software free to the world, rather as Google does a massive service to science and society. The costs of intellectual investment can be covered by advertising, or even by Ministries of Education, who should start to appreciate the bigger picture. The commercialization of ethics by COPE and iParadigms is the first part of the corruption in science publishing. Remove the rot associated with money and we will already remove the incentive to commit fraud (to some extent).
            2) Step 1: It is NOT confusing. See my concept of iPublish. Simple, effective. Not all will like it, but it is a classic case of “a friend in need is a friend indeed.” Discussion among inner-circle members is not good; your PI is not the most neutral advisor to the study. Peers can be known, but they must be from independent laboratories. The two associated problems are finding a sizeable pool of effective colleagues, and the issue of acknowledgements vs. financial remuneration.
            3) Step 2: This is NOT the normal process. There is most likely a 100% coefficient of variation between journals and publishers, and that’s where the problem lies. At the lower end sit the bottom feeders, the true “predatory” journals, OA or not, and at the upper echelon, Nature, Science, and the like. We have seen from RW that if even papers perceived as excellent in top journals (including JBC, PNAS, etc.) are being retracted for all sorts of reasons, then it is horrific to think of the academic “crimes” being committed among the filter feeders.
            4) Step 3 (sorry, I missed the numbers!).
            5) Step 4: Yes, trust, openness and transparency are the three key characteristics that are being eroded by the rot caused by a base built on sand, and increasing marketing principles that provide a fake veil of quality rather than true science principles.
            6) Step 5: agreed, it must be open access. Who covers the costs? I don’t know, but all I know is that I am tired of seeing a “double-taxation” on scientists. First taxed for their intellect (in the form of the paper), then taxed for publishing costs. No, no, no!

            CR, the problem is precisely that scientists are in too much of a rush to publish, for whatever reason. Nit-picking is essential. Details and precision are essential. No more room is allowed for sloppiness. Major errors deserve retractions, and minor errors that slip through this “nit-picking” net would simply require errata.

            “publishers would try to cut costs in this.” Isn’t it time we started demanding more of publishers rather than the other way around?

            My 4-5-step system would cover all of the problems you list, N (including (ir)reproducibility). Almost guaranteed.

          2. My point is exactly that publication should be the first step in commencing true scientific debate. By nit-picking I mean small, marginally relevant details, which make up most of reviewers’ preoccupations, especially if they want to seem effective. I really do not trust traditional/hidden peer review, as it polishes the paper prior to publication, making it glossier and more believable.
            In one of my last rounds of reviewing a paper, I found fundamental flaws in the biochemical methodology; the authors simply said they had “mistyped” the methods and “overlooked” some details, and rewrote the methods to fit my expectations. In almost all the papers I have reviewed, the authors were trying to push conclusions beyond what the data allowed, so as to make their paper “more interesting.” In every case the editor was useless in judging the arguments and wanted only to relay a YES or NO to the authors while displeasing them as little as possible.
            I am convinced that one reason the system is so corrupted (though it looks very fair from the outside) is that it is controlled by private companies that essentially care not, and know not, about science. The other is that it is hidden from public view.
            I believe more in post-publication peer review, as it is public and supposedly done by unmoderated scientists, and any issues fall solely on the authors and other scientists. More as in the old times.

          3. Sorry, just adding: since no one will polish our papers and errors will immediately be made evident, we will soon realise that each paper should be treated with the greatest care by no one other than ourselves. Thus everyone will be more careful before publishing, not the contrary. With self-publishing, good scientists will naturally tend to publish less, and more carefully, as their reputations will no longer be saved by allying their names and lies with multinationals of great repute, as happens today. I think it is useless to try to fight off self-publication in the digital era; doing so will only cause more delays and build up more false idols.

  6. Post-pub-only review is an obvious mistake. Just leave it as a white paper on your website if it isn’t accepted. No shame in that.

    I’m not sure a review-of-reviewers would accomplish much. Rather than trying to improve the review process through stronger incentives, reduce the burden that reviewers and journal editors place on the process. Cureus is an open-access medical journal with an average turnaround time of 7 days. Authors suggest reviewers for their papers, but the final selection is made by the editor. Once at least two reviewers are satisfied, the paper is published. No waiting months (years?) from submission to publication because of an archaic print version.

    Instead of rating the journal based on citations, each article published in Cureus is given a comment period during which other authors rate its quality. The resulting index value is indicated on the article itself.
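    For what it’s worth, here is a minimal sketch of how such a crowd-sourced article score could stay robust when there are only a few raters; the shrinkage scheme, prior, and weight are invented assumptions, not Cureus’s actual SIQ formula:

    ```python
    # Toy sketch of a crowd-sourced article score. NOT the real SIQ
    # formula: the prior mean and weight are invented. Shrinking toward
    # a global prior keeps one early rater from dominating the score.

    def crowd_score(ratings, prior_mean=5.0, prior_weight=10):
        """Average of 1-10 ratings, shrunk toward prior_mean."""
        return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

    print(crowd_score([9]))       # ~5.4: one vote barely moves the score
    print(crowd_score([9] * 50))  # ~8.3: many votes outweigh the prior
    ```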

    We need more journals like Cureus in other fields.

    1. I immediately went to the Cureus website: http://www.cureus.com/

      I am not impressed by some things I observed immediately: “we’ll help you publish in just a few days, complete with comprehensive peer review and editorial oversight.”

      That’s impossible. Companies and journals/publishers that make idiotic and surreal promises like that are termed predatory: they prey upon the naivety of the scientific base and on its weaknesses, i.e., the desire to publish quickly, by making false promises.

      Crowd-sourcing is good for other aspects of society. Keep it out of science. The system is sufficiently corrupted.

      Where is the central page of the company Cureus, Inc.*? Lack of transparency already casts a shadow on your efforts. Does your company take a cut of the crowd-source funds? Where do funds come from to maintain the site? Where and how are revenues generated? Are peer reviewers and editors paid?

      Finally, to give one example of why I think we’re being shammed by the Cureus project, look at a randomly selected paper: http://www.cureus.com/articles/2504-complementary-and-alternative-medicine—herbals-and-supplements–a-review-for-the-primary-care-physician#.U5xMTFqCjIU

      There is a small symbol with a hammer that states “Peer reviewed”. By whom? By how many? Who exactly? Where are the peer reports? Where are the peer reviewers’ names, affiliations, and COI statements? Peer review of this paper in less than one month? Without proof, sorry, there is no reason to believe that little marketing symbol.

      Call me cynical, but I am tired of seeing each new mushrooming project claiming to be better and more sophisticated than the last trial-and-error: “Our mission is to remove politics and bias from the publishing world.” Mission already failed (or of limited success).

      * http://investing.businessweek.com/research/stocks/private/snapshot.asp?privcapId=214646017

      1. You’re far more cynical than is warranted by your cursory examination of the site. Please do read more.

        I am not affiliated with the site in any way (I’m an economist, not a medical doctor); I simply found their model interesting. You’re looking for revenue, but I don’t think you’ll find it. Crowdsourcing is an extremely low-cost model. As long as the software and site are maintained (which is likely done through grant or institutional funding), there’s no cost involved.

        On what basis do you think the turnaround time could not be 7 days? If speed (obviously relative to the ridiculously sluggish 19th-century process followed by most journals) is the value proposition (with the obvious caveat that quality must be maintained), then editors have an incentive to select reviewers who are attentive. Since the editors also require authors to suggest reviewers, the author is able to suggest people who s/he knows are attentive to such requests.

        Staying locked in the 1800s is really not a viable option. The real world moves far too fast these days, and I don’t think folks with the proper incentives (i.e., non-academics) will be hindered by the current journal process. There’s no reason that accuracy and quality have to mean sluggishness (unfortunately, this seems to be the current paradigm, at least in my field).

        “There is a small symbol with a hammer that states “Peer reviewed”. By whom? By how many? Who exactly? Where are the peer reports? Where are the peer reviewers’ names, affiliations, and COI statements? Peer review of this paper in less than one month? Without proof, sorry, there is no reason to believe that little marketing symbol.”

        I’m not aware of any journal in my field that offers all of this information for everyone to see.

        “That’s impossible.”

        Why so?

        “Where is the central page of the company Cureus, Inc.*?”

        Should we only trust corporate entities when it comes to science? The internet (and modern society) is not so dependent on such structures anymore.

        From the Cureus FAQ:

        “What is our corporate address?

        Cureus Inc.
        c/o John R. Adler, Jr., MD
        Department of Neurosurgery
        Stanford University Medical Center
        300 Pasteur Drive
        Stanford, CA 94305

        Who are the members of the management team and where are they based?

        John R. Adler, Jr, MD, Founder & CEO
        Stanford, CA

        Alexander Muacevic, MD, Co-Founder
        Munich, Germany

        Christopher Barretto, VP of Engineering
        San Francisco, CA

        What are the publishing credentials of the editors-in-chief?

        Dr. Adler and Dr. Muacevic are professors and noted academics with considerable experience publishing and reviewing within peer-reviewed medical journals. Together they have nearly 2 decades of experience serving on the editorial boards of several medical journals.

        Who is individually responsible for the scientific quality of publications?

        Professor John R. Adler, Stanford University
        Professor Alexander Muacevic, Munich University
        The Cureus Editorial Board of more than 300 accomplished scholars and clinical leaders
        The Cureus Academic Council — a select group of world-renowned academics and past presidents from leading institutions including Stanford University, the Salk Institute, the University of Chicago, Johns Hopkins, the National University of Singapore and the American Medical Association
        The Cureus SIQ (Scholarly Impact Quotient) scoring process utilizes community crowd sourcing to accurately discern scientific quality”

        It appears they will use the same model nearly every other provider of “free” content on the internet uses: advertising.

        “What is our advertising policy?

        In the future, Cureus plans to introduce paid, targeted ads from BioPharma and healthcare institutions in order to continue providing an entirely free open access journal.”

        Some colleagues and I are starting a new journal in our field and intend to use a crowdsourcing model. I think people in post-doc positions and new faculty in particular will see the value in being able to publish in a few weeks instead of 12-18 months. I agree with Dan Zabetakis (first comment): people have so little understanding of the function of peer review and far too much faith in it. I hope this combination doesn’t keep scholarship from integrating with the 21st-century world.

  7. FWIW, I invite anyone who is interested to take a look at what we’re doing at http://www.pre-val.org. I’ve also previously published my own ideas on how a metrics-based approach to peer review might work (http://link.springer.com/article/10.1007%2Fs12109-013-9339-y). I suspect those in the “let’s do away with peer review” camp won’t change their minds, but I think these types of conversations and alternate approaches are valuable.

  8. While far from perfect, and probably not close to what the authors of this article have in mind, there are already attempts to provide quality assessment of reviews and journals. http://www.journalreviewer.org, for example, allows authors to evaluate the review process (including editors and reviewers) on a variety of quality indicators.
