Make reviews public, says peer review expert

Irene Hames

After more than 30 years working with scholarly journals, Irene Hames has some thoughts on how to improve peer review. She even wrote a book about it. As the first recipient of the Publons Sentinel Award, Hames spoke to us about the most pressing issues she believes are facing the peer review system — and what should be done about them.

Retraction Watch: At a recent event held as part of this year’s Peer Review Week, you suggested that journals publish their reviews, along with the final paper. Why?

Irene Hames: I don’t think that saying something is ‘peer reviewed’ can any longer be considered a badge of quality or rigour. The quality of peer review varies enormously, ranging from excellent through poor/inadequate to non-existent. But if reviewers’ reports were routinely published alongside articles – ideally with the authors’ responses and editorial decision correspondence – this would provide not only information on the standards of peer review and editorial handling, but also insight into why the decision to publish has been made, the strengths and weaknesses of the work, whether readers should bear reservations in mind, and so on. As I’ve said before, I can’t understand why this can’t become the norm. I haven’t heard any reasons why it shouldn’t, and I’d love the Retraction Watch audience to make suggestions in the comments here. I’m not advocating that the reviewers’ names should appear – I think that’s a decision that should be left to journals and their communities.

If publishing reviewer reports were to become the norm, the other enormous benefit of this transparency would be that we’d easily and without complicated procedures or checklists expose ‘predatory’/questionable journals. Those with inadequate peer review and poor editorial decision-making would also be revealed. Many editors/journals are finding it harder to recruit reviewers, and there are concerns that this may be affecting quality. At the Peer Review Week panel in Chicago in September, I mentioned the statistic from Publons that the average length of a first-round verified peer-review report in their database was 457 words (median 321). I found this surprising and a bit concerning. Is that enough to do a proper review? Not just to indicate the quality of the work or level of enthusiasm for publication in selective journals, but to provide comprehensive feedback to authors. This is a key feature of high-quality peer review, and not only helps authors improve/rescue their current paper, but also can guide the direction of their research, and so help the research and scholarly efforts of a discipline. This may sound idealistic, but in my 20 years as a managing editor, during which time I oversaw many thousands of manuscripts through peer review and decision, I saw this in action.

RW: How else could the peer review process in academic publishing be improved?

IH: Basically, by making sure systems, processes and policies are fit for purpose so that they help achieve rather than impede efficient and appropriate peer review. This involves reviewing all these regularly, amending them when necessary, and ensuring consistency of information across the journal and manuscript management systems. Very importantly, new issues, developments, and innovations need to be discussed at the editorial board level early on and policies developed. These then need to be communicated broadly, using clear simple language that everyone can understand. If they aren’t, messy situations can arise that cause upset, damage relationships, and introduce delays all round. Preprints and what can be done with peer reviews are recent examples.

There needs to be adequate due diligence, especially when selecting and approaching potential peer reviewers. This shouldn’t need to be said, but the fake peer review cases have thrown up many instances where this hasn’t occurred. I was shocked back in 2012 when the first cases came to light, and now that over 500 retractions due to fake reviews have been reported by Retraction Watch, I’m quite stunned.

Finally, I think there needs to be real thought put into what we can realistically expect individual reviewers to do, and how a comprehensive review can be achieved. Journal articles are nowadays more than just a written paper – they have associated research outputs, e.g. datasets, software, code, protocols, preprints.

RW: You’ve been an advocate for training researchers to peer review articles. Why is that so important, and what might such training look like?

IH: It’s important because being a peer reviewer is an important job – peer reviewers’ assessments and recommendations guide editorial decisions, which in turn influence job appointments, promotions, and grant funding. Peer reviewers also help keep the scholarly literature sound. Up until now there’s been little formal training. That’s changing and many societies and organisations are introducing peer-review training for their communities. This is a good thing, but I worry that some is rather superficial, and also that researchers are in some cases being informed by people in marketing departments with little knowledge of the processes and issues, let alone actual experience.

What should the training look like? The finer details will vary from discipline to discipline, but the basic principles will be similar across all. The training should cover practical and ethical aspects across the whole process: how to respond to peer-review invitations, how to approach doing a review, how/when to communicate with journals during the review process, how to write a helpful and constructive report, how to deal with ethical issues and concerns. But it’s not just reviewers who need training – the same applies to editors. These individuals wield a lot of power, but often come to their roles with little or no experience of managing peer review.

Training in peer review should be part of the wider training in research integrity and publication ethics. Often, problems with research integrity don’t come to light till work is submitted for publication or published – you’ve only got to look at Retraction Watch posts to see the range and scale of problems we’re seeing. Dealing with these is stretching editors, editorial offices, and resources. I know from personal experience that there is considerable lack of knowledge and/or understanding of good research and publication practice among early-career researchers. Incorporating training on this for researchers at an early stage – at postgraduate level – will help minimize the problems editors and journals have to deal with.

RW: Congratulations on being the first recipient of the Publons Sentinel Award. (We were honored to be nominated, as well.) How does it feel to have your decades of work towards improving peer review be recognized in this way?

IH: I was deeply honoured to get the award, and actually very surprised when I was told, because of who else was on the shortlist. I’d like to congratulate all the others for the great contributions they’re making to peer-review innovation and accountability, and to keeping standards high. I was, though, really delighted to get the award at this time because next year, after 40 years working in scholarly publishing/communication, I’ll be moving to near-full-time retirement so it’s a very nice way to finish my career.


31 thoughts on “Make reviews public, says peer review expert”

  1. Famous reviewers have identifiable styles. So, unless they take pains to disguise themselves, those experienced in a field will know who they are, which is a good thing in my estimation.

    1. I’ve come across reviewers adopting different styles for this reason. Also heard of cases where the style adopted is done to lead authors to assume someone else has done the review. Latter isn’t ethical, former is fine I think if just done to aid anonymity in a blinded process. In my experience, though, authors attempting to guess a reviewer’s identity are often wrong. Also, someone who they’ve requested not be a reviewer can produce the most generous and constructive review.

  2. Though I don’t mean to disrespect Irene’s many years of experience, I’m not yet convinced that the quality of peer review can be improved by publishing reviewers’ reports. If you’re in the field, you will know which are the predatory/questionable journals. And if you watch carefully who the well-known researchers in the field are, you can guess which journals are predatory/questionable. I think the rise in predatory/questionable journals is due to the academic world’s pressure to publish, with those in management counting the number of published papers as a measure of staff performance.

    Similarly, I think peer-review reports are short because it is truly volunteer work. Many researchers get no credit from their place of employment for performing this academic service (well, unless it is for one of the few “famous” journals out there). Likewise, some authors might abuse the system, since it costs them nothing to submit a paper. Quite frankly, authors no longer need to check whether their manuscript is “ready”; they can submit it to a journal and let others check it for them. She mentions the phrase “helps authors improve/rescue their current paper” — I agree, but shouldn’t authors bear some responsibility, too? I’ve peer-reviewed some awful papers, and I guess the authors just wanted to “test the waters”.

    I think I would like to see a mix of journals. Some with no reviewer reports made available, some with them made available, and some with them as well as reviewer names made available. And then let authors (and readers!) decide. If you go in and eliminate predatory/questionable journals (which I think *is* the right thing to do), then you also need to deal with how some universities and research centres are managed. Perhaps the former is the problem, but maybe the latter is the cause? A pool of double-blind reviewers would be nice, too. That is, sending the manuscript to one reviewer who isn’t directly in the area. This might minimize the chance of authors suggesting their friends as reviewers.

    Indeed, I agree there are many problems and I’m hesitant to believe that the solution can be summed up in a phrase such as “more open-access journals” or “more open reviews”. I think the problem is more complex and maybe the different solutions have to be tried before we even begin to know which method yields better quality research output.

    …perhaps a “meta-research study” is needed to see which way of publishing research is the best?

    1. I agree the scholarly publication arena is a complex one with many problems. There is no magical single solution, and what will work for one individual journal or initiative won’t for another. It’s great to see an increasing number of journals, societies and organisations being open-minded and experimenting with different approaches. They can all, of course, do whatever they want, but I agree with you that the research communities they serve can, do and will choose where they submit, also who they review for.
      The predatory/questionable journals issue is another complex area, more complicated than was originally thought. It’s easier to spot them within one’s own fields, but it can be difficult outside of those, and in certain situations: for example, for early-career researchers without good mentors; for researchers in the global south (many legitimate journals there can appear ‘predatory’ when assessed with some of the checklists being created; INASP is doing a lot of good work in this area); and for mixed-discipline/management position and promotion panels assessing people’s CVs. I myself had problems with the last one recently – it took me a long time to go through a bibliography for someone in a different discipline when I had some concerns.

      1. Thanks for replying, Irene!

        About the issue of predatory/questionable journals, I agree that not everyone may have good mentors, but I don’t quite agree with the part about it being “difficult to spot if it’s outside of one’s field”. I know this seems harsh, but I say that for two reasons…

        First, it’s like anything else in life — like buying your first car or whatever: there’s a fairly good chance you’re going to be ripped off. If there are car salespeople who rip off unsuspecting first-time buyers, then why can’t there be journals that rip off unsuspecting first-time authors? We might think that academia is different from buying a car, but at the end of the day the common factor is that they’re people and… sadly, people like ripping off other people. It’s sort of human nature. While I think the government can step in and eliminate the really bad car dealerships, some are just good enough to remain legal. So yes, they’ll find someone to rip off. We all have skills to detect when we’re about to be ripped off, but those skills are built up over time. Someone who submits to a predatory/questionable journal will learn.

        The other reason is that I think it is very rare that someone publishes in a completely new area. A co-author is sometimes needed or s/he can ask someone in their department. There will be someone they can ask…if there isn’t in the entire workplace, then they aren’t in a very good workplace and one can ask if the workplace is itself a predatory/questionable institute. 🙂 Joking aside, if you’re new to a field and your first attempt at publishing yields an accepted paper and they want you to pay first, then red flags should go off. It is no different from buying that car or any other life decision we have to make.

        I think my point is that even with the red flags going off, some people still choose to give their credit card number. Why? It’s because of the pressures from their workplace… So addressing predatory journals while ignoring why such people succumb to them seems to be looking at only one half of the problem. The harder half is really difficult to solve.

        On an unrelated note, there are some places where you get many unsolicited phone calls and other places where you get very few. I think the difference between them is whether the spammer has to pay for the phone calls. It would be “nice” if authors had to pay some amount to have their paper reviewed: an amount that is deducted from the publication charge if the paper is accepted, and donated to their favourite charity if it is rejected (since if the publisher kept it, that could be a predatory journal again). Or to the favourite charity chosen by the reviewers? Some variation of this would be nice to see.

        Anyway, don’t get me wrong! It’s great that you’re asking interesting questions and even replying to comments here! I’m somewhat dismayed with open-access journals, for example — not with the model but that some believe it is *the* solution and it gets replicated by many. It isn’t the final solution…but maybe a step forward where we can see what works and what doesn’t work. Thank you!

        PS: I mean no disrespect to the occupation of selling cars. It’s just the first example I thought of which is associated with a “major” purchase that some do when they are young (especially a used car).

  3. As long as a Journal charges outrageous subscription fees why would a reviewer volunteer hours of free labor to that journal?

    1. It’s very much a reciprocal process so if someone submits work to a journal it wouldn’t really be fair if they didn’t also do some reviewing for them if approached. But otherwise reviewers are free to give their labour to whomever they wish. These days when many editors and journals are finding it harder to find reviewers they all need to work harder to earn their respect and agreement to review. I’ve, unfortunately, heard of some pretty bad and dismissive behaviour on the part of some (legitimate) journals.

  4. Re: ‘average length of a first-round verified peer-review report in their database was 457 words (median 321). I found this surprising and a bit concerning’. We should not equate quantity with quality: ‘Brevity is the Soul of Wit’ Hamlet, Bill Shakespeare, Act-2, Scene-2, line 90

    1. I absolutely agree that quantity can’t be equated with quality! eg 4 pages of trivial or copyediting comments aren’t what most editors are looking for from a reviewer’s valuable time (but reviewers should be given direction about that). A review of 457/321 words can be enough, especially when papers and the work being reported are good (or when they’re very bad – and reviewers are seeing some of which shouldn’t even have been sent out by journals), but complex submissions that are interesting, have potential etc, but where there may be some issues can take much longer. Getting proper constructive feedback is especially important when a submission is for what is likely to become a seminal paper.

    2. A note on Publons word counts: It’s not clear to me whether that average/median includes the peer reviews for which Publons sees only a receipt and not the content. I use Publons, and my “average” is very inaccurate because some of the journals I have reviewed for don’t automatically submit to Publons. In those cases Publons gives reviewers the option of forwarding the “thank you” email from the journal to verify a review, but the null word count for these appears to be averaged in with everything else. So those stats may not be entirely reliable.

      1. Thank you, Marsha, for posting this important bit of information. I asked Publons a while ago for details on how the figures had been derived and also, if possible, for breakdowns by various indicators. They were going to look into the things I’d asked for/suggested, plus other interesting things. I’ll contact Andrew Preston and see if he can provide an update here on where they are with this.

        1. Hi both, a quick response:

          > I use Publons, and my “average” is very inaccurate because some of the journals I have reviewed for don’t automatically submit to Publons. In those cases Publons gives reviewers the option of forwarding the “thank you” email from the journal to verify a review, but the null word count for these appears to be averaged in with everything else.

          We do exclude empty reviews from statistics. In your case it looks like some of your reviews have content and some don’t; we’ve only counted the reviews that have content. You can tell this by hovering over the review length statistic on your page. You will see a popup showing the number of reviews we’re including in the calculation.

          If you think there is still an error in our calculation then please let me know and we can look into it further.

          Irene, more generally we will post a blog with more details on review statistics in the new year.

  5. I have long maintained that the reviewers who recommend a publication should have their names identified — they should take ownership of that recommendation (Physics Today, Sept. 1995, pg 125). I believe this will improve review quality as no one wants their name associated with a paper that has errors, particularly ones that should have been detected by an observant reviewer.

    While many may believe they can determine authorship by style, in my experience, many authors have accused me of writing reviews I did not, and I have listened to authors tell me that someone else had written a review that I myself had written. The one exception is the reviewer who demands the authors cite a number of publications, all from the same group.

    1. Well, even in that case, there might be exceptions. I once reviewed a paper where one of the other reviewers suggested citing two of my papers, and I am pretty sure he was not a co-author. Luckily, his review was much more than that; otherwise I would have contacted the editor to clarify the situation (if possible).

    2. This seems like a reasonable thought (and I agree, sort of), but I’m not sure it is possible to implement for a big reason:

      (1) This would significantly increase the time that a review takes. If my name were associated with the paper, I would feel obligated to run stats, etc. on it (assuming the data is there). This would mean fewer people would accept reviews (which is already a problem in my field, at least).

      And I have further reservations because:

      (2) I think it would make reviewers way more negative and probably squash creative innovation, especially on non-experimental papers. Not that they wouldn’t get published, but they might take an extra few tries, and that is a waste of everyone’s time and bad for moving science forward.

    3. I see a major problem with respect to risk/reward. It is already hard to find people willing to spend their time on reviewing other people’s work without being rewarded for it. I cannot imagine who would be willing to do that and at the same time accept the risk of being blamed for having made mistakes in the reviewing work they did for free.

  6. Tanacetum,

    I disagree with the notion that more negative reviewers would probably squash creative innovation. Given the volume of poor quality and biased research that gets published, how can extra tries be a waste of everybody’s time? Shouldn’t “science” be about discovery of verifiable facts? Glasziou and Chalmers estimate that 85% of health research is a “waste.” http://blogs.bmj.com/bmj/2016/01/14/paul-glasziou-and-iain-chalmers-is-85-of-health-research-really-wasted/.

  7. A potential problem in the short term is that authors may be embarrassed when comments to the effect that they cannot spell or add simple numbers (which is remarkably common in what I review) are made public; a potential benefit in the long run is that authors may then take the trouble to properly proofread their manuscripts and check their calculations.

  8. There is a simple solution.

    If scientists are largely university based, using taxpayers’ and charity money, it is simply astonishing that billions of dollars can be spent using what has now become a secret society of reviewers, often leading to a friend-of-a-friend scenario: “I will scratch your back if you scratch mine”. Publications are the backbone of a successful grant application. Universities espouse openness and transparency, and yet their most senior academics, using taxpayers’ and charity money for their research, do not support or implement such policies, nor, I believe, will they ever. Why should they? The system as it is works fine for them.

    Perhaps the reason why Glasziou and Chalmers estimate that 85% of health research is a “waste” as John H Noble Jr pointed out, above, is due to this unique phenomenon.

    My proposed solution is simple: all grant-awarding bodies, such as the NIH, MRC, Wellcome Trust, etc., should make it a condition of application and award that all reviewers of the publications arising from their funded research (and cited in grant applications), and all reviewers of the submitted grant applications, be named and their full reviews published. This is the only way to a truly transparent system.

    Here in the UK in 2005, grant bodies adopted a policy to create equality of employment for women, and it is, by and large, working well. All universities here have an Athena SWAN Charter, without which they would not be allowed to apply for many grants at all. This was government led, and the universities had no choice but to implement it.

    An open review system is the future, but the faster it arrives the better it will be for all.

    1. The problem is that whereas it is easy for funding agencies to make demands of the researchers that they fund and hence the authors of the papers, they are in a poor position to make demands of the reviewers. Imagine that, for example, MRC demanded that I as an unpaid reviewer of a manuscript in a country other than the UK must make my review report openly available with my name on it. Why would I accept that demand? How would they find reviewers if many people said no to their demands?

      1. I imagine you may accept the demand in the interests of open, honest science where transparency is at the forefront. There may even be post-review discussion of grant applications on the internet. It’s always good to discuss science; reviews of papers and grants are an integral part of science, and there is simply no need to maintain the status quo of nameless, secret reviews, which have all but made peer review a non sequitur.

        Irene Hames sums this up nicely above: “I don’t think that saying something is ‘peer reviewed’ can any longer be considered a badge of quality or rigour”

  9. Some of the comments made above are from academics who live in, or believe in, a utopian world. In the real world you can have your academic career ruined just for making a negative comment on a paper by a “top dog”. Not to mention the many countries where freedom of speech does not even exist. Only those who are already retired can afford such luxury.

    1. Thanks, Pedro, for presenting this perspective. It’s one of the reasons I don’t advocate named open reviews, just anonymous ones. Not all journals are clear about what type of peer review and degree of openness they practice, or whether making reviewers’ names known to the author/s or publishing reviewers’ names with their reviews is an absolute policy or up to the people involved as authors and reviewers. If anyone is ever in any doubt – especially those who might be harmed by the release of certain information, for example because of lack of freedom of speech or career stage – they shouldn’t be afraid to ask for clarification.

    2. Pedro, if this is the case, I would suggest such “top dogs” need naming and shaming. We have all heard of the Harvey Weinstein scandal, it only takes a few brave individuals to out such bullies and create an establishment change.

      This, I believe, will make science better. This makes our societies better.

  10. Andrew Paterson
    Re: ‘average length of a first-round verified peer-review report in their database was 457 words (median 321). I found this surprising and a bit concerning’. We should not equate quantity with quality: ‘Brevity is the Soul of Wit’ Hamlet, Bill Shakespeare, Act-2, Scene-2, line 90

    Brevity is the soul of wit, but once you subtract from that brevity the salutations and boilerplate about the paper, the enthusiastic comments about why it should be published, and why they should refer to your paper of 1922, very little is left for substantial comment, especially if there is no formal method to indicate which bits of the paper have been scrutinized.

    Brevity is great for making one pointed comment about the paper.

    It is terrible for indicating that you read and considered in detail references 1–20, verified that the assertions made about those papers were correct, understood properly the scaling of any metrics used in the article, and checked whether they were used in impermissible ways (taking the mean of a log distribution, for example).

    The assertion that brevity is good implicitly assumes that peer reviewers carefully read all aspects of the paper, and their conclusion is based on that.

    Rather, some reviews (at least among the open ones I have read) seem to make simple errors due to skimming the paper and to ignore the ‘hard parts’ entirely, commenting only on the discussion section and abstract – especially if those agree with the reviewer’s own work.

    And yes, this response implies that peer review as a free service has severe problems, expecting people to peer review papers to this level of detail is hard.
    But, what it effectively means is if two (or perhaps three) reviewers skip over the hard parts, no peer review has been done in a meaningful way.
    Open review helps in that you can at least get a sense for if the peer reviewer even glanced at sections which you have questions about in a paper.

  11. Irene Hames’s paper is good and makes some important points, but I wonder if she (and your readers) is aware that EMBO Journal is doing just what she advocates (publishing referees’ reports and also editorial correspondence). They call this the “transparent publishing process”. Maybe you should have a look at recent issues of this (excellent) journal…

    1. Thanks for raising this here Bertrand – yes, the EMBO journals have been doing a superb job with their transparent peer review process for many years and I often mention and commend them for it. I also direct early-career researchers there to gain insight into reviews, how authors respond to them, what editorial correspondence looks like.
      An example can be found via this link for anyone who’s interested – just click on the ‘Review Process File’
      http://emboj.embopress.org/content/early/2017/11/17/embj.201796484.transparent-process

  12. I wholeheartedly agree that reviews should be made public. It is crucial to know the arguments that underlie the decision to admit a paper to the scientific literature. At Collabra: Psychology, we give authors the option of open reviews. It is encouraging to know that more than 80% of authors opt for open review. The reviews can be anonymous. This is important, but also cause for concern, because early-career researchers (as well as some more senior researchers) are worried about career consequences when they sign their reviews.

  13. As noted by Bertrand Jordan, all four EMBO Press journals have had voluntary transparent review for over 8 years now – referees are welcome to sign, but don’t have to, and importantly we also extend transparency to show all editor–author interactions in unedited form with the dates assigned. In our view this has a number of very concrete advantages: 1) it adds three expert views of a dataset, orthogonal to the authors’; 2) it is a great self-teaching tool for referees; 3) it adds accountability for referees, authors and editors by opening the black box of the editorial process. We also have a number of other related policies, such as encouraging mentored co-review by named postdocs in the referee’s lab, removing ‘confidential comments’ from reports, and emphasizing free-text reports over multiple-choice answers.
    Editorial decisions are further enhanced by referee Cross-commenting and author preconsultations.
    We are very interested in providing greater credit to referees for their crucial work.
    Read more @The EMBO Journal (2010) 29, 3891-3892 and The EMBO Journal (2014) 33, 1-2 (http://emboj.embopress.org/content/29/23/3891; http://emboj.embopress.org/content/33/1/1)

  14. We are all missing important points here. Value added, fiduciary compensation, equity, equality, profits, etc. Nothing is being said about these as a substantial part of the conversation.

  15. One aspect of making reviews public is that it could help students or novice readers further evaluate and understand research and the process leading to its publication. As a librarian, I’ve seen many students who equate “peer reviewed” with being “fact” which oversimplifies the context of research, the process of developing new knowledge, and this idea of “scholarship as conversation.”
