Peer review in 2030: New report hopes it’s faster, more transparent, and more diverse

Elizabeth Moylan
Rachel Burley

Over the decades, the concept of peer review has changed dramatically – so what does the future hold? That’s a question examined in a new report issued today by BioMed Central and Digital Science, based on discussions held during the SpotOn London conference. (Disclosure: Our co-founder Ivan Oransky spoke there.) We spoke with Elizabeth Moylan, Senior Editor, Research Integrity, and Rachel Burley, Publishing Director, both at BioMed Central, about the central question posed by the report: What will peer review look like in the year 2030?

Retraction Watch: People have many complaints about peer review. What do you think are its most pressing flaws?

Elizabeth Moylan and Rachel Burley: Although peer review has many flaws, it is still fundamental to the publishing process and is the best system we have for validating research results and advancing discovery. The greatest challenges are that it can be slow, inefficient, biased and open to abuse.

RW: The report offers many different types of solutions to address those flaws. You talk about increasing transparency with new models of peer review — can you provide an idea of what those might look like?

EM and RB: These can range from sharing the peer review content (transparent peer review) to fully open peer review (where both the content of the report and the reviewer name accompany publication of the article). But there are other ways to ‘open up’ the peer review process too, for example by naming the handling Editor, by allowing reviewers to comment on each other’s reports and by facilitating exchanges between authors and reviewers.

The medical journals in the BMC Series have all practiced open peer review since 1999, and increasingly other journals within BMC’s wider portfolio have adopted open peer review. We believe it makes the peer review process more accountable and reduces bias. It also allows reviewers to have recognition for their vital work, and is helpful for others seeking training in how to do peer review.

RW: You mention using Artificial Intelligence (AI) to find the best reviewers — could you explain more about that, and how it might work?

EM and RB: Editors are responsible for managing their own reviewer lists, which includes finding new reviewers. When time is pressing, it is often most practical for editors to use reliable peer reviewers they have used before rather than take a risk on a new reviewer, but this can quickly over-burden some reviewers. It can be difficult to find early career researchers to peer review, as they don’t necessarily have an established publication record. Smart software is helpful here in identifying potential new reviewers from web sources that editors may not have considered. In the long term, this could help increase the reviewer pool and reduce the workload for existing reviewers.
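
To make the matching idea concrete, here is a minimal sketch of how such software might rank candidate reviewers by the textual similarity between a submission and each candidate’s recent work. It is purely illustrative, not the tooling BMC or Digital Science describe; the names, abstracts and TF-IDF approach are all assumptions.

```python
# Illustrative sketch: rank hypothetical candidate reviewers by TF-IDF
# similarity between a submitted abstract and each candidate's recent
# publication topics. All data here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "We characterise off-target effects of CRISPR editing in zebrafish embryos."
candidates = {
    "Reviewer A": "Genome editing specificity and methods for off-target detection.",
    "Reviewer B": "Statistical models for longitudinal clinical trial data.",
    "Reviewer C": "Zebrafish developmental biology and targeted gene knockouts.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([submission] + list(candidates.values()))

# Compare the submission (row 0) against every candidate profile.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for name, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

A production system would of course draw candidate profiles from bibliographic databases and combine text similarity with other signals, such as declared expertise and conflict-of-interest checks.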

RW: The report discusses other ways of using automation, such as to spot inconsistencies in the text reviewers might miss. Could you talk more about that?

EM and RB: In the report Chadwick DeVoss from StatReviewer notes that:

many of the current plagiarism algorithms match text verbatim. The use of synonyms or paraphrasing can foil these services. However, new software can identify components of whole sentences or paragraphs (much like the human mind would). It could identify and flag papers with similar-sounding paragraphs and sentences.

You could also use AI to check whether authors have failed to report key information, such as conflicts of interest, whether they have used inappropriate statistical tests, or even whether they have fabricated data.
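
As a rough illustration of the kind of check DeVoss describes, a sentence-embedding model can flag paraphrased overlap that verbatim matching would miss. The sketch below is an assumption-laden example, not any journal’s actual pipeline; the model name, example sentences and 0.7 threshold are all illustrative.

```python
# Illustrative sketch: flag sentence pairs that are semantically similar
# even when few words match verbatim. Model and threshold are assumed.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

submitted = "The drug markedly lowered blood pressure in treated rats."
published = "Treated rodents showed a large fall in arterial pressure after dosing."

similarity = cosine_similarity(
    model.encode([submitted]), model.encode([published])
)[0, 0]

if similarity > 0.7:  # assumed cut-off; a real tool would tune this
    print(f"Possible paraphrase (similarity {similarity:.2f}); flag for the editor")
```

A real screening service would compare every sentence of a submission against a large corpus of published text and tune the threshold to balance false alarms against missed matches.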

RW: Which of your suggestions do you think might have the most impact on scientific integrity and reducing the amount of flawed research that is ultimately retracted?

EM and RB: No single development will provide all the solutions; it will take a combination of all of the above: increasing the transparency of the peer review process, using technology (including AI) and verification tools to assist editors, and of course providing reviewers with the training, support and recognition they deserve for the important work they do.

RW: Finally, what’s the answer to the main question: What will peer review look like in 2030? (And why pick that date, specifically?)

EM and RB: The scientific publishing landscape has changed a huge amount in the last 15 or so years, including the advent of digital journals and open access. We envisage that by 2030 we may have seen another revolution in research publishing, one that could bring huge benefits for academics. It’s far enough in the future that we might see radical improvements, but close enough that we could also think about incremental solutions to individual issues.

Our ambition is that peer review will be quicker, more efficient, with increased recognition and transparency. We also want to work towards a more diverse and inclusive peer reviewer pool. All of these should help to reduce peer reviewers’ workloads.

Whether you’re a frustrated scientist, a peer reviewer, an editor, a publisher or a librarian, we would love to hear your views. Please do tweet us using #SpotOnReport, or email us ([email protected]).

8 thoughts on “Peer review in 2030: New report hopes it’s faster, more transparent, and more diverse”

  1. No mention is made of paying peer reviewers for their consultant services. Volunteers, I have noted over the years, cut corners or decide for themselves how and how much of their services to provide. Editors under pressure from publishers are biased toward accepting favorable reviews. Peer reviewers understand the dynamic and can use it to ease the burden of doing a thorough job.

  2. “Our ambition is that peer review will be quicker, more efficient, with increased recognition and transparency.”

    While I agree wholeheartedly with the need for increased recognition and transparency, why is there such a need for speed in reviewing manuscripts? I argue that good science requires time, both in production and during the quality check.

    1. I agree with your statement that good science requires time during the quality check.

      However, much of the time taken to review isn’t spent on a proper quality check; the manuscript simply sits at the bottom of a large stack of other things to do, and generally is placed in the lower-right corner rather than the upper-right corner of the Covey Quadrant. The question, then, is how to move reviews up, which would at the very least make peer review better (because it is treated as important), and likely also faster (for the same reason).

      Also, there must be ways of improving the time from submission to actually sending a paper out for review. I’ve seen statistics for one journal that indicated a delay of around 3-4 weeks from submission to the start of review for quite a few of the submitted papers.

  3. At least in the federal agency that I worked for, any review appearing under our name would have had to go through an extensive clearance process before we would be allowed to submit it. The clearance procedure could be lengthy, taking longer than a journal would find acceptable, and might well require changes. And a review that expressed opinions might well not be cleared. Thus, for practical purposes, we could not participate in open peer review.

  4. Surely this is only a problem for readers who treat peer-reviewed articles as having some halo of authenticity? What about personal integrity (reader and author)? As is obvious from the need for Retraction Watch, lots of rubbish gets published, even in so-called leading journals. Reader beware!

  5. The main problem with peer-review is that authors can ignore it.
    Consider this:
    An author submits a paper to a journal; peer review identifies significant problems with the analysis that call into question the validity of the results, and the journal rejects the paper.
    The author re-submits the manuscript in its original form to another journal, the peer-review fails to identify the analysis issues, and the manuscript is accepted for publication.

    In the medical/clinical research area this sort of thing is exceedingly common, and it will continue to be so unless journals have access to all the peer review a manuscript has previously received.

    1. Or the new peer review identifies the shortcomings in the first peer review. Indeed, the availability of open peer review improves the overall process.

  6. In the interview, the role of artificial intelligence (AI) is also highlighted as a means to detect scientific misconduct, making it part of the struggle between the good and the bad.
    AI could also be constructive if it could automatically generate the routine parts of a paper, such as the introduction and the materials and methods section. An author would then provide a link to this generated material and add their unique scientific aim, unique results and unique conclusions. This would make the production, processing and reading of papers more efficient.
