Over the decades, the concept of peer review has changed dramatically – so what does the future hold? That’s a question examined in a new report issued today by BioMed Central and Digital Science, based on discussions held during the SpotOn London conference. (Disclosure: Our co-founder Ivan Oransky spoke there.) We spoke with Elizabeth Moylan, Senior Editor for Research Integrity at BioMed Central, and Rachel Burley, Publishing Director at BioMed Central, about the central question posed by the report: What will peer review look like in the year 2030?
Retraction Watch: People have many complaints about peer review. What do you think are its most pressing flaws?
Elizabeth Moylan and Rachel Burley: Although peer review has many flaws, it is still fundamental to the publishing process and is the best system we have for validating research results and advancing discovery. The greatest challenges are that it can be slow, inefficient, biased and open to abuse.
RW: The report offers many different types of solutions to address those flaws. You talk about increasing transparency with new models of peer review — can you provide an idea of what those might look like?
EM and RB: These can range from sharing the peer review content (transparent peer review) to fully open peer review (where both the content of the report and the reviewer name accompany publication of the article). But there are other ways to ‘open up’ the peer review process too, for example by naming the handling Editor, by allowing reviewers to comment on each other’s reports and by facilitating exchanges between authors and reviewers.
The medical journals in the BMC Series have all practiced open peer review since 1999, and increasingly other journals within BMC’s wider portfolio have adopted open peer review. We believe it makes the peer review process more accountable and reduces bias. It also allows reviewers to have recognition for their vital work, and is helpful for others seeking training in how to do peer review.
RW: You mention using Artificial Intelligence (AI) to find the best reviewers — could you explain more about that, and how it might work?
EM and RB: Editors are responsible for managing their own reviewer lists, which includes finding new reviewers. When time is pressing, it is often most practical for editors to use reliable peer reviewers they have used before, without taking a risk on a new reviewer, but this can quickly over-burden some reviewers. It can be difficult to find early career researchers to peer review as they don’t necessarily have an established publication record. Smart software is helpful here in identifying potential new reviewers from web sources that editors may not have considered. In the long term this could help increase the pool, and reduce the workload for existing reviewers.
RW: The report discusses other ways of using automation, such as to spot inconsistencies in the text reviewers might miss. Could you talk more about that?
EM and RB: In the report Chadwick DeVoss from StatReviewer notes that:
many of the current plagiarism algorithms match text verbatim. The use of synonyms or paraphrasing can foil these services. However, new software can identify components of whole sentences or paragraphs (much like the human mind would). It could identify and flag papers with similar-sounding paragraphs and sentences.
You could also use AI to check whether authors have failed to report key information, such as conflicts of interest, whether they have used the wrong statistical tests, or even whether they have fabricated data.
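As a toy illustration of the gap DeVoss describes (not a sketch of any real plagiarism-detection product), a verbatim substring check misses a paraphrased sentence, while even a loose character-run similarity from Python’s standard-library difflib still flags the pair. The example sentences and the 0.5 threshold are invented for this illustration.

```python
from difflib import SequenceMatcher

original   = "the treatment group showed a significant reduction in symptoms"
paraphrase = "the treated cohort exhibited a significant decrease in symptoms"

# Naive verbatim check: is one string contained in the other?
# Swapping in synonyms defeats this entirely.
verbatim_hit = original in paraphrase or paraphrase in original

# Looser check: proportion of matching character runs between the strings.
# Shared fragments like "a significant" and "in symptoms" still register.
similarity = SequenceMatcher(None, original, paraphrase).ratio()

print(verbatim_hit)      # exact matching finds nothing
print(similarity > 0.5)  # the fuzzier measure still flags the pair
```

Production tools go further, comparing sentence- or paragraph-level meaning rather than character overlap, but the principle is the same: the looser the notion of "match," the harder paraphrasing is to hide.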
RW: Which of your suggestions do you think might have the most impact on scientific integrity and reducing the amount of flawed research that is ultimately retracted?
EM and RB: No single development will provide all the solutions; it will be a combination of all of the above: increasing transparency of the peer review process, using technology (including AI) and verification tools to assist editors, and of course providing reviewers with the training, support and recognition they deserve for the important work they do.
RW: Finally, what’s the answer to the main question: What will peer review look like in 2030? (And why pick that date, specifically?)
EM and RB: The scientific publishing landscape has changed a huge amount in the last 15 or so years, including the advent of digital journals and open access. We envisage that by 2030, we may have seen another revolution in research publishing which could see huge benefits for academics. It’s far enough in the future that we might see radical improvements, but close enough that we could also think about incremental solutions to individual issues.
Our ambition is that peer review will be quicker, more efficient, with increased recognition and transparency. We also want to work towards a more diverse, and inclusive peer reviewer pool. All of these should help to reduce peer reviewers’ workloads.
Whether you’re a frustrated scientist, a peer reviewer, an editor, a publisher or a librarian, we would love to hear your views. Please do tweet us using #SpotOnReport, or email us (firstname.lastname@example.org).