Would peer review work better if reviewers talked to each other?

Katherine Brown

Would distributing all reviewers’ reports for a paper amongst every referee, before deciding whether to accept or reject the manuscript, make peer review fairer and quicker? This idea — called “cross-referee commenting” — is being implemented by the journal Development as part of its effort to improve the peer-review process. Katherine Brown, executive editor of Development, based in Cambridge, UK, who co-authored a recent editorial about the change, spoke to us about the move.

Retraction Watch: Many journals share the reviews of a particular paper with those who’ve reviewed it. What is cross-referee commenting in peer review and how is it different from current reviewing processes? Continue reading Would peer review work better if reviewers talked to each other?

Here’s why more than 50,000 psychology studies are about to have PubPeer entries

PubPeer will see a surge of more than 50,000 entries for psychology studies in the next few weeks as part of an initiative that aims to identify statistical mistakes in the academic literature.

The detection process uses the algorithm “statcheck” — which we’ve covered previously in a guest post by one of its co-developers — to scan just under 700,000 results from a large sample of psychology studies. Although the trends in the present data have yet to be explored, previous research by Chris Hartgerink, the researcher behind the initiative, suggests that around half of psychology papers contain at least one statistical error, and one in eight contain mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and authors are alerted by email.
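
To give a sense of what such a check involves, here is a minimal sketch of a statcheck-style consistency test. It is written in Python for illustration (statcheck itself is an R package), and the function name, rounding rule and example values are assumptions rather than statcheck’s actual implementation: recompute the p-value implied by a reported test statistic and compare it with the p-value the authors reported.

```python
# Minimal sketch of a statcheck-style consistency check (illustrative only;
# statcheck itself is an R package with its own extraction and rounding rules).
from scipy import stats

def check_t_test(t_value, df, reported_p, decimals=2, alpha=0.05):
    """Recompute a two-tailed p-value from a reported t statistic and degrees
    of freedom, then compare it with the reported p-value, assuming the
    p-value was reported to `decimals` decimal places."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    inconsistent = round(recomputed_p, decimals) != round(reported_p, decimals)
    # A "decision error" is an inconsistency that flips the significance
    # conclusion at the chosen alpha level.
    decision_error = inconsistent and ((reported_p < alpha) != (recomputed_p < alpha))
    return recomputed_p, inconsistent, decision_error

# Example: a result reported as "t(28) = 1.50, p = .03" recomputes to p of about .14,
# so it would be flagged as both inconsistent and a decision error.
print(check_t_test(t_value=1.50, df=28, reported_p=0.03))
```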

The initiative is one of the largest post-publication peer review efforts of its kind to date. Some researchers are, however, concerned about its process for flagging potential mistakes, particularly the fact that potentially stigmatizing entries are created even if no errors are found. Continue reading Here’s why more than 50,000 psychology studies are about to have PubPeer entries

We’ve seen computer-generated fake papers get published. Now we have computer-generated fake peer reviews.

Eric Medvet

Retraction Watch readers may recall that in 2014, publishers Springer and IEEE were forced to retract more than 120 papers from conference proceedings because the papers were all fakes, written by the devilishly clever SCIgen program and somehow published after peer review. So perhaps it was inevitable that fake computer-generated peer reviews were next.

In a chapter called “Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews,” a group of researchers at the University of Trieste “investigate the feasibility of a tool capable of generating fake reviews for a given scientific paper automatically.” And 30% of the time, people couldn’t tell the difference. “While a tool of this kind cannot possibly deceive any rigorous editorial procedure,” the authors conclude, “it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds.”

We spoke to one of the chapter’s authors, Eric Medvet, by email.

Retraction Watch: In the paper, you test the feasibility of computer-generated fake peer reviews. Why? Continue reading We’ve seen computer-generated fake papers get published. Now we have computer-generated fake peer reviews.

Should systematic reviewers report suspected misconduct?

BMJ Open

Authors of systematic review articles sometimes overlook misconduct and conflicts of interest present in the research they are analyzing, according to a recent study published in BMJ Open.

During the study, researchers reviewed 118 systematic reviews published in 2013 in four high-profile medical journals — Annals of Internal Medicine, the British Medical Journal, The Journal of the American Medical Association and The Lancet. The authors also contacted the review authors with follow-up questions; 80 (69%) responded. The analysis examined whether the review authors had followed certain procedures to ensure the integrity of the data they were compiling, such as checking for duplicate publications, and whether the authors’ conflicts of interest might have affected the findings.

Carrying out a systematic review involves collecting and critically analyzing multiple studies in the same area. It’s especially useful for accumulating and weighing conflicting or supporting evidence from multiple research groups. A byproduct of the process is that it can also help spot odd practices such as duplication of publications. Continue reading Should systematic reviewers report suspected misconduct?

We’re blinded by positive results. So what if we removed them?

Mike Findley

The problem of publication bias — giving higher marks to a paper that reports positive results rather than judging it on its design or methods — plagues the scientific literature. So if reviewers are too focused on the results of a paper, would stripping a paper of its findings solve the problem? That was the question explored in a recent experiment by guest editors of Comparative Political Studies. Mike Findley, an associate professor at the University of Texas at Austin and one of the guest editors, talked to us about a new paper describing what they learned.

Retraction Watch: Can you explain what a “results-free” paper looks and reads like? Continue reading We’re blinded by positive results. So what if we removed them?

Recognize “gotcha” peer reviews? This editor can

Neil Herndon

Ever read a review where the editor or reviewer seems to be looking specifically for reasons to reject a paper? Neil Herndon, editor-in-chief of the Journal of Marketing Channels, based at the South China University of Technology in Guangzhou, has. In a recent editorial, Herndon calls this type of review “gotcha” peer reviewing, and presents an alternative.

Retraction Watch: What is “gotcha” reviewing?  What is its purpose and who is practicing it for the most part? Continue reading Recognize “gotcha” peer reviews? This editor can

From annoying to bitter, here are the six types of peer reviewers

Urban Geography

After two decades of submitting papers to journals, and more than 10 years of serving on an editorial board or editing journals, geography researcher Kevin Ward knows a thing or two about peer review.

Recently, as the editor of Urban Geography, he received a particularly “grumpy” and “obnoxious” review in his inbox, which got him thinking. Although, he says, the review raised “professionally appropriate issues,” its content and tone went well beyond what is widely accepted. Ward therefore decided to reflect on his two decades of experience and identify the different types of reviewers and their characteristics.

In all, Ward — from the University of Manchester in the UK — says he’s encountered six types of referees.

Here’s the first, according to his recent editorial published in Urban Geography: 

Continue reading From annoying to bitter, here are the six types of peer reviewers

Do publishers add value? Maybe little, suggests preprint study of preprints

ArXiv

Academic publishers argue they add value to manuscripts by coordinating the peer-review process and editing manuscripts — but a new preliminary study suggests otherwise.

The study — which has yet to be peer reviewed — found that papers published in traditional journals don’t change much from their preprint versions, suggesting publishers aren’t having as much of an influence as they claim. However, two experts who reviewed the paper for us said they had some doubts about the methods, as the study uses “crude” metrics to compare preprints to final manuscripts, and some preprints are updated over time to incorporate changes from peer reviewers and the journal.

The paper, posted recently on ArXiv, compared the text of over 12,000 preprint papers posted on ArXiv from February 2015 with their corresponding papers published in journals after peer review.
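
To picture the kind of “crude” metric the critics have in mind, here is a hypothetical illustration in Python of a simple word-level similarity measure between a preprint and its published version; the function and example text are assumptions for the sake of illustration, not the study’s actual method.

```python
# Hypothetical illustration of a crude text-similarity metric for comparing a
# preprint with its published version; not the method used in the study.
import difflib
import re

def word_similarity(preprint_text: str, published_text: str) -> float:
    """Return the ratio of matching words between two versions of a paper,
    ignoring case and punctuation (1.0 means identical word sequences)."""
    words_a = re.findall(r"[a-z0-9]+", preprint_text.lower())
    words_b = re.findall(r"[a-z0-9]+", published_text.lower())
    return difflib.SequenceMatcher(a=words_a, b=words_b).ratio()

# Toy example: one word changes between the two versions.
preprint = "We find that peer review changes the text of most papers only slightly."
published = "We find that peer review changes the text of most articles only slightly."
print(f"similarity = {word_similarity(preprint, published):.3f}")
```

A metric this coarse captures how much the wording changed between versions, but not whether those changes improved the paper.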

The authors report in their paper, “Comparing published scientific journal articles to their pre-print versions”: Continue reading Do publishers add value? Maybe little, suggests preprint study of preprints

Do interventions to reduce misconduct actually work? Maybe not, says new report

Elizabeth Wager and Ana Marusic

Can we teach good behavior in the lab? That’s the premise behind a number of interventions aimed at improving research integrity, which universities across the world, and even private companies, have invested in. Trouble is, a new review from the Cochrane Library shows there is little good evidence that these interventions work. We spoke with authors Elizabeth Wager (who is on the board of directors of our parent organization) and Ana Marusic, of the University of Split School of Medicine in Croatia.

Retraction Watch: Let’s start by talking about what you found – looking at 31 studies (including 15 randomized controlled trials) with more than 9,500 participants, you found some evidence that training in research integrity had some effects on participants’ attitudes, but “minimal (or short-lived) effects on their knowledge.” Can you talk more about that, including why the interventions had little impact on knowledge? Continue reading Do interventions to reduce misconduct actually work? Maybe not, says new report

What if we tried to replicate papers before they’re published?

Martin Schweinsberg
Eric Uhlmann

We all know replicability is a problem – consistently, many papers in various fields fail to replicate when put to the test. But instead of testing findings after they’ve gone through the rigorous and laborious process of publication, why not verify them beforehand, so that only replicable findings make their way into the literature? That is the principle behind a recent initiative called The Pipeline Project (covered in The Atlantic today), in which 25 labs checked 10 unpublished studies from the lab of one researcher in social psychology. We spoke with that researcher, Eric Uhlmann (also last author on the paper), and first author Martin Schweinsberg, both based at INSEAD.

Retraction Watch: What made you decide to embark upon this project? Continue reading What if we tried to replicate papers before they’re published?