Retractions take too long, carry too much of a stigma, and often provide too little information about what went wrong. Many people agree there’s a problem, but they often can’t agree on how to address it. In one attempt, a group of experts — including our co-founder Ivan Oransky — convened at Stanford University in December 2016 to discuss better ways to address problems in the scientific record. Specifically, they explored which formats journals should adopt when publishing article amendments — such as corrections or retractions. Although the group didn’t reach a unanimous consensus (what group does?), workshop leader Daniele Fanelli (now at the London School of Economics) and two co-authors (John Ioannidis and Steven Goodman at Stanford) published a new proposal for how to classify different types of retractions. We spoke to Fanelli about the new “taxonomy,” and why not everyone is on board.
Retraction Watch: What do you think are the biggest issues in how the publishing industry deals with article amendments?
“Why Do Scientists Fabricate And Falsify Data?” That’s the start of the title of a new preprint posted on bioRxiv this week by researchers whose names Retraction Watch readers will likely find familiar. Daniele Fanelli, Rodrigo Costas, Ferric Fang (a member of the board of directors of our parent non-profit organization), Arturo Casadevall, and Elisabeth Bik have all studied misconduct, retractions, and bias. In the new preprint, they used a set of PLOS ONE papers that earlier research had shown to contain manipulated images to test which factors were linked to such misconduct. The results confirmed some earlier work, but also provided some evidence contradicting previous findings. We spoke to Fanelli by email.
Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.
Despite concerns that “salami slicing” of research papers is on the rise under the “publish or perish” pressures of academic publishing, the study found that the productivity of individual early career researchers has not increased in the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged in Thomson Reuters Web of Science.
The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century”:
A new study suggests that much of what we think about misconduct — including the idea that it is linked to the unrelenting pressure on scientists to publish high-profile papers — is incorrect.
In a new paper out today in PLOS ONE [see update at end of post], Daniele Fanelli, Rodrigo Costas, and Vincent Larivière performed a retrospective analysis of retractions and corrections, examining supposed risk factors such as the “publish or perish” paradigm. The findings appeared to debunk the influence of that paradigm, among others:
Danish judges have overruled scientists in that nation, concluding that a panel of experts erred in finding that physiologist Bente Klarlund Pedersen, of the University of Copenhagen, was guilty of misconduct.
Last September, Pedersen announced that she would fight the ruling of the Danish Committees on Scientific Dishonesty (DCSD, Danish acronym UVVU), which had said she had committed misconduct in four of 12 articles it had examined.
One of the complaints we often hear about the self-correcting nature of science is that authors and editors seem very reluctant to retract papers with obvious fatal flaws. Indeed, it seems fairly clear that the number of papers that are retracted is smaller than the number that should be.
To try to get a sense of how errors are corrected in the literature, Arturo Casadevall, Grant Steen, and Ferric Fang, whose work on retractions will be familiar to our readers, look at the sources of error in papers retracted for reasons other than misconduct in a new paper in the FASEB Journal.
We’ve only covered one retraction from Nigeria. But as we’ve often noted, retraction rates don’t necessarily correlate with rates of problematic research, so the low number doesn’t really answer the question in this post’s title.