One of the complaints we often hear about the self-correcting nature of science is that authors and editors seem very reluctant to retract papers with obvious fatal flaws. Indeed, it seems fairly clear that the number of papers actually retracted is smaller than the number that should be.
To get a sense of how errors are corrected in the literature, Arturo Casadevall, Grant Steen, and Ferric Fang, whose work on retractions will be familiar to our readers, have published a new paper in the FASEB Journal that looks at the sources of error in papers retracted for reasons other than misconduct.
One of the questions we often get — but are careful to answer with some version of “we don’t know because we don’t have a denominator” — is how retraction rates vary by scientific field and country. We’ve noticed that the reasons for retraction seem to vary among countries, but didn’t really have the data. A new paper in the Journal of the Medical Library Association by Kathleen Amos takes a good step toward figuring the country part out.
We’ve only covered one retraction from Nigeria. But as we’ve often noted, retraction rates don’t necessarily correlate with rates of problematic research, so the low number doesn’t really answer the question in this post’s title.
Today, PeerJ published Brookes’ analysis of the response to critiques on Science-Fraud.org. It’s a compelling examination that suggests public scrutiny of the kind found on the site — often harsh, but always based solidly on evidence — is linked to more corrections and retractions in the literature.
Brookes looked at 497 papers whose data integrity had been questioned either in public or in private. The papers fell into two subsets: a public set of 274 papers that were discussed online, and a private set of the remaining 223 papers that were not publicized.