More than one third — 35% — of the world’s top-ranked science journals that responded to a survey don’t have a retraction policy, according to a new study. And that’s a dramatic improvement over the findings of a similar study a little more than a decade ago.
As we’ve written previously, John Carlisle, an anesthesiologist in the United Kingdom, analyzed nearly 170 papers by Fujii and found aspects of the reported data to be astronomically improbable. It turns out, however, that he made a mistake that, while not fatal to his initial conclusions, required fixing in a follow-up paper, titled “Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials,” also published in Anaesthesia.
In a paper that might be filed under “careful what you wish for,” a group of psychology researchers is warning that the push to replicate more research — the focus of a lot of attention recently — won’t do enough to improve the scientific literature. And in fact, it could actually worsen some problems — namely, the bias towards positive findings.
Last month, the community was shaken when a major study on gay marriage in Science was retracted following questions on its funding, data, and methodology. The senior author, Donald Green, made it clear he was not privy to many details of the paper — which raised some questions for C. K. Gunsalus, director of the National Center for Professional and Research Ethics, and Drummond Rennie, a former deputy editor at JAMA. We are pleased to present their guest post, about how co-authors can carry out their responsibilities to each other and the community.
Just about everyone understands that even careful and meticulous people can be taken in by a smart, committed liar. What’s harder to understand is when a professional is fooled by lies that would have been prevented or caught by adhering to community norms and honoring one’s role and responsibilities in the scientific ecosystem.
Take the recent, sad controversy surrounding the now-retracted gay marriage study. We were struck by comments in the press by the co-author, Donald P. Green, on why he had not seen the primary data in his collaboration with first author Michael LaCour, nor known anything substantive about its funding. Green is the more senior scholar of the pair, the one with the established name whose participation helped provide credibility to the endeavor.
The New York Times quoted Green on May 25 as saying: “It’s a very delicate situation when a senior scientist makes a move to look at a junior scientist’s data set.”
Here at Retraction Watch, we are reminded every day that everybody (including us) makes mistakes — what matters is how you handle yourself when it happens. That’s why we created a “doing the right thing” category, to flag incidents where scientists have owned up to their errors and taken steps to correct them.
We’re not suggesting retractions have no effect on a scientist’s career — a working paper posted last month by the National Bureau of Economic Research found that principal investigators with retracted papers see an average drop of 10% in citations of their other papers, a phenomenon known as a citation penalty. But they face a bigger penalty if the retraction stemmed from misconduct, rather than an honest mistake.
A new study suggests that much of what we think about misconduct — including the idea that it is linked to the unrelenting pressure on scientists to publish high-profile papers — is incorrect.
In a new paper out today in PLOS ONE [see update at end of post], Daniele Fanelli, Rodrigo Costas, and Vincent Larivière performed a retrospective analysis of retractions and corrections, looking at the influence of supposed risk factors, such as the “publish or perish” paradigm. The findings appeared to debunk the influence of that paradigm, among others: