At least one-third of top science journals lack a retraction policy — a big improvement

More than one third (35%) of the world’s top-ranked science journals that responded to a survey don’t have a retraction policy, according to a new study. And that’s a dramatic improvement over the findings of a similar study a little more than a decade ago.

For the new paper, “Retraction policies of top scientific journals ranked by impact factor,” David Resnik, Grace Kissling, and Elizabeth Wager (a member of the board of directors of The Center For Scientific Integrity, our parent non-profit organization) surveyed 200 science journals with the highest impact factors about their retraction policies. About three-quarters provided the information.

Weekend reads: How to publish in Nature; social media circumvents peer review; impatience leads to fakery

The week at Retraction Watch featured a look at why a fraudster’s papers continued to earn citations after he went to prison, and criticism of Science by hundreds of researchers. Here’s what was happening elsewhere.

Half of anesthesiology fraudster’s papers continue to be cited years after retractions

In yet more evidence that retracted studies continue to accrue citations, a new paper has shown that nearly half of anesthesiologist Scott Reuben’s papers were still being cited five years after being retracted, and only one-fourth of those citations correctly note the retraction.

The new paper appears in Science and Engineering Ethics.

To catch a cheat: Paper improves on stats method that nailed prolific retractor Fujii

The author of a 2012 paper in Anaesthesia that offered the statistical equivalent of coffin nails in the case against record-breaking fraudster Yoshitaka Fujii (currently at the top of our leaderboard) has written a new article in which he claims to have improved upon his approach.

As we’ve written previously, John Carlisle, an anesthesiologist in the United Kingdom, analyzed nearly 170 papers by Fujii and found aspects of the reported data to be astronomically improbable. It turns out, however, that he made a mistake that, while not fatal to his initial conclusions, required fixing in a follow-up paper, titled “Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials,” also published in Anaesthesia.
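To give a flavor of how such a check can work, here is a minimal Monte Carlo sketch, ours rather than Carlisle’s published method, using invented trial numbers and a hypothetical helper baseline_p (it assumes numpy and scipy). The idea: under genuine randomisation, the chance that two groups drawn from one population differ at baseline by at least the reported amount should be uniformly distributed across trials, so p-values piling up near 1 (groups that are suspiciously similar) are a red flag.

```python
# A minimal Monte Carlo sketch (ours, not Carlisle's published method) of
# checking whether reported baseline data are consistent with random sampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def baseline_p(mean_a, sd_a, n_a, mean_b, sd_b, n_b, sims=20_000):
    """Estimate P(two random samples from one population have group means
    at least as far apart as reported) by simulation."""
    pooled_mean = (n_a * mean_a + n_b * mean_b) / (n_a + n_b)
    pooled_sd = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                        / (n_a + n_b - 2))
    observed = abs(mean_a - mean_b)
    a = rng.normal(pooled_mean, pooled_sd, (sims, n_a)).mean(axis=1)
    b = rng.normal(pooled_mean, pooled_sd, (sims, n_b)).mean(axis=1)
    return (np.abs(a - b) >= observed).mean()

# Invented baselines (mean, sd, n per arm) standing in for reported trials:
trials = [(50.1, 10.2, 30, 50.2, 10.1, 30),
          (65.0, 8.0, 25, 64.9, 8.2, 25),
          (70.3, 9.5, 40, 70.2, 9.4, 40)]
p_values = [baseline_p(*t) for t in trials]

# Under honest randomisation these p-values should be ~Uniform(0, 1);
# a Kolmogorov-Smirnov test flags a distribution clustered near 1.
print(p_values)
print(stats.kstest(p_values, "uniform"))
```

With group means this close relative to their spread, the simulated p-values all land near 1 and the uniformity test flags them; a long run of trials with that signature is the kind of pattern Carlisle reported across Fujii’s papers.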


“The Replication Paradox:” Sans other fixes, replication may cause more harm than good, says new paper

Marcel A. L. M. van Assen

In a paper that might be filed under “careful what you wish for,” a group of psychology researchers is warning that the push to replicate more research — the focus of a lot of attention recently — won’t do enough to improve the scientific literature. And in fact, it could actually worsen some problems — namely, the bias towards positive findings.

The argument is laid out in “The replication paradox: Combining studies can decrease accuracy of effect size estimates,” by Michèle B. Nuijten, Marcel A. L. M. van Assen, Coosje L. S. Veldkamp, and Jelte M. Wicherts, all of Tilburg University.
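To see the paradox in miniature, here is a toy simulation, ours and not the authors’ code, assuming numpy and scipy: original studies appear in the literature only when they find a significant positive effect, replications appear regardless, and we compare the pooled estimate with the replication alone.

```python
# Toy illustration (ours, not the authors' code) of the replication paradox:
# pooling a publication-biased original with an unbiased replication can give
# a worse effect size estimate than the replication by itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, sims = 0.2, 30, 2_000

def one_study():
    """Two-group study with n per arm; return (effect estimate, p-value)."""
    treat = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    _, p = stats.ttest_ind(treat, control)
    return treat.mean() - control.mean(), p

pooled, replication_only = [], []
for _ in range(sims):
    # Publication bias: originals appear only if significantly positive.
    est, p = one_study()
    while p >= 0.05 or est <= 0:
        est, p = one_study()
    rep_est, _ = one_study()            # replication published regardless
    pooled.append((est + rep_est) / 2)  # equal-n fixed-effect average
    replication_only.append(rep_est)

print("true effect:             ", true_effect)
print("pooled estimate (mean):  ", round(float(np.mean(pooled)), 3))
print("replication alone (mean):", round(float(np.mean(replication_only)), 3))
```

Because the originals that survive publication bias systematically overshoot the true effect, the pooled average inherits part of that inflation, while the replication alone is unbiased. That is why adding replications without other fixes can leave estimates less accurate than simply trusting the new data.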

Weekend reads: Duplication rampant in cancer research?; meet the data detective; journals behaving badly

This week saw us profiled in The New York Times and de Volkskrant, and the introduction of our new staff writer. We also launched The Retraction Watch Leaderboard. Here’s what was happening elsewhere.

“If you think it’s rude to ask to look at your co-authors’ data, you’re not doing science”: Guest post

Last month, the scientific community was shaken when a major study on gay marriage in Science was retracted following questions about its funding, data, and methodology. The senior author, Donald Green, made it clear he was not privy to many details of the paper, which raised some questions for C. K. Gunsalus, director of the National Center for Professional and Research Ethics, and Drummond Rennie, a former deputy editor at JAMA. We are pleased to present their guest post on how co-authors can carry out their responsibilities to each other and to the community.

C. K. Gunsalus

Just about everyone understands that even careful and meticulous people can be taken in by a smart, committed liar. What’s harder to understand is when a professional is fooled by lies that would have been prevented or caught by adhering to community norms and honoring one’s role and responsibilities in the scientific ecosystem.

Take the recent, sad controversy surrounding the now-retracted gay marriage study. We were struck by comments in the press by the co-author, Donald P. Green, on why he had not seen the primary data in his collaboration with first author Michael LaCour, nor known anything substantive about its funding. Green is the more senior scholar of the pair, the one with the established name whose participation helped provide credibility to the endeavor.

The New York Times quoted Green on May 25 as saying: “It’s a very delicate situation when a senior scientist makes a move to look at a junior scientist’s data set.”

Really?


Who has the most retractions? Introducing the Retraction Watch leaderboard

Ever since we broke the news about the issues with the now-retracted Science paper about changing people’s minds on gay marriage, we’ve been the subject of a lot of press coverage, which has in turn led a number of people to ask us: Who has the most retractions?

Well, we’ve tried to answer that in our new Retraction Watch leaderboard.

Here is the current list (click here for more detailed information about our methodology and additional notes).

The consequences of retraction: Do scientists forgive and forget?

Here at Retraction Watch, we are reminded every day that everybody (including us) makes mistakes; what matters is how you handle yourself when it happens. That’s why we created a “doing the right thing” category, to flag incidents where scientists have owned up to their errors and taken steps to correct them.

We’re not suggesting retractions have no effect on a scientist’s career — a working paper posted last month by the National Bureau of Economic Research found that principal investigators with retracted papers see an average drop of 10% in citations of their other papers, a phenomenon known as a citation penalty. But they face a bigger penalty if the retraction stemmed from misconduct, rather than an honest mistake.

This jibes with research we’ve seen before, which shows the scientific community can be forgiving when researchers own up to their mistakes – notably, a 2013 study that found scientists face no citation penalty if they ask to retract their own papers, rather than forcing the journal or publisher to act.


Pressure to publish not to blame for misconduct, says new study

A new study suggests that much of what we think about misconduct, including the idea that it is linked to the unrelenting pressure on scientists to publish high-profile papers, is incorrect.

In a new paper out today in PLOS ONE [see update at end of post], Daniele Fanelli, Rodrigo Costas, and Vincent Larivière performed a retrospective analysis of retractions and corrections, looking at the influence of supposed risk factors such as the “publish or perish” paradigm. The findings appeared to debunk the influence of that paradigm, among others.
