Can you spot the signs of retraction? Just count the errors, says a new study

Clinical studies that eventually get retracted are originally published with significantly more errors than non-retracted trials from the same journal, according to a new study in BMJ.

The authors actually called the errors “discrepancies” — for example, mathematical mistakes such as incorrect percentages of patients in a subgroup, contradictory results, or statistical errors.

The study doesn’t predict which papers will eventually be retracted, since such discrepancies occur frequently (including one in the paper itself), but the authors suggest a preponderance could serve as an “early and accessible signal of unreliability.”

According to the authors, all based at Imperial College London, you see a lot more of these in papers that are eventually retracted: Continue reading Can you spot the signs of retraction? Just count the errors, says a new study

Here’s how to keep clinical trial participants honest (and why that’s a big deal)

Additional lab tests, a clinical trial patient registry, and rewards for honesty are among the advice doled out in this week's issue of the New England Journal of Medicine to help researchers avoid the major problem of participants lying to get into clinical trials.

In the Perspective, David B. Resnik and David J. McCann, both based at the National Institutes of Health, address concerns raised by a 2013 survey of clinical trial participants that revealed “high rates” of “deceptive behavior.” Specifically: Continue reading Here’s how to keep clinical trial participants honest (and why that’s a big deal)

Weekend reads: “Unfeasibly prolific authors;” why your manuscript will be rejected; is science broken?

The week at Retraction Watch featured revelations of yet more fake peer reviews, bringing the retraction total to 250. Here’s what was happening elsewhere: Continue reading Weekend reads: “Unfeasibly prolific authors;” why your manuscript will be rejected; is science broken?

Three more retractions for former record-holder Boldt, maybe more to come


Justus Liebig University in Germany has been investigating concerns that Joachim Boldt, number two on the Retraction Watch Leaderboard and now up to 92 retractions, may have “manipulated” more data than previously believed.

Until now, the vast majority of Boldt’s retractions were thought to have involved inadequate ethics approval. However, new retraction notices suggest the researcher also engaged in significant data manipulation.

The first retraction from the university investigation emerged last year. Two of three new notices cite the investigation specifically, and an informant at the university told us that there are more retractions to come.

Here are the retracted papers that are freshly on the record, starting with an August retraction for a 1991 Anesthesiology paper (cited 37 times, according to Thomson Scientific’s Web of Knowledge):

Continue reading Three more retractions for former record-holder Boldt, maybe more to come

At least one-third of top science journals lack a retraction policy — a big improvement

More than one third — 35% — of the world’s top-ranked science journals that responded to a survey don’t have a retraction policy, according to a new study. And that’s a dramatic improvement over findings of a similar study a little more than a decade ago.

For the new paper, “Retraction policies of top scientific journals ranked by impact factor,” David Resnik, Grace Kissling, and Elizabeth Wager (a member of the board of directors of The Center For Scientific Integrity, our parent non-profit organization) surveyed 200 science journals with the highest impact factors about their retraction policies. About three-quarters provided the information:  Continue reading At least one-third of top science journals lack a retraction policy — a big improvement

Weekend reads: How to publish in Nature; social media circumvents peer review; impatience leads to fakery

The week at Retraction Watch featured a look at why a fraudster’s papers continued to earn citations after he went to prison, and criticism of Science by hundreds of researchers. Here’s what was happening elsewhere: Continue reading Weekend reads: How to publish in Nature; social media circumvents peer review; impatience leads to fakery

Half of anesthesiology fraudster’s papers continue to be cited years after retractions

In yet more evidence that retracted studies continue to accrue citations, a new paper has shown that nearly half of anesthesiologist Scott Reuben’s papers have been cited five years after being retracted, and only one-fourth of citations correctly note the retraction.

According to the new paper, in Science and Engineering Ethics: Continue reading Half of anesthesiology fraudster’s papers continue to be cited years after retractions

To catch a cheat: Paper improves on stats method that nailed prolific retractor Fujii

The author of a 2012 paper in Anaesthesia which offered the statistical equivalent of coffin nails to the case against record-breaking fraudster Yoshitaka Fujii (currently at the top of our leaderboard) has written a new article in which he claims to have improved upon his approach.

As we’ve written previously, John Carlisle, an anesthesiologist in the United Kingdom, analyzed nearly 170 papers by Fujii and found aspects of the reported data to be astronomically improbable. It turns out, however, that he made a mistake that, while not fatal to his initial conclusions, required fixing in a follow-up paper, titled “Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials,” also published in Anaesthesia.
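Carlisle's actual method is more elaborate than can be shown here, but the core intuition behind this style of fraud detection is simple: in a genuinely randomized trial, the p-values for differences in baseline characteristics between arms should be roughly uniformly distributed, so data that are consistently "too balanced" stand out. The sketch below simulates that effect; all numbers, function names, and the shrinkage trick are illustrative assumptions, not details taken from Carlisle's paper.

```python
import math
import random

def baseline_p_value(mean_a, mean_b, sd, n_per_arm):
    """Two-sided p-value for the difference in baseline means,
    using a normal approximation with known SD and equal arms."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    z = abs(mean_a - mean_b) / se
    # Two-sided tail probability of a standard normal variate
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def simulate_trial(n_per_arm, sd, shrink=1.0, rng=random):
    """Simulate one trial's baseline variable. shrink < 1 pulls the
    two group means together, mimicking data that are 'too neat'."""
    true_mean = 50.0
    mean_a = rng.gauss(true_mean, sd / math.sqrt(n_per_arm))
    mean_b = rng.gauss(true_mean, sd / math.sqrt(n_per_arm))
    mid = (mean_a + mean_b) / 2.0
    mean_a = mid + (mean_a - mid) * shrink
    mean_b = mid + (mean_b - mid) * shrink
    return baseline_p_value(mean_a, mean_b, sd, n_per_arm)

random.seed(0)
honest = [simulate_trial(n_per_arm=50, sd=10.0) for _ in range(2000)]
too_neat = [simulate_trial(n_per_arm=50, sd=10.0, shrink=0.2)
            for _ in range(2000)]

# Under genuine randomization, baseline p-values average near 0.5;
# over-balanced data push them toward 1.
print(round(sum(honest) / len(honest), 2))
print(round(sum(too_neat) / len(too_neat), 2))
```

Across many papers from one author, a persistent pile-up of baseline p-values near 1 is the kind of "astronomically improbable" pattern the analysis flags.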

According to the abstract:

Continue reading To catch a cheat: Paper improves on stats method that nailed prolific retractor Fujii

“The Replication Paradox:” Sans other fixes, replication may cause more harm than good, says new paper

Marcel A. L. M. van Assen

In a paper that might be filed under “careful what you wish for,” a group of psychology researchers is warning that the push to replicate more research — the focus of a lot of attention recently — won’t do enough to improve the scientific literature. And in fact, it could actually worsen some problems — namely, the bias towards positive findings.

Here’s more from “The replication paradox: Combining studies can decrease accuracy of effect size estimates,” by Michèle B. Nuijten, Marcel A. L. M. van Assen, Coosje L. S. Veldkamp, and Jelte M. Wicherts, all of Tilburg University: Continue reading “The Replication Paradox:” Sans other fixes, replication may cause more harm than good, says new paper

Weekend reads: Duplication rampant in cancer research?; meet the data detective; journals behaving badly

This week saw us profiled in The New York Times and de Volkskrant, and the introduction of our new staff writer. We also launched The Retraction Watch Leaderboard. Here’s what was happening elsewhere: Continue reading Weekend reads: Duplication rampant in cancer research?; meet the data detective; journals behaving badly