A few months ago, a researcher told Evelien Oostdijk there might be a problem with a 2014 JAMA study she had co-authored.
The study had compared two methods of preventing infection in the intensive care unit (ICU). But a separate analysis had produced different results.
Oostdijk, from the University Medical Center Utrecht in the Netherlands, immediately set to work to figure out what had gone wrong. And she soon discovered the problem: The coding for the two interventions had been reversed at one of the 16 ICUs. This switch had “a major impact on the study outcome,” last author Marc Bonten, also from the University Medical Center Utrecht, wrote in a blog post about the experience yesterday, because it occurred at “one of the largest participating ICUs.”
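To see why a reversal at one large site matters so much, consider a toy example. The site names, arms, and counts below are entirely invented (this is not the trial’s data or code); the pandas sketch simply shows how swapping the intervention labels at a single dominant site can flip which arm looks better in the pooled results.

```python
# Hypothetical sketch, not the study's actual data or analysis.
import pandas as pd

# Toy multicenter dataset: per-site patient counts and deaths by arm.
# Site "C" is by far the largest, so its coding dominates the pooled result.
data = pd.DataFrame({
    "site":   ["A", "A", "B", "B", "C", "C"],
    "arm":    ["X", "Y", "X", "Y", "X", "Y"],
    "n":      [200, 200, 250, 250, 900, 900],
    "deaths": [ 40,  36,  50,  45, 150, 190],
})

def pooled_mortality(df):
    # Sum patients and deaths across sites within each arm, then divide.
    totals = df.groupby("arm")[["n", "deaths"]].sum()
    return totals["deaths"] / totals["n"]

print(pooled_mortality(data))   # as recorded: arm X looks better

# Undo the suspected reversal: swap the arm labels at site C only.
fixed = data.copy()
at_c = fixed["site"] == "C"
fixed.loc[at_c, "arm"] = fixed.loc[at_c, "arm"].map({"X": "Y", "Y": "X"})

print(pooled_mortality(fixed))  # corrected: now arm Y looks better
```

With these invented numbers, correcting the one large site reverses which intervention appears to save more lives, which is exactly the kind of swing Bonten describes.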
When Oostdijk and a researcher not involved in the study reanalyzed the corrected data, the findings changed in a notable way: one of the interventions now showed a small but significant survival benefit over the other.
Oostdijk and Bonten, who supervised the re-analysis, notified their colleagues of the revised study outcomes and contacted the journal requesting a retraction and replacement, which was published yesterday in JAMA.
By now, most of our readers are aware that some fields of science have a reproducibility problem. Part of the problem, some argue, is the publishing community’s bias toward dramatic findings — namely, studies that show something has an effect on something else are more likely to be published than studies that don’t.
Many have argued that scientists publish such data because that’s what is rewarded — by journals and, indirectly, by funders and employers, who judge a scientist based on his or her publication record. But a new meta-analysis in PNAS suggests the picture is a bit more complicated than that.
In a paper released today, researchers led by Daniele Fanelli and John Ioannidis, both at Stanford University, suggest that the so-called “pressure to publish” does not appear to bias studies toward larger effect sizes. Instead, they argue that other factors are bigger sources of bias: the use of small sample sizes, which can by chance yield a skewed sample that shows a stronger effect, and the relegation of studies with smaller effects to the “gray literature,” such as conference proceedings, PhD theses, and other less publicized formats.
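The small-sample mechanism is easy to demonstrate. Here is a toy simulation of my own, not anything from the PNAS paper: when only statistically significant results survive into print, small studies that happen to clear the bar report inflated effects.

```python
# Invented simulation illustrating small-study effect inflation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.2        # true standardized mean difference between groups
N_STUDIES = 20_000  # simulated studies per sample size

for n in (20, 200):  # per-group sample size
    significant = []
    for _ in range(N_STUDIES):
        treat = rng.normal(TRUE_D, 1.0, n)
        ctrl = rng.normal(0.0, 1.0, n)
        t, p = stats.ttest_ind(treat, ctrl)
        if p < 0.05 and t > 0:  # only "positive, significant" results survive
            pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
            significant.append((treat.mean() - ctrl.mean()) / pooled_sd)
    print(f"n={n:>3}: mean published effect ~ {np.mean(significant):.2f} "
          f"(true effect is {TRUE_D})")

# Typically prints roughly 0.8 for n=20 and roughly 0.28 for n=200:
# the small studies that clear the significance bar overstate the
# true effect of 0.2 by far more than the large ones do.
```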
However, Ferric Fang of the University of Washington — who did not participate in the study — approached the findings with some caution:
PubPeer will see a surge of more than 50,000 entries for psychology studies in the next few weeks as part of an initiative that aims to identify statistical mistakes in the academic literature.
The detection process uses the algorithm “statcheck” — which we’ve covered previously in a guest post by one of its co-developers — to scan just under 700,000 results extracted from the sampled psychology studies. Although the trends in the present data are yet to be explored, previous research by Chris Hartgerink, the researcher behind the initiative, suggests that around half of psychology papers contain at least one statistical error, and one in eight contain mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and authors are alerted by email.
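The core of the check is straightforward to sketch. Statcheck itself is an R package with far more robust parsing; the Python snippet below is only a simplified illustration of the idea (the regex and tolerance are invented for this example): extract the test statistic and degrees of freedom from an APA-style report, recompute the p-value, and flag a mismatch.

```python
# Simplified, invented sketch of a statcheck-style consistency check.
import re
from scipy import stats

# Matches APA-style reports like "t(28) = 2.20, p = .036".
APA_T = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*=\s*(\.\d+)")

def check(sentence, tolerance=0.005):
    m = APA_T.search(sentence)
    if m is None:
        return None  # no recognizable t-test report in this sentence
    df, t, p_reported = int(m.group(1)), float(m.group(2)), float(m.group(3))
    p_computed = 2 * stats.t.sf(abs(t), df)  # two-sided p from t and df
    return {
        "reported": p_reported,
        "computed": round(p_computed, 4),
        "consistent": abs(p_computed - p_reported) < tolerance,
    }

print(check("The groups differed, t(28) = 2.20, p = .036"))  # consistent
print(check("The groups differed, t(28) = 2.20, p = .010"))  # flagged
```

In the second sentence the reported p-value does not match the one implied by the test statistic, which is the kind of inconsistency the initiative posts to PubPeer.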
An article on how missing teeth affect chewing was — well, pulled — when someone noticed a few errors. The journal later published a corrected version.
The retraction for “Chewing ability in an adult Chinese population” appeared in Clinical Oral Investigations in 2012, but we’re sharing it with you now because the notice contains some remarkable language:
This article has been withdrawn due to wrong content with serious consequences such as danger to people’s health.
Last author Nico H.J. Creugers, who works at Radboud University Medical Center in the Netherlands, told us:
In the editorial titled “Organised crime against the academic peer review system,” Adam Cohen and other editors at the British Journal of Clinical Pharmacology say they missed “several fairly obvious clues that should have set alarm bells ringing.” For instance, the glowing reviews from supposed high-profile researchers at Ivy League institutions were returned within a few days and were riddled with grammar problems, and the authors had no previous publications.