“Why Growing Retractions Are (Mostly) a Good Sign”: New study makes the case

Daniele Fanelli

Retraction Watch readers will no doubt be familiar with the fact that retraction rates are rising, but one of the unanswered questions has been whether that increase is due to more misconduct, greater awareness, or some combination of the two.

In a new paper in PLOS Medicine, Daniele Fanelli, who has studied misconduct and related issues, tries to sift through the evidence. Noting that the number of corrections has stayed constant since 1980, Fanelli writes that: Continue reading “Why Growing Retractions Are (Mostly) a Good Sign”: New study makes the case

“Just significant” results have been around for decades in psychology — but have gotten worse: study

Last year, two psychology researchers set out to figure out whether the statistical results psychologists were reporting in the literature were distributed the way you’d expect. We’ll let the authors, E.J. Masicampo, of Wake Forest, and Daniel Lalande, of the Université du Québec à Chicoutimi, explain why they did that:

The psychology literature is meant to comprise scientific observations that further people’s understanding of the human mind and human behaviour. However, due to strong incentives to publish, the main focus of psychological scientists may often shift from practising rigorous and informative science to meeting standards for publication. One such standard is obtaining statistically significant results. In line with null hypothesis significance testing (NHST), for an effect to be considered statistically significant, its corresponding p value must be less than .05.
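To make the “just significant” idea concrete, here is a minimal Python sketch, with made-up numbers, of the kind of check such a study implies: tally reported p values into narrow bins and see whether the bin just below the .05 cutoff is unusually crowded compared with its neighbours. The data, bin edges, and function name below are illustrative assumptions, not anything taken from Masicampo and Lalande’s paper.

```python
# Hypothetical illustration only: the p values and bin edges are invented,
# and this is not the authors' analysis code.

def count_in_bin(p_values, low, high):
    """Count reported p values falling in the half-open interval [low, high)."""
    return sum(1 for p in p_values if low <= p < high)

# Made-up p values standing in for values extracted from published articles.
reported = [0.003, 0.012, 0.031, 0.041, 0.046, 0.047, 0.048, 0.049, 0.049,
            0.052, 0.063, 0.071]

# Compare the "just significant" bin with the bins on either side of it.
for low, high in [(0.040, 0.045), (0.045, 0.050), (0.050, 0.055)]:
    n = count_in_bin(reported, low, high)
    print(f"p in [{low:.3f}, {high:.3f}): {n} results")
```

If results were reported without regard to the .05 threshold, the counts in neighbouring bins should be roughly comparable; a pronounced bump just under .05 is what the “just significant” label refers to.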

When Masicampo and Lalande looked at a year’s worth of articles — from 2007 to 2008 — in three highly cited psychology journals, the Journal of Experimental Psychology: General; Journal of Personality and Social Psychology; and Psychological Science, they found: Continue reading “Just significant” results have been around for decades in psychology — but have gotten worse: study

Are US behavioral science researchers more likely to exaggerate their results?

Daniele Fanelli

When Retraction Watch readers think of problematic psychology research, their minds might naturally turn to Diederik Stapel, who now has 54 retractions under his belt. Dirk Smeesters might also tickle the neurons.

But a look at our psychology category shows that psychology retractions are an international phenomenon. (Remember Marc Hauser?) And a new paper in the Proceedings of the National Academy of Sciences (PNAS) suggests that it’s behavioral science researchers in the U.S. who are more likely to exaggerate or cherry-pick their findings.

For the new paper, Daniele Fanelli — whose 2009 paper in PLoS ONE contains some of the best data on the prevalence of misconduct — teamed up with John Ioannidis, well known for his work on “why most published research findings are false.” They looked at Continue reading Are US behavioral science researchers more likely to exaggerate their results?

“Why Has the Number of Scientific Retractions Increased?” New study tries to answer

The title of this post is the title of a new study in PLOS ONE by three researchers whose names Retraction Watch readers may find familiar: Grant Steen, Arturo Casadevall, and Ferric Fang. Together and separately, they’ve examined retraction trends in a number of papers we’ve covered.

Their new paper tries to answer a question we’re almost always asked as a follow-up to data showing the number of retractions grew ten-fold over the first decade of the 21st century. As the authors write: Continue reading “Why Has the Number of Scientific Retractions Increased?” New study tries to answer

One in twelve Belgian medical scientists admits having “made up and/or massaged data”: Survey

A recently released survey of Belgian scientists suggests that Flemish medical researchers admit to having made up or massaged data more often than their counterparts around the world.

The survey, by the Dutch science magazine Eos with the help of Joeri Tijdink, of VU University Medical Center in Amsterdam, and the Pascal Decroos Fund for Investigative Journalism, found that Continue reading One in twelve Belgian medical scientists admits having “made up and/or massaged data”: Survey

Not in my journal: Two editors take stock of misconduct in their fields — and don’t find much

Today brings two journal editorials about misconduct and retractions. They take, if we may, a bit of an optimistic and perhaps even blindered approach.

In an editorial titled “Scientific misconduct occurs, but is rare,” Boston University’s Richard Primack, editor of Biological Conservation, highlights a Corrigendum of a paper by Jesus Angel Lemus, the veterinary researcher who has retracted seven papers: Continue reading Not in my journal: Two editors take stock of misconduct in their fields — and don’t find much

Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record

A new study out today in the Proceedings of the National Academy of Sciences (PNAS) finds that two-thirds of retractions are due to some form of misconduct, a figure higher than previously thought. Earlier estimates were skewed by unhelpful, opaque retraction notices, the kind that cause us to beat our heads against the wall here at Retraction Watch.

The study, which covers 2,047 retracted biomedical and life-science research articles indexed in PubMed from 1973 until May 3, 2012, brings together three retraction researchers whose names may be familiar to Retraction Watch readers: Ferric Fang, Grant Steen, and Arturo Casadevall. Fang and Casadevall have published together before, including on their Retraction Index, but this is the first paper by the trio.

The paper is Continue reading Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record