Weekend reads: Former ORI director speaks out; Is peer review broken?

Another busy week at Retraction Watch. Here’s what was happening elsewhere on the web in scientific publishing and related issues:

“The Chrysalis Effect: How Ugly Initial Results Metamorphosize Into Beautiful Articles”

The headline of this post is the title of a fascinating new paper in the Journal of Management suggesting that if the road to publication is paved with good intentions, it may also be paved with bad scientific practice.

Ernest Hugh O’Boyle and colleagues tracked 142 management and applied psychology PhD theses to publication, and looked for various questionable research practices — they abbreviate those “QRPs” — such as deleting or adding data after hypothesis tests, selectively adding or deleting variables, and adding or deleting hypotheses themselves.

Their findings?


Tune into BBC Radio 4 today to hear Ivan talk about latest stem cell controversy, post-publication peer review

Ivan is scheduled to be on Inside Science on BBC Radio 4 at 12:30 p.m. Eastern (1630 UK time) to discuss the latest stem cell controversy, and what it says about the state of post-publication peer review.

Weekend reads: “Too much success” in psychology, why hoaxes aren’t the real problem in science

Another busy week at Retraction Watch. Here’s what was happening elsewhere around the web in science publishing and research integrity news:

Nobel Prize winner calls peer review “very distorted,” “completely corrupt,” and “simply a regression to the mean”

Sydney Brenner

Sydney Brenner has been talking about what’s wrong with the scientific enterprise since long before he shared the Nobel Prize in Physiology or Medicine in 2002.

And in a new interview, Brenner doesn’t hold back, saying that publishers hire “a lot of failed scientists, editors who are just like the people at Homeland Security, little power grabbers in their own sphere.”

In a King’s Review Q&A titled “How Academia and Publishing Are Destroying Scientific Innovation,” Brenner says:

No more scientific Lake Wobegon: After criticism, publisher adds a “reject” option for peer reviewers

If you know A Prairie Home Companion, you know that in fictional Lake Wobegon, “all the women are strong, all the men are good looking, and all the children are above average.”

That’s a bit like what Harvard’s Nir Eyal found when he was asked to review a paper for Dove Medical Press. Here’s what he saw when he looked at Dove’s peer reviewer form:

Weekend reads: Waste in research, a praise-worthy swift correction in NEJM, and more

The first full week of 2014 featured a slew of stories and commentary about scientific publishing and related issues. Here’s a sampling:

Weekend reads: Stapel as an object lesson, peer review’s flaws, and salami slicing

It’s been another busy week at Retraction Watch. Here’s a sampling of scientific publishing and misconduct news from around the web:

“Just significant” results have been around for decades in psychology — but have gotten worse: study

Last year, two psychology researchers set out to figure out whether the statistical results psychologists were reporting in the literature were distributed the way you’d expect. We’ll let the authors, E.J. Masicampo, of Wake Forest, and Daniel Lalande, of the Université du Québec à Chicoutimi, explain why they did that:

The psychology literature is meant to comprise scientific observations that further people’s understanding of the human mind and human behaviour. However, due to strong incentives to publish, the main focus of psychological scientists may often shift from practising rigorous and informative science to meeting standards for publication. One such standard is obtaining statistically significant results. In line with null hypothesis significance testing (NHST), for an effect to be considered statistically significant, its corresponding p value must be less than .05.
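The p < .05 convention the authors describe can be sketched concretely. The following is a minimal, hypothetical illustration (the data are invented, and a simple permutation test stands in for whatever analyses the surveyed papers actually used): an observed difference between two groups is compared against differences produced by random relabeling, and the effect is declared “statistically significant” only if the resulting p value falls below the .05 cutoff.

```python
# Illustrative sketch of the NHST "p < .05" convention quoted above.
# The data are made up; a permutation test is used only because it
# needs nothing beyond the standard library.
import random
from statistics import mean

random.seed(0)
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
group_b = [5.9, 6.1, 5.8, 6.0, 6.2, 5.9, 6.1, 6.0]

# Observed difference in group means.
observed = abs(mean(group_a) - mean(group_b))

# Estimate how often random relabeling produces a difference at
# least as large as the observed one.
pooled = group_a + group_b
n_perms = 10_000
n_extreme = 0
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = abs(mean(pooled[:8]) - mean(pooled[8:]))
    if diff >= observed:
        n_extreme += 1

p_value = n_extreme / n_perms
significant = p_value < 0.05  # the NHST cutoff the authors describe
print(f"p = {p_value:.4f}, significant: {significant}")
```

Masicampo and Lalande’s point, of course, is about what happens to the distribution of reported p values when clearing this cutoff becomes a publication requirement rather than a neutral statistical summary.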

When Masicampo and Lalande looked at a year’s worth of three highly cited psychology journals — the Journal of Experimental Psychology: General; Journal of Personality and Social Psychology; and Psychological Science — from 2007 to 2008, they found: