Archive for the ‘studies about peer review’ Category
“Just significant” results have been around for decades in psychology — but have gotten worse: study
Last year, two psychology researchers set out to figure out whether the statistical results psychologists were reporting in the literature were distributed the way you’d expect. We’ll let the authors, E.J. Masicampo, of Wake Forest, and Daniel Lalande, of the Université du Québec à Chicoutimi, explain why they did that:
The psychology literature is meant to comprise scientific observations that further people’s understanding of the human mind and human behaviour. However, due to strong incentives to publish, the main focus of psychological scientists may often shift from practising rigorous and informative science to meeting standards for publication. One such standard is obtaining statistically significant results. In line with null hypothesis significance testing (NHST), for an effect to be considered statistically significant, its corresponding p value must be less than .05.
When Masicampo and Lalande looked at a year's worth of issues of three highly cited psychology journals — the Journal of Experimental Psychology: General; the Journal of Personality and Social Psychology; and Psychological Science — from 2007 to 2008, they found: Read the rest of this entry »
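The question the study asks is essentially a distributional one: are reported p values just under the .05 threshold over-represented relative to neighboring values? A minimal Python sketch of one way to check, by tallying p values into narrow bins (this is an illustration only, not Masicampo and Lalande's actual method, and the sample values below are hypothetical):

```python
def bin_p_values(p_values, width=0.005):
    """Count p values in consecutive bins of the given width.

    Each value p lands in the bin whose lower edge is the largest
    multiple of `width` not exceeding p.
    """
    counts = {}
    for p in p_values:
        key = round(int(p / width) * width, 3)  # lower edge of p's bin
        counts[key] = counts.get(key, 0) + 1
    return counts

# Hypothetical sample with a cluster just below the .05 threshold.
sample = [0.012, 0.031, 0.046, 0.047, 0.048, 0.049, 0.049, 0.062, 0.113]
counts = bin_p_values(sample)

# A spike in the [0.045, 0.05) bin relative to its neighbors is the
# kind of "just significant" excess the study describes.
print(counts[0.045])  # 5 of the 9 sample values fall just under .05
```

A real analysis would compare the observed bin counts against the smooth decline expected from the rest of the distribution; the sketch only shows the bookkeeping step.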
PubMed today launches a pilot version of PubMed Commons,
a system that enables researchers to share their opinions about scientific publications. Researchers can comment on any publication indexed by PubMed, and read the comments of others.
In general, we’re big fans of post-publication peer review, as Retraction Watch readers know. Once it’s out of its pilot phase — and we hope that’s quite soon — PubMed Commons comments will be publicly available. So this is a step forward — but only a tentative one. That’s because of the first bullet point in the terms of service commenters agree to: Read the rest of this entry »
We’ve sometimes said, paraphrasing Winston Churchill, that pre-publication peer review is the worst way to vet science, except for all the other ways that have been tried from time to time.
subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published
Their findings? Read the rest of this entry »
That’s what Nadia Elia, Liz Wager, and Martin Tramer reported here Sunday in an abstract at the Seventh International Congress on Peer Review and Biomedical Publication. Elia and Tramer are editors at the European Journal of Anaesthesiology, while Wager is former chair of the Committee on Publication Ethics (COPE).
As of January 2013, nine of the papers hadn’t been retracted, Tramer said, while only five — all in one journal — had completely followed COPE guidelines, with adequate retraction notices, made freely available, along with PDFs properly marked “Retracted.” From the abstract (see page 18): Read the rest of this entry »
One of the issues that comes up again and again on Retraction Watch is when it’s appropriate to retract a paper. There are varying opinions. Some commenters have suggested, given the stigma attached, retraction should be reserved for fraud, while many more say error — even unintentional — is enough to merit withdrawal. Some others, however, say retraction is appropriate when a paper is later proven wrong, even in the absence of misconduct or mistakes.
Today, apparently prompted by a retraction that fits into that last category and was, by some accounts, a surprise to the paper’s authors, Public Library of Science (PLoS) Medicine editorial director Virginia Barbour and PLoS Pathogens editor-in-chief Kasturi Haldar take the issue head-on. Barbour — who is also chair of the Committee on Publication Ethics, which of course has retraction guidelines — and Haldar write: Read the rest of this entry »
Transparency in action: EMBO Journal detects manipulated images, then has them corrected before publishing
As Retraction Watch readers know, we’re big fans of transparency. Today, for example, The Scientist published an opinion piece we wrote calling for a Transparency Index for journals. So perhaps it’s no surprise that we’re also big fans of open peer review, in which all of a paper’s reviews are made available to readers once a study is published.
Not that many journals have taken this step — medical journals at BioMedCentral are among those that have, and they even include the names of reviewers — but a recent peer review file from EMBO Journal, one publication that has embraced this transparent approach, is particularly illuminating.
Alan G. Hinnebusch, of the U.S. Eunice Kennedy Shriver National Institute of Child Health and Human Development, submitted a paper on behalf of his co-authors on November 2, 2011, at which point it went out for peer review. The editors sent those reviews back to the author on January 2, 2012, and Hinnebusch responded with revisions on April 4. So far, the process looks much like the one any scientist goes through — questions about methods, presentation, and conclusions, followed by answers from the authors.
But what caught the eye of frequent Retraction Watch commenter Dave, who brought this to our attention, was what happened starting on May 18 when the editors responded to the authors again. (That letter is labeled as page 6, but is actually page 16 of the linked document.): Read the rest of this entry »
A group of authors at a Pittsburgh company have proposed a new way to write, review, and read scientific papers that they claim will “radically alter the creation and use of credible knowledge for the benefit of society.”
From the abstract of a paper appearing in the new Mary Ann Liebert journal Disruptive Science and Technology, which, according to a press release, will “publish out-of-the-box concepts that will improve the way we live”: Read the rest of this entry »
After five years of operation, the Nature Publishing Group will no longer accept submissions to its preprint server Nature Precedings, having found the experiment “unsustainable as it was originally conceived.”
Late last year, we published an invited commentary in Nature calling for science to more formally embrace post-publication peer review, and stop fetishizing the published paper. One of the models we cited was Faculty of 1000 (F1000), “in which experts flag important papers in their field.”
So it’s not surprising that F1000 is announcing today that they’re launching a new journal, F1000 Research,
intended to address three major issues afflicting scientific publishing today: timely dissemination of research, peer review and sharing of data.
The journal will publish all submissions immediately, “beyond an initial sanity check”: Read the rest of this entry »