PubMed now allows comments on abstracts — but only by a select few

PubMed today launches a pilot version of PubMed Commons,

a system that enables researchers to share their opinions about scientific publications. Researchers can comment on any publication indexed by PubMed, and read the comments of others.

In general, we’re big fans of post-publication peer review, as Retraction Watch readers know. Once it’s out of its pilot phase — and we hope that’s quite soon — PubMed Commons comments will be publicly available. So this is a step forward — but only a tentative one. That’s because of the first bullet point in the terms of service that commenters agree to: Continue reading PubMed now allows comments on abstracts — but only by a select few

Is impact factor the “least-bad” way to judge the quality of a scientific paper?

We’ve sometimes said, paraphrasing Winston Churchill, that pre-publication peer review is the worst way to vet science, except for all the other ways that have been tried from time to time.

The authors of a new paper in PLOS Biology, Adam Eyre-Walker and Nina Stoletzki, compared three of those other ways to judge more than 6,500 papers published in 2005:

subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published

Their findings? Continue reading Is impact factor the “least-bad” way to judge the quality of a scientific paper?

What happened to Joachim Boldt’s 88 papers that were supposed to be retracted?

CHICAGO — Almost two years after editors at 18 journals agreed in March 2011 to retract 88 of former retraction record holder Joachim Boldt’s papers, 10% of them hadn’t been retracted.

That’s what Nadia Elia, Liz Wager, and Martin Tramer reported here Sunday in an abstract at the Seventh International Congress on Peer Review and Biomedical Publication. Elia and Tramer are editors at the European Journal of Anaesthesiology, while Wager is former chair of the Committee on Publication Ethics (COPE).

As of January 2013, nine of the papers hadn’t been retracted, Tramer said, while only five — all in one journal — had completely followed COPE guidelines, with adequate retraction notices that were made freely available, along with PDFs properly marked “Retracted.” From the abstract (see page 18): Continue reading What happened to Joachim Boldt’s 88 papers that were supposed to be retracted?

“If a paper’s major conclusions are shown to be wrong we will retract the paper”: PLoS

One of the issues that comes up again and again on Retraction Watch is when it’s appropriate to retract a paper. Opinions vary. Some commenters have suggested that, given the stigma attached, retraction should be reserved for fraud, while many more say error — even unintentional error — is enough to merit withdrawal. Still others say retraction is appropriate when a paper is later proven wrong, even in the absence of misconduct or mistakes.

Today, apparently prompted by a retraction that fits into that last category and was, by some accounts, a surprise to the paper’s authors, PLoS Medicine editorial director Virginia Barbour and PLoS Pathogens editor-in-chief Kasturi Haldar of the Public Library of Science (PLoS) take the issue head-on. Barbour — who is also chair of the Committee on Publication Ethics, which of course has retraction guidelines — and Haldar write: Continue reading “If a paper’s major conclusions are shown to be wrong we will retract the paper”: PLoS

Transparency in action: EMBO Journal detects manipulated images, then has them corrected before publishing

As Retraction Watch readers know, we’re big fans of transparency. Today, for example, The Scientist published an opinion piece we wrote calling for a Transparency Index for journals. So perhaps it’s no surprise that we’re also big fans of open peer review, in which all of a paper’s reviews are made available to readers once a study is published.

Not that many journals have taken this step — medical journals at BioMed Central are among those that have, and they even include the names of reviewers — but a recent peer review file from EMBO Journal, one publication that has embraced this transparent approach, is particularly illuminating.

Alan G. Hinnebusch, of the U.S. Eunice Kennedy Shriver National Institute of Child Health and Human Development, submitted a paper on behalf of his co-authors on November 2, 2011, at which point it went out for peer review. The editors sent those reviews back to the author on January 2, 2012, and Hinnebusch responded with revisions on April 4. So far, the process looks much like the one any scientist goes through — questions about methods, presentation, and conclusions, followed by answers from the authors.

But what caught the eye of frequent Retraction Watch commenter Dave, who brought this to our attention, was what happened starting on May 18, when the editors responded to the authors again (that letter is labeled as page 6, but is actually page 16 of the linked document): Continue reading Transparency in action: EMBO Journal detects manipulated images, then has them corrected before publishing

Will a new literature format “radically alter” how scientists write, review, and read papers?

A group of authors at a Pittsburgh company have proposed a new way to write, review, and read scientific papers that they claim will “radically alter the creation and use of credible knowledge for the benefit of society.”

From the abstract of a paper appearing in the new Mary Ann Liebert journal Disruptive Science and Technology, which, according to a press release, will “publish out-of-the-box concepts that will improve the way we live”: Continue reading Will a new literature format “radically alter” how scientists write, review, and read papers?

Nature Precedings to stop accepting submissions next week after finding model “unsustainable”

After five years of operation, the Nature Publishing Group will no longer accept submissions to its preprint server Nature Precedings, having found the experiment “unsustainable as it was originally conceived.”

Here’s the announcement sent to all Nature Precedings registrants this morning: Continue reading Nature Precedings to stop accepting submissions next week after finding model “unsustainable”

An arXiv for all of science? F1000 launches new immediate publication journal

Late last year, we published an invited commentary in Nature calling for science to more formally embrace post-publication peer review, and stop fetishizing the published paper. One of the models we cited was Faculty of 1000 (F1000), “in which experts flag important papers in their field.”

So it’s not surprising that F1000 is announcing today that they’re launching a new journal, F1000 Research,

intended to address three major issues afflicting scientific publishing today: timely dissemination of research, peer review and sharing of data.

The journal will publish all submissions immediately, “beyond an initial sanity check”: Continue reading An arXiv for all of science? F1000 launches new immediate publication journal

How good are journals at policing authorship?

One of the most contentious issues in scholarly publishing is authorship. Sometimes there’s outright forgery involved, but most of the time the tension is more mundane, and also more pernicious: researchers who did most of the work wondering why “honorary” authors suddenly appear on papers, or why their own names didn’t appear at all.

Journals, it would seem, are a good bulwark against such abuses. And many have subscribed to the International Committee of Medical Journal Editors’ (ICMJE) Uniform Requirements for Manuscripts, which include these requirements for authorship: Continue reading How good are journals at policing authorship?

Should authors be encouraged to pick their own peer reviewers?

If you’ve ever submitted a paper, you know that many journals ask authors to suggest experts who can peer review their work. That’s understandable; after all, as science becomes more and more specialized, it becomes harder to find reviewers knowledgeable in ever smaller niches.

Human nature being what it is, however, it would seem natural for authors to suggest reviewers who are a bit more likely to recommend acceptance. Such author-suggested reviewers are just one source of the two or three experts who vet a particular paper, and they are required to disclose any conflicts of interest that might bias their recommendations.

Still, editors have justifiable concerns that using too many of them may subtly increase their acceptance rate. Increasing a journal’s acceptance rate, of course, could mean increasing the number of papers at the lower end of the quality spectrum, and perhaps raising the rate of retractions. That’s why we’re interested in such issues at Retraction Watch.

The Journal of Pediatrics recently peered into its own peer review system, Continue reading Should authors be encouraged to pick their own peer reviewers?