Want to correct the scientific literature? Good luck

David Allison
Andrew Brown

If you notice an obvious problem with a paper in your field, it should be relatively easy to alert the journal’s readers to the issue, right? Unfortunately, that has not been the experience of a group of nutrition researchers led by David B. Allison at the University of Alabama at Birmingham. Allison and his co-author Andrew Brown talked to us about a commentary they’ve published in today’s Nature, which describes the barriers they encountered to correcting the record.

Retraction Watch: You were focusing on your field (nutrition), and after finding dozens of “substantial or invalidating errors,” you had to stop writing letters to the authors or journals, simply because you didn’t have time to keep up with it all. Do you expect a similar number of significant errors to be present in papers from other fields? Continue reading Want to correct the scientific literature? Good luck

Weekend reads: A celebrity surgeon’s double life; misconduct in sports medicine; researcher loses honor

This week at Retraction Watch featured a literally bullshit excuse for fake data, a new record for time from publication to retraction, and news of an upcoming retraction from Science. Here’s what was happening elsewhere: Continue reading Weekend reads: A celebrity surgeon’s double life; misconduct in sports medicine; researcher loses honor

Why retraction shouldn’t always be the end of the story

When researchers raised concerns about a 2009 Science paper regarding a new way to screen for enzymatic activity, the lead author’s institution launched an investigation. The paper was ultimately retracted in 2010, citing “errors and omissions.”

It would seem from this example that the publishing process worked, and that science’s ability to self-correct cleaned up the record. But not so, say researchers Ferric Fang and Arturo Casadevall.

Fang, of the University of Washington, Seattle, and Casadevall, of Johns Hopkins — who have made names for themselves by studying retractions — note today in an article for Chemistry World that

Continue reading Why retraction shouldn’t always be the end of the story

Is an increase in retractions good news? Maybe, suggests new study

In Latin America, retractions for plagiarism and other issues have increased markedly — which may be a positive sign that editors and authors are paying closer attention to publishing ethics, according to a small study published in Science and Engineering Ethics.

The authors examined two major Latin American/Caribbean databases, which mostly include journals from Brazil and have been indexing articles for more than 15 years. They found only 31 retractions, all of which appeared in 2008 or later. (Roughly half of the retractions were from journals indexed in Thomson Reuters’ Journal Citation Reports (JCR).)

This was a notable result, the authors write: Continue reading Is an increase in retractions good news? Maybe, suggests new study

Can linguistic patterns identify data cheats?

Cunning science fraudsters may not give many tells in their data, but the text of their papers may be a tipoff to bad behavior.

That’s according to a new paper in the Journal of Language and Social Psychology by a pair of linguists at Stanford University who say that the writing style of data cheats is distinct from that of honest authors. Indeed, the text of science papers known to contain fudged data tends to be more opaque, less readable and more crammed with jargon than untainted articles.

The authors, David Markowitz and Jeffrey Hancock, also found that papers with faked data appear to be larded up with references – possibly in an attempt to make the work more cumbersome for readers to wade through, or to tart up the manuscript to make it look more impressive and substantial. As Markowitz told us: Continue reading Can linguistic patterns identify data cheats?
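The post doesn’t detail Markowitz and Hancock’s actual method, but as a rough illustration, crude opacity proxies of the kind linguists use — average sentence length and the share of long words — can be computed in a few lines of Python:

```python
import re

def opacity_signals(text):
    """Crude opacity proxies for a passage of prose: average sentence
    length in words, and the share of 'long' words (7+ letters).
    Higher values suggest denser, harder-to-read text.
    Illustrative only -- not the method used in the actual study."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_length = len(words) / len(sentences)
    long_word_share = sum(1 for w in words if len(w) >= 7) / len(words)
    return avg_sentence_length, long_word_share

# Jargon-heavy prose scores higher on both proxies than plain prose.
plain = "The cat sat on the mat. The dog ran away."
dense = ("Methodological operationalization of multifactorial "
         "constructs necessitates comprehensive instrumentation.")
```

Real stylometric work uses far richer features, but even toy signals like these separate the two example sentences above.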

Is less publishing linked to more plagiarism?

Countries that publish less science appear to “borrow” more language from others than more scientifically prolific countries do, according to a small new study.

Using a novel approach of comparing a country’s total citations against its total published papers (CPP), the authors categorized 80 retractions from journals in general and internal medicine. This is a relatively small number of retractions from one specific field of research; still, they found that:

Thus, retractions due to plagiarism/duplication were 3.4 times more likely among low-CPP countries than among high-CPP countries.

The CPP authors’ suggested interpretation? Continue reading Is less publishing linked to more plagiarism?
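For readers unfamiliar with the metric, here is a minimal sketch of how a citations-per-paper figure and a rate ratio like the one quoted above are computed. The counts below are hypothetical, chosen only for illustration — they are not the study’s data:

```python
def citations_per_paper(total_citations, total_papers):
    """The study's country-level metric: total citations divided by
    total published papers (CPP)."""
    return total_citations / total_papers

def plagiarism_rate_ratio(low_events, low_total, high_events, high_total):
    """Ratio of plagiarism/duplication retraction rates between the
    low-CPP and high-CPP country groups."""
    return (low_events / low_total) / (high_events / high_total)

# Hypothetical figures: 10 of 20 retractions for plagiarism among
# low-CPP countries vs. 5 of 40 among high-CPP countries gives a
# rate ratio of 4.0 -- the same arithmetic behind the study's 3.4.
```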

Can you spot the signs of retraction? Just count the errors, says a new study

Clinical studies that eventually get retracted are originally published with significantly more errors than non-retracted trials from the same journal, according to a new study in BMJ.

The authors actually called the errors “discrepancies” — for example, mathematical mistakes such as incorrect percentages of patients in a subgroup, contradictory results, or statistical errors.

The study doesn’t predict which papers will eventually be retracted, since such discrepancies occur frequently (including one in the paper itself), but the authors suggest a preponderance could serve as an “early and accessible signal of unreliability.”

According to the authors, all based at Imperial College London, you see a lot more of these in papers that are eventually retracted: Continue reading Can you spot the signs of retraction? Just count the errors, says a new study
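The simplest of the discrepancy checks described above — whether a reported subgroup percentage actually matches the underlying counts — can be sketched as follows (the numbers are hypothetical, not taken from the BMJ study):

```python
def percentage_consistent(n_subgroup, n_total, reported_pct, tolerance=0.5):
    """Return True if reported_pct could plausibly come from
    n_subgroup / n_total, allowing for rounding (default +/- 0.5
    percentage points, i.e. rounding to the nearest whole percent)."""
    actual = 100.0 * n_subgroup / n_total
    return abs(actual - reported_pct) <= tolerance

# 45 of 120 patients is 37.5%: reporting "38%" is consistent with the
# counts, while reporting "42%" would be flagged as a discrepancy.
```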

Here’s how to keep clinical trial participants honest (and why that’s a big deal)

Additional lab tests, a clinical trial patient registry, and rewards for honesty are among the advice doled out in this week’s issue of the New England Journal of Medicine to help researchers address the major issue of participants lying to get into clinical trials.

In the Perspective, David B. Resnik and David J. McCann, both based at the National Institutes of Health, address concerns raised by a 2013 survey of clinical trial participants that revealed “high rates” of “deceptive behavior.” Specifically: Continue reading Here’s how to keep clinical trial participants honest (and why that’s a big deal)

Weekend reads: “Unfeasibly prolific authors;” why your manuscript will be rejected; is science broken?

The week at Retraction Watch featured revelations of yet more fake peer reviews, bringing the retraction total to 250. Here’s what was happening elsewhere: Continue reading Weekend reads: “Unfeasibly prolific authors;” why your manuscript will be rejected; is science broken?

Three more retractions for former record-holder Boldt, maybe more to come

Justus Liebig University in Germany has been investigating concerns that Joachim Boldt, number two on the Retraction Watch Leaderboard and now up to 92 retractions, may have “manipulated” more data than previously believed.

Until now, the vast majority of Boldt’s retractions were thought to have involved inadequate ethics approval. However, new retraction notices for Boldt’s research provide evidence that the researcher also engaged in significant data manipulation.

The first retraction from the university investigation emerged last year. Two of three new notices cite the investigation specifically, and an informant at the university told us that there are more retractions to come.

Here are the retracted papers that are freshly on the record, starting with an August retraction for a 1991 Anesthesiology paper (cited 37 times, according to Thomson Scientific’s Web of Knowledge):

Continue reading Three more retractions for former record-holder Boldt, maybe more to come