Should authors have to retract papers based on data obtained unethically?

via Kent State http://bit.ly/r2CW44

Careful Retraction Watch readers may have noticed that one of the categories in our right-hand column under “by reason for retraction” is “lack of IRB approval.” That’s because in just over a year, we’ve written a number of posts about two cases of retractions for that reason.

One was the now-infamous case of Joachim Boldt, who has retracted some 90 papers. The other was more mundane, about a group studying injuries among Aussie rules football players.

These retractions — and another case in which lung cancer screening trial investigators have said 90 percent of their consent forms are unobtainable, according to The Cancer Letter and The New York Times — raise some important ethical questions that we explore in our latest LabTimes column. Excerpt: Continue reading Should authors have to retract papers based on data obtained unethically?

Is it time for a Retraction Index?

We often hear — with data to back the statement — that top-tier journals, ranked by impact factor, retract more papers than lower-tier journals. For example, when Murat Cokol and colleagues compared journals’ retraction numbers in EMBO Reports in 2007, as Nature noted in its coverage of that study (h/t Richard van Noorden):

Journals with high impact factors retract more papers, and low-impact journals are more likely not to retract them, the study finds. It also suggests that high- and low-impact journals differ little in detecting flawed articles before they are published.

One thing you notice when you look at Cokol et al.'s plots is that although their models seem to take retractions "per capita" — in other words, per study published — into account, they don't report those figures.
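To make the "per capita" idea concrete, here is a minimal sketch of the kind of figure that goes unreported — retractions normalized by publication volume rather than counted raw. The journal names and numbers below are hypothetical, purely for illustration; normalizing this way is, as we understand it, roughly the same move the paper discussed next formalizes as a "retraction index."

```python
# Hypothetical illustration: retractions "per capita", i.e. normalized by the
# number of papers a journal publishes, rather than compared as raw counts.
# Journal names and figures are made up for the example.

journals = {
    # name: (papers published over the period, retractions over the period)
    "Journal A (high impact)": (4_000, 12),
    "Journal B (mid impact)": (9_000, 9),
    "Journal C (low impact)": (25_000, 5),
}

for name, (papers, retractions) in journals.items():
    per_thousand = 1000 * retractions / papers  # retractions per 1,000 papers
    print(f"{name}: {retractions} retractions, "
          f"{per_thousand:.2f} per 1,000 papers published")
```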

Enter a paper published this week in Infection and Immunity (IAI) by Ferric Fang and Arturo Casadevall, “Retracted Science and the Retraction Index.” Continue reading Is it time for a Retraction Index?

Why do — and don’t — journal editors retract articles?

Liz Wager, the chair of the Committee on Publication Ethics, knows something about retractions. In April, she and University College London’s Peter Williams published a paper in the Journal of Medical Ethics showing that journal editors’ approaches to retractions aren’t uniform.

The pair is back with another paper, using the same dataset of retractions and published in Science and Engineering Ethics, in which they ask journal editors why they retract — or don’t. The findings — more on them below — informed COPE’s 2009 guidelines on retractions, as did those in the April paper.

From the introduction to the new paper (link added): Continue reading Why do — and don’t — journal editors retract articles?

So how often does medical consensus turn out to be wrong?

In a quote that has become part of medical school orientations everywhere, David Sackett, often referred to as the “father of evidence-based medicine,” once famously said:

Half of what you’ll learn in medical school will be shown to be either dead wrong or out of date within five years of your graduation; the trouble is that nobody can tell you which half–so the most important thing to learn is how to learn on your own.

Sackett, we are fairly sure, was making an intentionally wild estimate when he said "half." [See note about these strikethroughs at bottom of post.] But a fascinating study out today in the Archives of Internal Medicine gives a clue as to the real figure. Continue reading So how often does medical consensus turn out to be wrong?

No academic matter: Study links retractions to patient harm

Flawed research that leads to retractions is a problem for editors, publishers and the scientific community. But what about patients?

In a recent issue of the Journal of Medical Ethics, R. Grant Steen asks the question — and answers it in the affirmative.

We’ve heard from Steen before; he has written two recent papers on the scope of retractions, finding that the number of retractions seems to be rising faster than the number of publications on the shelves.

This time, Steen takes a crack at ferreting out what he calls “harm by influence,” the admittedly subtle effect that troubled studies have on downstream research. His findings certainly raise concerns. Continue reading No academic matter: Study links retractions to patient harm

Want to avoid a retraction? Hire a medical writer, say medical writers

A team of Australian medical writers who analyzed four decades' worth of retractions has reached the conclusion — we trust you're sitting down — that people in their profession are more honest than, well, the rest of us.

According to the authors, articles in the medical literature are substantially less likely than other papers to be retracted for any reason, including mistakes or misconduct, if they have a medical writer as a declared co-author. The same applies to articles produced with the help of drug and device makers, whether that help was financial support or authorship assistance, the study found. And when both occur, retractions are vanishingly rare. Indeed, they found no instance of a retraction resulting from misconduct.

The study, which appears in Current Medical Research & Opinion, has flaws, which we’ll lay out in a minute. Continue reading Want to avoid a retraction? Hire a medical writer, say medical writers

Why was that paper retracted? Peer-reviewed evidence that Retraction Watch isn’t crazy

Retraction Watch readers will no doubt have realized by now that we are often frustrated by the opacity of many of the retraction notices we cover. And some critics may wonder if we’re overstating that case.

Well, wonder no more.

In a study published online yesterday in the Journal of Medical Ethics, Liz Wager and Peter Williams looked at retractions from 1988 to 2008. Their findings: Continue reading Why was that paper retracted? Peer-reviewed evidence that Retraction Watch isn’t crazy

Is scientific fraud on the rise?

As readers of this blog have no doubt sensed by now, the number of retractions per year seems to be on the rise. We feel that intuitively as we uncover more and more of them, but there are also data to suggest this is true.

As if to demonstrate that, we’ve been trying to find time to write this post for more than a week, since the author of the study we’ll discuss sent us his paper. Writing about all the retractions we learned about, however, kept us too busy.

But given how sharp Retraction Watch readers are, you will be quick to note that more retractions doesn’t necessarily mean a higher rate. After all, there were about 527,000 papers published in 2000, and 852,000 published in 2009, so a constant rate of retractions would still mean a higher number. Here’s what Grant Steen, who published a paper on retractions and fraud last month in the Journal of Medical Ethics, found when he ran those numbers: Continue reading Is scientific fraud on the rise?
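As a back-of-the-envelope illustration of why a growing count is not the same thing as a growing rate, here is a minimal sketch using the publication totals quoted above; the constant retraction rate in it is a made-up figure for illustration only, not anything from Steen's paper.

```python
# Back-of-the-envelope: a constant retraction *rate* still produces more
# retractions in absolute terms when the literature itself grows.
papers_2000 = 527_000   # approximate papers published in 2000 (quoted above)
papers_2009 = 852_000   # approximate papers published in 2009 (quoted above)

assumed_rate = 0.0002   # hypothetical constant rate: 2 retractions per 10,000 papers

expected_2000 = papers_2000 * assumed_rate
expected_2009 = papers_2009 * assumed_rate

print(f"Expected retractions in 2000: {expected_2000:.0f}")  # ~105
print(f"Expected retractions in 2009: {expected_2009:.0f}")  # ~170
# The count rises even though the rate never changed, which is why the
# per-paper comparison is the one worth running.
```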

How much plagiarism should editors tolerate? A poll

Photo by captain.tucker via flickr http://www.flickr.com/photos/russell300d/

Over the past few weeks, you’d have been forgiven for wondering if the name of this blog should be “Plagiarism Watch” instead of Retraction Watch. Just take a look at all of the recent plagiarism cases:

That last example inspired this poll. When we brought an example of likely plagiarism by the same author to the attention of one journal editor, he was unmoved. "[A]s all editors know there are rarely absolutely clear cut issues in which the line is unequivocally drawn in the sand," said the editor-in-chief of Biomaterials, David Williams of Wake Forest. (Williams also suggested that the relative obscurity of the plagiarizers' institution, and of the journal where they published, meant the case wasn't worth investigating.)

So where is that line in the sand? Take our poll:

“What were you thinking? Do not manipulate those data”

The title of this post is stolen, with adoring attribution, from a piece in the November 16, 2010 issue of Autophagy, because we couldn’t have said it better ourselves.

In the piece, the journal's editor, Dan Klionsky, focuses on the manipulation of images. It reads, in part: Continue reading "What were you thinking? Do not manipulate those data"