In the paper, “Retraction Publications in the Drug Literature,” which appears in Pharmacotherapy, Jennifer C. Samp, Glen T. Schumock, and A. Simon Pickard take a look at previous studies of retractions, including those by Grant Steen and John Budd, both of whose work we’ve covered. They also identify 742 retractions in the biomedical literature from 2000 to 2011 and analyze the 102 of those that involved drug studies.
Noting the growing interest in retractions, they write that
although the number of retractions is increasing, it is possible that the percentage of total publications that are retracted is not changing.
Actually, Thomson Reuters data reported by the Wall Street Journal (in a story the authors cite, in fact) and by Nature have shown quite convincingly that while the number of studies published has grown by 44%, the number of retractions per year has grown at least 10-fold. Put those two figures together and the retraction rate per paper published has climbed roughly sevenfold, so the percentage does appear to be changing.
So we were already making skeptical notes when we arrived at the results:
We found that 73 publications (72%) were retracted for scientific misconduct whereas 29 (28%) were retracted because of error.
We found that the proportion of retractions for scientific misconduct was much higher in drug therapy studies (72%) than in the broader biomedical literature as reported in the 1998 study (37%) and in another study in 2011 (27%).
After running through a few ways these numbers could shift depending on how misconduct is defined, they conclude:
If these differences are real, it implies that drug therapy studies hold greater attraction for fraud than other types of scientific studies. New indications based on drug therapies have great potential to capture the attention of the public, perhaps more than a behavioral or education-based intervention, and this may hold attraction for opportunists. Pressure to publish and obtain grant funding among researchers and academicians may also lead to the misconduct we observed; it is possible that expectations are greater in this regard among those who publish drug therapy literature.
Those are all reasonable theories, but the problem lies in the first phrase: “If these differences are real…” It’s pretty difficult to tell whether they are. That’s because, as the authors point out elsewhere in the article when describing the studies in their dataset that were retracted for misconduct:
More than one half of these articles were authored by three individuals: Naoyuki Nakao (3 retractions), Scott S. Reuben (15 retractions), and Joachim Boldt (25 retractions).
For one thing, we’re not sure those numbers reflect the real number of retractions by each author. Reuben, for example, has had 22 studies retracted, not 15. Boldt’s tally is 88, although some of those may not have appeared in PubMed by the time the authors searched. More to the point, though, if you remove those frequent offenders from the analysis, you’re left with a very different picture, as the quick calculation below suggests.
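To make that concrete, here’s a rough back-of-the-envelope sketch (in Python) using the counts quoted above; the assumption that all 43 of the trio’s papers fall in the misconduct group is ours, not the authors’:

```python
# Back-of-the-envelope: how the misconduct proportion shifts if the three
# most prolific authors are excluded. Counts are the paper's, as quoted
# above; treating all 43 of their retractions as misconduct is our
# simplifying assumption.
misconduct, error = 73, 29
total = misconduct + error      # 102 retracted drug studies
prolific = 3 + 15 + 25          # Nakao + Reuben + Boldt

print(f"All papers:     {misconduct / total:.0%} misconduct")                             # 72%
print(f"Minus the trio: {(misconduct - prolific) / (total - prolific):.0%} misconduct")   # 51%
```

Even under that crude adjustment, the misconduct share falls from 72% to about half, a good deal closer to the 37% and 27% figures from the earlier studies.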
To their credit, Pickard and his colleagues note a number of limitations of the study, in particular a problem that Retraction Watch readers will find frustratingly familiar: One in ten of the retraction notices said nothing about why the paper was retracted.
But at the end of the day, what they’ve done is sort of like running a meta-analysis that counts the same patients multiple times. And even 102 papers isn’t a huge dataset. We’ve said before that gathering data is critical, but drawing conclusions from relatively small numbers — particularly when you consider the 1.4 million or so papers published every year — is fraught with peril. Does the ratio of drug studies to other life science research among retractions reflect that same ratio in science as a whole?
Still, there’s plenty to chew on in this paper, including the role of industry funding.
We observed that retracted studies typically did not receive external funding support, and many were internally funded. We hypothesize that fewer checks and balances exist when funding is from an internal source and that this may facilitate the opportunity to forge or fabricate data.
The data here are similarly limited, and the fact that data weren’t fabricated doesn’t mean they weren’t the product of a trial set up to favor a particular conclusion. But hypothesis generation is always welcome when it comes to retractions, right?
Hat tip: Neuro_Skeptic