Retraction Watch

Tracking retractions as a window into the scientific process

Is misconduct more likely in drug trials than in other biomedical research?


A new paper by Chicago pharmacy researchers suggests that researchers performing drug studies are more likely to commit fraud than are their colleagues in the rest of biomedicine.

In the paper, “Retraction Publications in the Drug Literature,” which appears in Pharmacotherapy, Jennifer C. Samp, Glen T. Schumock, and A. Simon Pickard take a look at previous studies of retractions, including those by Grant Steen and John Budd, both of whose work we’ve covered. For their own analysis, they identify 742 retractions in the biomedical literature from 2000 to 2011, 102 of which involved drug studies.

Noting the growing interest in retractions, they write that

although the number of retractions is increasing, it is possible that the percentage of total publications that are retracted is not changing.

Actually, Thomson Reuters data, reported by the Wall Street Journal — in a story the authors cite, in fact — and Nature have shown quite convincingly that while the number of studies published has grown by 44%, the number of retractions per year has grown at least 10-fold.

So we were already making skeptical notes when we arrived at the results:

We found that 73 publications (72%) were retracted for scientific misconduct whereas 29 (28%) were retracted because of error.

And:

We found that the proportion of retractions for scientific misconduct was much higher in drug therapy studies (72%) than in the broader biomedical literature as reported in the 1998 study (37%) and in another study in 2011 (27%).

After running through several reasons these numbers could shift depending on how misconduct is defined, they conclude:

If these differences are real, it implies that drug therapy studies hold greater attraction for fraud than other types of scientific studies. New indications based on drug therapies have great potential to capture the attention of the public, perhaps more than a behavioral or education-based intervention, and this may hold attraction for opportunists. Pressure to publish and obtain grant funding among researchers and academicians may also lead to the misconduct we observed; it is possible that expectations are greater in this regard among those who publish drug therapy literature.

Those are all reasonable theories, but the problem is in the first phrase: “If these differences are real…” It’s pretty difficult to tell if they are. That’s because, as the authors point out elsewhere in the article, describing the studies in their dataset retracted for misconduct:

More than one half of these articles were authored by three individuals: Naoyuki Nakao (3 retractions), Scott S. Reuben (15 retractions), and Joachim Boldt (25 retractions).

For one thing, we’re not sure those numbers reflect the real number of retractions by each author. Reuben, for example, retracted 22 studies, not 15. Boldt’s tally is 88, although some of them may not have shown up in PubMed by the time the authors searched. More to the point, though, if you remove those frequent offenders from the analysis, you’re left with a very different picture.

To their credit, Pickard and his colleagues note a number of limitations of the study, in particular a problem that Retraction Watch readers will find frustratingly familiar: One in ten of the retraction notices said nothing about why the paper was retracted.

But at the end of the day, what they’ve done is sort of like doing a meta-analysis and counting the same patients numerous times. And even 102 papers isn’t a huge dataset. We’ve said before that gathering data is critical, but drawing conclusions from relatively small numbers — particularly when you consider the 1.4 million or so papers published every year — is fraught with peril. Does the ratio of drug trials to other life science research in retractions reflect the overall ratio of what’s going on in science?

Still, there’s plenty to chew on in this paper, including the role of industry funding.

We observed that retracted studies typically did not receive external funding support, and many were internally funded. We hypothesize that fewer checks and balances exist when funding is from an internal source and that this may facilitate the opportunity to forge or fabricate data.

The data here are similarly limited, and the absence of outright fabrication doesn’t mean a trial wasn’t set up to favor a particular conclusion. But hypothesis generation is always welcome when it comes to retractions, right?

Hat tip: Neuro_Skeptic

Written by Ivan Oransky

May 17th, 2012 at 11:00 am

Comments
  • Conrad T Seitz MD May 17, 2012 at 1:31 pm

    Here’s the problem that really bothers me. I apologize in advance for my bias towards drug studies. I think they are more important (in the near term) than studies that identify cellular proteins and transport intermediates (possibly by Western Blot), because they have a more immediate impact. Drug studies induce doctors to prescribe medications that they hope will lead to better health/less illness among their patients. When these studies are fraudulent, patients suffer immediately by getting the wrong medication.
    The statistics appear to be showing that more and more of these studies are being exposed as fraudulent. Common sense would suggest that the exposed studies are only the tip of the iceberg, since a large proportion of the fraud is not easily discoverable.
    These discoveries worry me because they suggest that the current way of doing these studies is broken. Whether it is the choice of what to study, how to study it, how to fund the studies, or how you choose your staff, something is very wrong. I am (naively) clinging to the belief that the broken-ness is something fairly recent, within the last forty or fifty years, because otherwise how would we have reached the position we have been in, with some really great drugs to use that would be really cheap if they were off patent? (Stuff like Viagra, Lipitor, and 17-beta-estradiol, to name just a few.)
    As contrasted to the position we are increasingly finding ourselves in, with ridiculously (and increasingly) expensive drugs coming out at longer and longer intervals…with more and more studies retracted for fraud…

  • Fernando Pessoa May 18, 2012 at 6:41 am

    “I am (naively) clinging to the belief that the broken-ness is something fairly recent, within the last forty or fifty years”

    More like 100 years, I would say.

    When Edward Bernays was asked towards the end of his life about the last time he had contacted the press, he said something to the effect of 60 years earlier. A much cleverer man than his uncle. His method was to fund “research”, get it published in a scientific/medical journal, and then let the newspapers report it to the public.

    http://www.npr.org/templates/story/story.php?storyId=4612464

    http://en.wikipedia.org/wiki/Full_breakfast

    “To promote sales of bacon, he conducted a survey of physicians and reported their recommendations that people eat hearty breakfasts. He sent the results of the survey to 5000 physicians, along with publicity touting bacon and eggs as a hearty breakfast”.

  • Jennifer C. Samp May 30, 2012 at 5:53 pm

    I would like to clarify the point made regarding the discrepancy in the number of retracted publications by Dr. Scott S. Reuben and Dr. Joachim Boldt. Several of these articles were outside the time period that we searched, 2000-2011 (i.e., they were published in the 1990s and therefore were not included in our analysis). Furthermore, many of the publications by Dr. Boldt did not involve drug therapy. For example, much of this research examined non-drug interventions, such as surgical techniques, and therefore did not meet our inclusion criteria.

    In regards to the argument that there is convincing evidence that the rate of retractions in the literature is increasing, it is important to remember that the studies on this topic have generally been limited to retractions from PubMed. While PubMed is a main database for scientific publications, it is not the only database. Further studies should examine other scientific journal databases to determine whether this trend persists.

    Finally, it is important to note that editors may have recently become more diligent in retracting misleading studies. The publication of retraction guidelines by the Committee on Publication Ethics (COPE) has assisted editors in this issue. Prior to that publication in 2009, no guidance was available.

  • Richard Van Noorden June 14, 2012 at 6:33 am

    Jennifer, re: “In regards to the argument that there is convincing evidence that the rate of retractions in the literature is increasing, it is important to remember that the studies on this topic have generally been limited to retractions from PubMed. While PubMed is a main database for scientific publications, it is not the only database. Further studies should examine other scientific journal databases to determine whether this trend persists.”

    Analysts have already examined the Thomson Reuters database and found the same trend. See http://www.nature.com/news/2011/111005/full/478026a.html
