Should systematic reviewers report suspected misconduct?


Authors of systematic review articles sometimes overlook misconduct and conflicts of interest present in the research they are analyzing, according to a recent study published in BMJ Open.

During the study, researchers reviewed 118 systematic reviews published in 2013 in four high-profile medical journals: Annals of Internal Medicine, the British Medical Journal, The Journal of the American Medical Association and The Lancet. The researchers also contacted the review authors with additional questions; 80 (69%) responded. The assessment included whether review authors had followed certain procedures to ensure the integrity of the data they were compiling, such as checking for duplicate publications, and whether they had analyzed how the original authors’ conflicts of interest may have affected the findings.

Carrying out a systematic review involves collecting and critically analyzing multiple studies in the same area. It’s especially useful for accumulating and weighing conflicting or supporting evidence from multiple research groups. A byproduct of the process is that it can also help spot odd practices, such as duplication of publications.

The findings of the study suggest that sometimes systematic reviewers fall short — authors of seven of these reviews said in follow-up questioning that they included studies they believed contained signs of misconduct, such as data falsification. But only two reviews reported their suspicions. 

Eighty-one review authors noted that they searched for duplicates; three checked whether the original authors had obtained ethical approval, and only five reported the conflicts of interest of the original authors.

Review authors also didn’t consistently perform checks to avoid publication bias, such as making sure negative findings weren’t sitting in unpublished studies, or contacting original authors to ask about unreported outcomes.

First author Nadia Elia, a public health physician and a clinical epidemiologist at Geneva University Hospitals and the University of Geneva, told us that she thinks her results are an “underestimation of the real problem,” as many systematic review authors were not ready to point fingers or accuse others of misconduct. What’s more, when reviewers do suspect misconduct, they don’t know what to do with that information, Elia noted.

Some review authors took action after publication: One review included several studies co-authored by Joachim Boldt, who was later found to have committed misconduct in many of his publications. Once this came to light, the review authors took a second look at those potentially problematic data, Elia and her co-authors note:

…a survey performed in 2010, and focusing on Boldt’s research published between 1999 and 2010, had led to the retraction of 80 of his articles due to data fabrication and lack of ethical approval. The seven articles co-authored by Boldt were kept in the review as they had been published before 1999. Nonetheless, the reviewers performed sensitivity analyses excluding these seven articles, and showed a significant increase in the risk of mortality and acute kidney injury with hydroxyethyl starch solutions that was not apparent in Boldt’s articles.
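The kind of sensitivity analysis described in that passage is conceptually simple: pool the effect estimate with and without the suspect studies and compare the two results. Below is a minimal, purely illustrative sketch in Python; the study data are invented, and the fixed-effect risk-ratio pooling shown here is just one common approach, not the method used in the review discussed above.

```python
import math

# Purely illustrative: made-up trial data, NOT from the Boldt review or the BMJ Open paper.
# Each row: (events_treatment, n_treatment, events_control, n_control, flagged_as_suspect)
studies = [
    (12, 100,  8, 100, False),
    (20, 150, 14, 150, False),
    ( 5,  80,  9,  80, True),   # hypothetical study with suspected data problems
    (30, 200, 22, 200, False),
]

def pooled_risk_ratio(rows):
    """Fixed-effect (inverse-variance) pooled risk ratio, computed on the log scale."""
    weighted_sum, total_weight = 0.0, 0.0
    for events_t, n_t, events_c, n_c, _ in rows:
        log_rr = math.log((events_t / n_t) / (events_c / n_c))
        variance = 1/events_t - 1/n_t + 1/events_c - 1/n_c   # variance of the log risk ratio
        weight = 1 / variance
        weighted_sum += weight * log_rr
        total_weight += weight
    return math.exp(weighted_sum / total_weight)

print(f"Pooled RR, all studies:        {pooled_risk_ratio(studies):.2f}")
print(f"Pooled RR, suspects excluded:  {pooled_risk_ratio([s for s in studies if not s[4]]):.2f}")
```

If the two numbers diverge, as they did in the hydroxyethyl starch example, the flagged studies are materially influencing the review’s conclusion.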

Elia said she understands why systematic reviewers are not explicitly looking out for the conflicts of interest of the original studies’ authors. Although conflicts of interest are now exhaustively reported by journals, this information is rarely useful, Elia said. What would be useful, she added, is if authors explained how their declarations may have ultimately introduced biases into their conclusions:

…we don’t understand what the impact of having received money from Pfizer [is, for example].

Another caveat is that all of the systematic reviews evaluated were published in the field’s most reputable journals. Elia added:

That was a discussion we had with one of the [peer] reviewers who said it was not clear how we could extrapolate to other journals, and the reviewer said it is probably worse in other journals.

Out of Elia’s sample of systematic reviews, 27 (23%) looked at which of the original papers had declared funding sources, but only 6 (5%) analyzed whether sponsored articles favored the treatment being tested. The authors summarize this in the paper:

One reviewer claimed that sponsor bias was unlikely, while three were unable to identify any sponsor bias. Finally, two reviews identified sponsor bias.

When asked if systematic reviewers should be actively looking at their sample studies’ sponsors, Elia added:

I’m personally quite interested in those kinds of things because only systematic reviews can do this…it’s one of the roles of systematic reviews to try to identify those meta-biases.

Finally, three reviews (2.5%) evaluated whether the original papers had obtained ethical approval. But, Elia said:

I’m not sure systematic reviewers are ready to do this and I’m not sure it’s their job.

Charles Seife, a journalism professor at New York University, previously examined 78 publications resulting from trials in which the US Food and Drug Administration (FDA) found serious misconduct; only three of those publications mentioned the issues uncovered by the FDA. Seife wrote to us about the current study:

It’s fairly clear that a large proportion of research misconduct isn’t getting reported in the peer-reviewed literature, and this study implies that even high-quality systematic reviews have a spotty record of detecting and reporting misconduct. For me, the most surprising finding was how many reviews apparently failed to check for publication bias, something I’d consider to be an essential element to a good review. I wouldn’t take this study’s results as gospel, but they’re certainly suggestive that systematic reviewers should spend more effort in looking for missing studies and for misconduct.

Paul Shekelle, co-director of the Southern California Evidence-based Practice Center for the global policy think tank RAND in Santa Monica, California, and co-editor-in-chief of the journal Systematic Reviews, told us:

Making sure your included studies aren’t duplicates is at most an incremental effort on top of everything else being done. Some of the other steps being looked for here, in particular searching for unpublished data, and attempting contact with original authors to clarify issues about their studies, are ones that are much more than an incremental effort. Many systematic review groups have tried and their experience is pretty uniform that as a routine practice these aren’t worth the time and resources spent on them.

Shekelle said that the test for publication bias is “technically not feasible,” describing his personal experience:

…our prior attempts to contact original authors led to getting responses from only about 25%-30% of the authors. Even when we could contact them, some of these studies were performed many years ago and the authors tell us the details we are looking for are no longer known or otherwise lost to time. Nearly all our attempts at finding truly unpublished studies — meaning something that did not appear in even an abstract that we can identify via one of the databases that indexes scientific abstracts — never lead to anything. So we’ve discarded both of these as routine measures we do in our reviews, and expect other review groups have done likewise.

He explained what his team did when they suspected misconduct:

In the cases where we’ve found duplicate publication, we’ve done pretty much what the one author quoted in the article says: “we do not think we should be distracted from our main goal…” which in our case is to turn in a review to the government within their timeline. So we’ve deleted the duplicates, noted it, and moved on. I wouldn’t even know the appropriate body to report this to, and the duplicate publication may have happened 10 years ago or more, and the authors may not even be at the same institutions any longer. This is not a process I would want to get involved in.

Elia added that, nevertheless, she thinks clear guidelines need to be established on what reviewers should do when they encounter these situations in the literature.


7 thoughts on “Should systematic reviewers report suspected misconduct?”

  1. Hmm, and who said that we don’t? If we happen to come across something, we actually do.
    I have to say, though, it does take a certain amount of time to prepare a comprehensive description of the case, so often it happens some time after the fact.

  2. If you’re writing a review, by definition you are evaluating the studies. So do your job. That said, the simplest solution is to NOT write a review. It’s a lot more work than doing an original research study.

  3. I agree with Dean: a lot of work to write reviews. But following up on problems uncovered in the reviews can lead down a rabbit-hole, and there are negative rewards as colleagues and journal editors wish you would go away.

  4. Writing a review is a lot of work, and there may be clear cases where research misconduct has taken place. There are also other times when I feel the reviewer is better served by Hanlon’s razor. To paraphrase, never attribute to malice that which is adequately explained by carelessness. I find this is usually a more suitable assumption when questionable choices in statistical usage or analysis arise, for example.

  5. How about the other way around: for every retracted article, the authors/journals of all articles citing it should at least be informed.
    Seems to me to be the more appropriate “punishment” for the cause of the retraction.

    1. +1; however, I’d place this burden on the journals that published the retraction. Obviously this would have to be done by some “governing body”, e.g. COPE etc.

  6. What I meant is that this new rule should be enforced through institutions/guidelines responsible for journal quality.
