Retraction Watch

Tracking retractions as a window into the scientific process

Did a clinical trial proceed as planned? New project finds out

Ben Goldacre

A new project does the relatively straightforward task of comparing reported outcomes from clinical trials to what the researchers said they planned to measure before the trial began. And what they’ve found is a bit sad, albeit not entirely surprising.

As part of The Compare Project, author and medical doctor Ben Goldacre and his team have so far evaluated 36 clinical trials published by the top five medical journals (New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, Annals of Internal Medicine, and British Medical Journal). Many of those trials included “switched outcomes,” meaning the authors didn’t report something they said they would, or included additional outcomes in the published paper, with no explanation for the change.

Here are the latest results from the project, according to its website:

…as of 36 trials assessed to date, the current overall mean proportion of prespecified outcomes reported is 66.8%, and the mean number of non-prespecified outcomes added is 5.4. The total number of prespecified outcomes left unreported to date is 121. The total number of non-prespecified outcomes reported is 195.

Making such changes without a heads up is potentially problematic, the team says:

Before carrying out a clinical trial, all outcomes that will be measured (e.g. blood pressure after one year of treatment) must be pre-specified in a trial protocol and on a clinical trial registry (e.g. clinicaltrials.gov). This is because if researchers measure lots of things, some of those things are likely to give a positive result by random chance (a false positive). A pre-specified outcome is much less likely to give a false-positive result. In the trial report, all pre-specified outcomes must then be reported, to ensure a fair picture of the trial results.

However, pre-specified outcomes are often left unreported, while novel outcomes that were not pre-specified are reported. This is an extremely common problem that distorts the evidence we use to make real-world clinical decisions.
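The multiple-comparisons logic in the quote above is easy to demonstrate with a short simulation. This sketch is purely illustrative and is not part of the Compare Project's methodology: under the null hypothesis (no real treatment effect), each outcome still has a 5% chance of looking "significant" at the conventional threshold, so a trial that measures many outcomes and reports only the significant ones will often report pure noise.

```python
import random

def chance_of_false_positive(n_outcomes, n_trials=10_000, alpha=0.05, seed=0):
    """Estimate how often a null trial that measures n_outcomes
    sees at least one outcome cross the significance threshold
    purely by chance (i.e., at least one false positive)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Under the null, each outcome is 'significant' with prob. alpha.
        if any(rng.random() < alpha for _ in range(n_outcomes)):
            hits += 1
    return hits / n_trials

# A single pre-specified outcome stays near the nominal 5% rate,
# while measuring 20 outcomes pushes the chance of at least one
# spurious 'positive' toward 1 - 0.95**20, roughly 64%.
print(chance_of_false_positive(1))
print(chance_of_false_positive(20))
```

This is why an outcome chosen after seeing the data is so much weaker as evidence than one pre-specified in the trial registry: the post hoc outcome may simply be the one that won the lottery.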

To determine whether trials contained switched outcomes, Goldacre and his team simply compare a trial’s pre-specified outcomes to what ends up being published. If they find issues, and the authors have offered no explanation for the switch, the team submits a letter to the journal.

The site lists the trials they have evaluated, including many letters — but has withheld results from trials where the team has found switching but the journal hasn’t yet published the letter, in order to sidestep any journal’s objections to “prior publication.” For now, Goldacre told Retraction Watch he is choosing to trust the system:

…we’re sending letters to journals, and we’re strictly abiding by journals’ rules about not prepublishing the content of letters to them, because we want to see if the journal system is fit for purpose. To expand on that a little: we entrust journals with an incredibly important job, a huge portion of the knowledge management in medicine and science.  We should be able to expect that journals routinely police something as unambiguous as outcome switching, during the peer review and editing process. Where there are slip-ups, we should be able to expect that those errors are corrected swiftly.

But he already has doubts, he told us:

It’s early days, but the evidence from our project so far is that journals are not managing this information effectively on behalf of the community. We are already seeing a systematic problem of outcome switching, as other researchers have found before. But after sending in letters on every single trial with outcome switching, we are also now seeing, live, on the site, as the data accumulates, a startling lack of responsiveness when this problem is pointed out on individual trials. I think that’s very troubling. Meanwhile doctors, researchers and patients are taking the results of these trials at face value, with an implicit belief that such basic issues are being managed.

It’s a straightforward task to check whether a trial contains a swapped outcome, he said, and a letter describing that shouldn’t need to be peer-reviewed or sent to the authors for a response:

It’s not clear to me what’s there to discuss.

Not every trial changes its outcomes for nefarious reasons, he added. But every time outcomes are switched, that creates a “culture of permissiveness” that lets other people do the same to tweak the trial’s conclusions:

Even if you are not doing that yourself, you are giving cover to people who are.

The ideal scenario, he said, is when the journal recognizes that the authors left out key outcomes, and issues a correction reporting those missing results:

My personal view is that if a trial has misreported its outcomes, then the journal should simply issue a correction where the prespecified primary and secondary outcomes are reported.

If a journal ever refuses to publish one of the project’s letters detailing the missing outcomes, “then we will work on several different ways of communicating that to people.”

The research project is based at the University of Oxford, where Goldacre is a Senior Clinical Research Fellow in the Centre for Evidence Based Medicine. Goldacre has received funding from the Laura and John Arnold Foundation, which also supports our parent organization.

The project’s eventual goal is to make itself unnecessary, said Goldacre. In other words, to have journals start checking trials for missing outcomes themselves:

We are not going to go away. We are here to stay. We think this is a widespread problem. [Journals] are going to be publishing an awful lot of letters from us. I don’t know at what point this becomes ridiculous, and they say: ‘maybe we should be monitoring primary and secondary outcomes ourselves.’

Although checking for swapped outcomes is conceptually straightforward, it is not easy in practice, Goldacre told us:

This is a phenomenally laborious process. Not a week goes by that we don’t curse the day we set out to do this.

Written by Alison McCook

December 4th, 2015 at 9:30 am

Comments
  • Ken Pimple December 4, 2015 at 10:57 am

This is an informative post, and the Compare Project is performing an important service. We should all support it, especially journal editors.

  • Toby White December 4, 2015 at 12:46 pm

    It’s a small sample, but it seems that virtually all of the studies which reported a non-pre-specified outcome involved treatments which have generated some public controversy (e.g., benefits of red wine, acupuncture, knee replacement, etc.). Is there some reason for this?

  • EOAutomaton December 4, 2015 at 3:59 pm

Author accountability and transparency are difficult things for journals to monitor. That being said… what happens if a journal contacts an author to let them know they have been prompted to issue a correction, and then the author just updates their registration record after being alerted by the journal? It seems like the Compare Project, based on its protocol, is trying to force journals to issue corrections using an “or else” approach (publish the CP’s requested correction, or else the letter and the journal’s potential rejection get published on the CP’s own website). How are journals supposed to deal with the inevitable pushback from the author and the CP?

  • Henk Jan Out December 5, 2015 at 3:23 am

    We did similar research in 226 manuscripts and found discrepancies between registered and published primary outcomes in nearly 30% of the papers. The occurrence of these discrepancies did not influence publication acceptance. Journals can easily contribute to prevention of selective publication by monitoring this more closely. http://www.sciencedirect.com/science/article/pii/S0895435614004971

  • James C. Coyne December 6, 2015 at 10:19 am

    The issue of switched outcomes is an important one that should be brought to light but it is not limited to pharmacological trials. The PACE trial of psychological interventions for chronic fatigue syndrome is centered at Oxford and made important switches in outcomes from its published protocol. The PACE investigators defy requests for their data and aggressively protect the secrecy concerning how their switching affected the outcomes they so vigorously promote.

    Some of us have urged that Ben Goldacre comment on the withholding of PACE data and the switching of outcomes but he declines to do so.

    It may be relevant that Goldacre is a former employee of Simon Wessely, a key spokesperson for the PACE trial. Wessely states this in providing praise for Goldacre’s recent book: http://www.researchgate.net/publication/273789688_Finding_the_serious_in_the_absurd

  • agm December 8, 2015 at 1:00 pm

    Could you explain why a pre-specified outcome is less likely to give a false positive?

    • james December 8, 2015 at 8:14 pm

      agm: If you prespecify the outcomes, this is based on your hypothesis. If you do your research, find your outcomes are not significant but some non-prespecified ones are positive, what is the chance of your hypothesis being true?

      Or, a different way of looking at it is to consider the number of prespecified outcomes (say 10) and the number of non-prespecified outcomes (infinite). You’ll have a lot of positives in an infinite set.

      Essentially it’s about making sure that your work is not derailed by your own wishful thinking.

    • Emil OW Kirkegaard December 14, 2015 at 11:52 am

      It is not, but it is less likely to be a false-positive due to selective reporting of positive results.
