Intent was there, but not the intention-to-treat analysis: Breast cancer study retracted

A group of Dutch researchers has retracted a paper they published in March after apparently learning that they’d bungled their statistical analysis in the study.

The article, “Effects of a pre-visit educational website on information recall and needs fulfilment in breast cancer genetic counselling, a randomized controlled trial,” was published in Breast Cancer Research by Akke Albada of the Netherlands Institute for Health Services Research and colleagues.

But according to the notice, Utrecht, we have a problem:

The authors would like to retract their article “Effects of a pre-visit educational website on information recall and needs fulfilment in breast cancer genetic counselling, a randomized controlled trial” [1]. After publication of this paper the co-authors noticed a discrepancy between the analyses as described (intention-to-treat analysis) and the analyses as performed (per-protocol analysis), leading to an overestimation of the intervention effects. Therefore the authors have decided to retract this paper in its current form.

Those results:

Intent-to-treat analysis showed that counselees in the intervention group (n = 103) had higher levels of recall of information from the consultation (β = .32; confidence interval (CI): .04 to .60; P = .02; d = .17) and post-visit knowledge of breast cancer and heredity (β = .30; CI: .03 to .57; P = .03) than counselees in the UC group (n = 94). Also, intervention group counselees reported better fulfilment of information needs (β = .31; CI: .03 to .60; P = .03). The effects of the intervention were strongest for those counselees who did not receive an indication for DNA testing. Their recall scores showed a larger increase (β = .95; CI: .32 to 1.59; P = .003; d = .30) and their anxiety levels dropped more in the intervention compared to the UC group (β = -.60; CI: -1.12 to -.09; P = .02). No intervention effects were found after the first visit on risk perception alignment or perceived personal control.

So what, exactly, was their mistake? In a nutshell, an intention-to-treat analysis takes into account whether subjects in a trial have dropped out. If too many have, the results may skew in a particular direction, often suggesting that an intervention works better than it actually does. The intention-to-treat analysis tries to keep the statistics honest by assuming that the reason people dropped out was that they had a negative outcome.
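To make the contrast concrete, here is a minimal sketch (entirely hypothetical data and arm names) of how an intention-to-treat grouping differs from a per-protocol one in a two-arm trial. Under ITT every randomized subject stays in their assigned arm (dropouts handled here with a crude worst-case imputation, one of several options); per-protocol keeps only adherent completers, grouped by the treatment they actually received:

```python
# Illustrative sketch (hypothetical data): intention-to-treat vs
# per-protocol grouping in a two-arm trial. Each record: assigned arm,
# arm actually received (None = dropped out), and an outcome score.
trial = [
    {"assigned": "intervention", "received": "intervention", "score": 8},
    {"assigned": "intervention", "received": "control",      "score": 5},
    {"assigned": "intervention", "received": None,           "score": None},
    {"assigned": "control",      "received": "control",      "score": 4},
    {"assigned": "control",      "received": "intervention", "score": 7},
    {"assigned": "control",      "received": "control",      "score": 3},
]

WORST_CASE = 0  # one crude way to impute dropouts: assume a poor outcome


def itt_means(records):
    """Intention-to-treat: analyse by ASSIGNED arm, keeping every
    randomized subject (dropouts imputed conservatively here)."""
    sums, counts = {}, {}
    for r in records:
        score = r["score"] if r["score"] is not None else WORST_CASE
        sums[r["assigned"]] = sums.get(r["assigned"], 0) + score
        counts[r["assigned"]] = counts.get(r["assigned"], 0) + 1
    return {arm: sums[arm] / counts[arm] for arm in sums}


def per_protocol_means(records):
    """Per-protocol: analyse by treatment RECEIVED, dropping anyone
    who deviated from their assignment or left the trial."""
    sums, counts = {}, {}
    for r in records:
        if r["received"] != r["assigned"] or r["score"] is None:
            continue
        sums[r["received"]] = sums.get(r["received"], 0) + r["score"]
        counts[r["received"]] = counts.get(r["received"], 0) + 1
    return {arm: sums[arm] / counts[arm] for arm in sums}


print(itt_means(trial))           # every randomized subject counted
print(per_protocol_means(trial))  # only protocol-adherent completers
```

On this toy data the per-protocol means make the intervention look far better than the ITT means do, which is exactly the kind of overestimation the retraction notice describes.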

Although these sorts of errors aren’t particularly common, they’re not unheard of. Indeed, we reported recently on a similar case involving a former Pfizer researcher. And one of us (Ivan) has pointed out the importance of such problems in print in the past. So it’s good to see authors taking responsibility in this case.

3 thoughts on “Intent was there, but not the intention-to-treat analysis: Breast cancer study retracted”

  1. Actually your description of an intention to treat analysis isn’t really correct.

    An intention to treat analysis means that all randomized subjects are analyzed in the groups that they were randomized to, no matter how they were actually treated. The retraction contrasts this to a per protocol analysis. So what might happen is that a patient is randomized to the control group, but the doc thought they should get the new treatment anyway so they receive the new treatment (or counseling method or whatever). Or the alternative, the patient decides they don’t want to take their assigned treatment.
    Intention to treat analysis keeps the patient and all their results in the group they were randomly assigned to, even if they got the other treatment.
    A per protocol analysis analyzes the patients according to what they actually received, even if they were assigned to the other group.
    A per protocol analysis is sometimes done in the early stages of development of a therapy, if you want to know what the effects of a treatment might be if all the patients actually got the assigned treatment, but it is open to very large biases, so it should only ever be presented as a supplementary analysis.
    Definitive clinical trials should not even report per-protocol analyses.
    The problem of dropout and missing data is a different problem, equally likely to introduce bias, and very difficult to deal with: once a patient has dropped out, you often don’t have any data to analyze! Assuming that the patient dropped out because they had a negative outcome is one particular way of trying to handle missing data, but it isn’t intention to treat analysis.

    1. This isn’t totally correct either. ITT also means including the dropouts within the analysis: once subjects are randomized and treated, they should be analysed. In a per-protocol analysis we can usually ignore those subjects, but that is up to whatever the protocol states.

  2. Trial registration reveals much more trouble here. Trial registration allows authors to record what they plan to do before starting. It should stop them from altering things like sample size or the primary outcome after looking at the data.

    The trial was registered here http://www.controlled-trials.com/ISRCTN82643064. The sample size (100 per group) and the primary outcome “Counselees’ participation, i.e. content and amount of questions asked and information received during the visit” were pre-specified. Apart from this amounting to three primary outcomes, so far so good.

    But in the paper, none of the three primary outcomes were reported.

    Ten secondary outcomes (cleverly numbered 1-9!) were listed on the trial registration site. Of these, four were reported correctly in the paper (one of them twice, with the same values!), one other, which was planned as a change score, was reported as an actual score, and five were not reported at all.

    In total eight knowledge scores are reported (table 3), ten “fulfilment of needs” scores (table 4), six scores measuring “risk perception, anxiety and perceived personal control” (table 5), and eight variations on “topics discussed and recalled” (table 6). Tests of statistical significance were performed on all 32 of these secondary outcomes, of which five were nominally significant at the 5% level.
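    For context, with 32 tests at the 5% level you would expect about 1.6 nominally significant results by chance alone. A quick back-of-the-envelope calculation (my own illustration, assuming the tests are independent, which these correlated outcomes certainly are not) puts numbers on that:

```python
from math import comb

n_tests, alpha = 32, 0.05

# Expected number of nominally significant results if every null
# hypothesis were true and the 32 tests were independent.
expected = n_tests * alpha  # 32 * 0.05 = 1.6

# Binomial tail: probability of seeing 5 or more "significant"
# results out of 32 purely by luck, under the same (unrealistic)
# independence assumption.
p_five_or_more = 1 - sum(
    comb(n_tests, k) * alpha**k * (1 - alpha) ** (n_tests - k)
    for k in range(5)
)

print(expected)
print(round(p_five_or_more, 3))
```

    So five hits out of 32 is not far from what chance alone would deliver, which is why unplanned secondary outcomes deserve so little weight.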

    Failing to report any of your three primary outcomes, reporting only 4/10 of your secondary outcomes, and then reporting 28 other non-pre-specified outcomes must be some sort of record.

    It’s good to take responsibility, but I suspect that when the authors realised the full extent of the nonsense they’d written, they were hoping to bury the paper quickly.

    I note that Breast Cancer Research is an open access journal, i.e. the author pays to get their work published. The old name for this was vanity publishing. It rarely resulted in high quality work.
