Science retracts paper after Nobel laureate’s lab can’t replicate results

Science is retracting a 2014 paper from the lab of a Nobel winner after replication attempts failed to conclusively support the original results.

In January, Bruce Beutler, an immunologist at University of Texas Southwestern Medical Center and winner of the 2011 Nobel Prize in Physiology or Medicine, emailed Science editor-in-chief Jeremy Berg to report that attempts to replicate the findings in “MAVS, cGAS, and endogenous retroviruses in T-independent B cell responses” had weakened his confidence in the original results. The paper had found that virus-like elements in the human genome play an important role in the immune system’s response to pathogens.

Although Beutler and several co-authors requested retraction right off the bat, the journal discovered that two co-authors disagreed, which Berg told us drew out the retraction process. In an attempt to resolve the situation, the journal waited for Beutler’s lab to perform another replication attempt. Those findings were inconclusive and the dissenting authors continued to push back against retraction.

Berg told us:

The question is more nuanced than with many retractions. It’s not a question about whether the result is “right” or “wrong” but about how robust it is.

If we had known that it was going to take as long as it did to reach a conclusion, we might have issued an expression of concern.

Finally, about a month ago, Science decided that the journal itself, rather than a subset of co-authors, would retract the paper.

Here’s the full notice, published today and attributed to Berg:

Bruce Beutler has informed Science that experiments performed in his laboratory have failed to reproduce clearly the foundational observations of the 2014 article, “MAVS, cGAS, and endogenous retroviruses in T-independent B cell responses.” In contrast to data presented (Figs. 1 and 3), he now finds that deficiency of MAVS and/or cGAS do not cause a robust decrease in type II T-independent B cell responses. At most, a decreased antibody response is observed in Stinggt/gt mice. Although some of the data shown in the paper may be correct, the core observations and conclusions are not. Beutler and a majority of coauthors have therefore requested retraction of the paper.

The editors nonetheless note that authors Ming Zeng and Xiaolei Shi stand by the findings of the paper. These authors do not agree to this retraction due to disagreement with the design of the reproduction experiments.

The editors have worked with the authors to determine the appropriate outcome and have decided retraction is appropriate in light of the lack of robustness of the main finding.

The paper has been cited 50 times, according to Clarivate Analytics’ Web of Science — including eight citations since the end of January, the month when Beutler first asked for a retraction.

Zeng, the paper’s first author, did not respond to our questions. We were unable to find contact information for Shi, who was listed as a co-second author.

In a statement, UT Southwestern told us:

Dr. Beutler is fully committed to the integrity and transparency needed for the proper conduct of scientific work. He has therefore informed Science that certain experiments performed in his laboratory have failed to reproduce. Specifically, deficiency of MAVS and/or cGAS does not cause a robust decrease in type II T-independent B cell responses. 

The school also told us:

Dr. Beutler and his collaborators identified the problem with reproducibility, made multiple attempts to reproduce the data, and based on their results concluded retraction was the appropriate step. No outside labs were involved.

UT Southwestern added that it couldn’t comment on whether Zeng and Shi were still at the school:

As a matter of policy, UT Southwestern does not comment on personal information.

Waiting for clarity

Berg said Science allowed the replication attempt because it was “trying to help [the authors] resolve their differences to find a situation where everyone was comfortable with the retraction.” He said he doesn’t think anyone at Science suggested the idea, but that he and the journal were “supportive” of it.

Berg told us:

We were aware at the time there were three possible outcomes. One, a clear confirmation of results as published. Two, fairly clear evidence they weren’t [confirmed]. And a lot of room in the middle where things were pointed in the right direction but not enough to be clear.

The results ended up right in that “middle ground,” Berg said, where Beutler and the rest of the co-authors weren’t comfortable letting the paper stand, but where Zeng and Shi didn’t believe the results of the replication attempt justified retracting the paper.

Berg said he considered issuing an expression of concern (EoC) at several points, but didn’t because he thought the situation would be resolved sooner. Both in the weeks after receiving Beutler’s retraction request and in June, when the replication study results came in, Berg said:

We thought that things were moving forward so we didn’t issue an EoC.

Even in June, we did not expect that the process would continue as long as it did. Once the results came in, we solicited an expert opinion and then worked with the authors to see how they would respond to the findings. We did not expect that the process to come to a conclusion and agree on final language for the editorial retraction would take as long as it did.

The paper has been cited five times since June, when Science first received the replication study results.

Beutler shared the Nobel in 2011 for helping discover the proteins that recognize pathogens and activate the body’s immune response.


17 thoughts on “Science retracts paper after Nobel laureate’s lab can’t replicate results”

  1. Unless I’m missing something, this shouldn’t be grounds for retraction. If there was no fraud, the methods were reported accurately, etc., then these are data in good standing. Obviously, the inability to replicate should also be published. Retraction isn’t a mechanism for scouring the literature of findings that don’t hold up. The mechanism for that is the lack of convergent evidence over time. Pieces of the publication may still be relevant, and the mystery of why a result doesn’t hold up is itself potentially important.

    1. When the same lab, with at least some of the original people on the paper, using largely the same reagents and methods, finds it can’t confirm very significant results it published just two or three years ago, then this is not “scouring the literature for findings that don’t hold up”; it is clearly a sign that something was done wrong and put into the paper that is most likely not accurate or correct. Finding out exactly who was at fault, and whether they intentionally fabricated data, is a whole different ball game that has nothing to do with the issue at hand.

      1. The new data suggest that the original finding may have been a false positive. In general, this could have happened for two reasons: chance or systematic error. You’re assuming the latter, others here are assuming the former. If it were chance, then retraction is not in order – rather the publication of the new findings (as others have also argued). If it were systematic error, then that is grounds for retraction, but then I think the possibility of a false positive by chance should be ruled out with a high degree of certainty, and ideally, the error that was made should be tracked down and reported. Otherwise you’re just erasing evidence, which moves our knowledge backwards, not forwards.

        And then there is a third option: that there was no false positive initially, and no error, but that the observed effects are simply very sensitive to the exact experimental design. You may argue that such a lack of robustness would mean the findings aren’t as noteworthy as they were initially presented to be, but the data themselves would hold up, and here too, a better solution would be to publish the results of the failed replication, so that we have all the facts out in the open.

  2. Beutler is doing the right thing, and his action improves the standing of any research with his name attached to it. Retraction is not just for fraud. You can retract a paper if you no longer believe the conclusions. Here the (majority of the) team realise that there was something special about the original experimental run that was making it turn out one way, but now they can’t find what that thing is – it’s not one of the features they thought were important. Some contaminant, some experimental condition that nobody thought was important and so didn’t document, something funny about one batch of chemicals – it could be anything. They’ve checked all the things they can think of and they are still puzzled, and they are convinced that the published result set is the outlier and now they don’t want other people to go down that dead end.

    When this happens only the best scientists correct the scientific record in this way.

    Once I get a Nobel prize, I will be confident enough to correct my errors. For now, I will just pretend they don’t exist, argue that they are misprints, claim that they “don’t change the message of the paper”, fire a subordinate, hire a lawyer, or sue my university. These are all much easier to do.

  3. I totally agree with Steven St. John that the above evidence shouldn’t be grounds for retraction. I am more curious about 1. whether the so-called replication experiments technically and strictly followed the “Methods” as reported; and 2. why not involve a third lab or party to do the experiments, which would be more objective?
    Since the first and second authors insisted on the original conclusions even after the replication experiments, there must be reasons they believe the result can be reproduced. In a biological experiment, a single factor may confound the final result. It would be helpful to publish a comparison of details between the original experiments and the replication experiments, such as the methods and the figures, to help understand what factors may influence the robustness of the conclusions.

  4. I agree with Steven St. John. If replication were to be added to the grounds for retraction, entire disciplines would face extinction, including economics, sociology, political science, some branches of psychology, and much of biomedical research–particularly randomized controlled trials (RCTs). See: https://www.nature.com/articles/s41562-016-0021.

    This is not to say that the removal of so much of the crap that regularly gets published wouldn’t improve the enterprise of science. But think of all the jobs in academia, government, and journal publishing that would be lost! The cost of waste contributes to our gross national product.

    1. But there are entire disciplines that should be lost. Economics would be a good place to start because most research is unsupported by empirical observations.

      But there is a bigger problem – only ‘positive’ findings are published, and the ‘positive’ findings may very well be due to luck. The standard 95% confidence level means that 1 experiment in 20 will yield a positive result simply because of the process of creating random samples.

  5. Not sure whether the reproduction experiment involved the authors who disagreed with the retraction, but it is obvious that Beutler and the two other authors did not reach an agreement on the design of the reproduction experiment. It sounds weird to me, because a reproduction experiment is supposed to use the same methods as the original published paper in order to determine whether the results can be reproduced under the same conditions.

    Since the result of the replication experiment is in the “middle ground” and not conclusive, as Science Editor-in-Chief Dr. Berg said, why would the journal or Beutler still proceed with the retraction without collecting more convincing data or working with independent labs to reach a firm conclusion, similar to the standard used in the publication process? This is critical for the whole field, and it just does not sound like a responsible way of resolving the issue.

  6. Just wondering if the journal would have had the same policy of waiting and asking the authors to replicate if it were not from a Nobel prize-winning lab.

  7. I totally agree with Darrel Francis. Retraction should apply to all publications whose conclusions can be proven fatally wrong. How can one still leave his papers in the journal when he could actually prove himself wrong? I could hardly bear that even for a second. If that means the extinction of most publications, let it be! We would deserve it.

  8. I would prefer (maybe it’s just me) that there be a clear distinction between removing a paper for ethical concerns and removing a paper for honest error.

    I suggest that retraction be used for the former and withdrawal be used for the latter.

    1. Unfortunately, withdrawal is already used as a general term for retracting papers that have not been formally published.

  9. As a matter of policy, UT Southwestern does not comment on personal information.

    Or spell “personnel” correctly.

  10. I was not aware that “withdrawal is already used as a general term” in that way; I wonder how many people are? In any case, I would counsel against rfg’s suggestion that “retraction” and “withdrawal” be used, unglossed and therefore (for many readers) unclearly, to express what is certainly the “clear distinction between removing a paper for ethical concerns and removing a paper for honest error.” Rather, I think that “clear distinction” would best be expressed by a pair of self-glossing terms, for instance (copying the military usage for “discharge”) by the pair “dishonorable retraction”/“honorable retraction”, or the similar pair “This paper has been dishonorably retracted”/“This paper has been honorably retracted”.

    Failing that, in case rfg’s suggestion were adopted verbatim, I would very strongly counsel any journal or organization that adopts it to also clearly and explicitly gloss “retraction” and “withdrawal” with their newly specific meanings, in many places scattered throughout their editorial or organizational boilerplate. (Of course people don’t read boilerplate. That’s why I prefer self-glossing jargon.)
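Several commenters above invoke the 1-in-20 arithmetic behind the conventional 95% significance threshold: test enough true null hypotheses and roughly 5% will come out “positive” by chance alone. As a minimal illustration (a generic Monte Carlo sketch, not a model of the retracted paper’s experiments; all function names here are made up for the example), repeatedly running a z-test on pure noise rejects the null at close to that rate:

```python
import math
import random

def run_experiment(rng, n=30):
    """Draw n observations from a mean-zero normal distribution (the null
    is true by construction) and run a two-sided z-test of H0: mean == 0
    at alpha = 0.05, assuming known sigma = 1."""
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)  # sample mean / (sigma / sqrt(n))
    return abs(z) > 1.96  # True => a "positive" finding despite no real effect

def false_positive_rate(trials=20_000, seed=1):
    """Fraction of null experiments that nonetheless reject H0."""
    rng = random.Random(seed)
    positives = sum(run_experiment(rng) for _ in range(trials))
    return positives / trials

if __name__ == "__main__":
    # Expect a value close to 0.05 -- about 1 experiment in 20.
    print(f"false-positive rate under the null: {false_positive_rate():.3f}")
```

With only the statistically significant runs reaching print, such chance positives are one candidate explanation (alongside systematic error or fragile effects) for results that later fail to replicate.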
