Retraction Watch

Tracking retractions as a window into the scientific process

“I kind of like that about science:” Harvard diabetes breakthrough muddied by two new papers

with 23 comments


Doug Melton

Harvard stem cell researcher Doug Melton got a lot of press last year for research on a hormone he named betatrophin, after its supposed ability to increase production of beta cells, which produce insulin.

Now, the conclusions from that paper, which has been cited 59 times, according to Thomson Scientific’s Web of Knowledge, have been called into question by research from an independent group, as well as follow-up work from the original team.

The interest was driven by the hormone’s potential as a new treatment for diabetes. In 2013, Melton told the Harvard Gazette that betatrophin could be in clinical trials within three to five years. Here’s Kerry Grens in The Scientist:

A study published in Cell last year offered evidence that a hormone called betatrophin, or Angiopoietin-like protein 8 (ANGPTL8), could ramp up pancreatic β cell proliferation in a mouse model of insulin resistance. The results made quite a splash; the study’s authors—led by Doug Melton at Harvard University—even wrote that betatrophin treatment “could augment or replace insulin injections by increasing the number of endogenous insulin-producing cells in diabetics.”

However, a new paper in Cell from researchers at Regeneron Pharmaceuticals and elsewhere has found that betatrophin has no effect on beta cell growth. From a blog post by Paul Knoepfler, a UC Davis stem cell biologist:

…the authors report that Betatrophin, which now should probably go by the more objective name ANGPTL8, does not substantially impact beta cell growth, but rather seems to have a notable role in mouse triglyceride metabolism. ANGPTL8 is probably a very interesting molecule, but it is not what it seemed to be.

It’s now unclear what the fate of the 2013 Betatrophin paper will be moving forward given that its central argument is incorrect and even the naming of the molecule “Betatrophin” is indeed perhaps not appropriate any more.

Melton and his team wrote a Perspectives article, also in Cell, including some of their own research:

Our own follow-up experiments using an independent knockout of ANGPTL8 produced the same result. Taken together, these new data contradict our previous conclusion (Yi et al., 2013) that betatrophin is the sole agent responsible for beta cell proliferation following S961 treatment. Furthermore, these new results cast doubt on the finding that beta cell proliferation can be induced by overexpression of ANGPTL8/betatrophin.

We spoke with Melton for more details:

We reported in the first Cell paper that an insulin antagonist called S961 results in robust beta cell replication. And then we attributed that activity to a member of the angiopoietin-like protein family which we named betatrophin. We showed in that paper that when we injected betatrophin via tail vein injection, it increased beta cell replication and beta cell mass by a significant amount.

This implied that betatrophin would be responsible for the S961 activity. Then the recent paper by Gusarova did an experiment, which we agree with and confirmed, that a knock-out of betatrophin does not prevent the amplification or replication of beta cells by the insulin antagonist.

There are two possibilities: one is that betatrophin doesn’t have that activity. The other is that when you knockout one member of the family, there’s compensation by the others.

If you look at [the new research] you see considerable variation in both beta cell replication and beta cell mass…we said, that’s an unsatisfactory variation. If I had to do those experiments again, I would not use tail vein injection as a method to provide the protein. So now you have a highly variable result, and you ask the question, should you believe those cases where there’s significant replication, or would you say the whole thing is an artifact of how you presented the protein?

That’s how science progresses sometimes. You make a claim, based on evidence. You then reinterpret it, you do more experiments, and you’re wiser for it in the end.

When they got word of the Regeneron paper, Melton’s team was already in the process of writing an article with the new data that they later used in their Perspective:

We were just in the process of submitting a paper showing that the betatrophin knockout was inconsistent with our original proposal…when I was then asked to review the Gusarova paper, I said, it’s a really good paper, they’ve beat us to the punch, and I think it should be published. It shows things aren’t as simple and straightforward as we thought they were in 2013. I may be old fashioned, but I think that’s the point of making things public.

The Perspectives article has gotten some very critical comments on PubPeer in the last few days. One of them was regarding this figure from the Perspectives paper:

 

[Figure from the Perspectives article]

I stand by the commentary we wrote for Cell. I have read these comments on PubPeer. I hadn’t read them before, but I must say I find them unprofessional, bordering on ad hominem. They don’t really seem to be aimed at trying to solve the puzzle, so I’m not encouraged to engage with those anonymous people and answer their questions.

The first comment had one interesting point, but unfortunately that poor person did not read the commentary carefully and has completely misunderstood the statistical argument.

It’s a complicated and evolving science. There are new papers coming out that I think are going to be very interesting to the community, so I’ve decided it’s not really helpful to talk to people who write comments like that, so I won’t be providing any rejoinder.

In an earlier conversation, Melton didn’t have an explanation for why they presented the data this way visually, but he did tell us why he chose not to retract the paper:

For me, retraction is when you’ve done something wrong, or the data has been fraudulently produced, or something like that. That’s not the case here. This is more an example of how science progresses. If you were going to say, everything that’s ever been published which has been reinterpreted should be retracted, it would be well near half of all publications, wouldn’t it.

Here’s one more PubPeer comment on the case:

It was nice of Cell to allow the authors to “retract” their paper by adding another Cell item to their CVs under a neutral heading like “Perspectives”.

We’ve reached out to a number of betatrophin researchers. Many have expressed concerns about the original research, but none will go on the record about it. Several have cited Melton’s influential position as the reason they were declining to comment. He’s certainly a major figure in the field – his research group also got a lot of press for a Cell article earlier this month that might herald a “cure” for diabetes, involving production of beta cells using human embryonic stem cells. We asked Melton about this problem:

People should say whatever they want in analyzing the data. It’s not a matter of commenting on a person, it’s really, what does the data tell us?…Evidence is hardly ever so incontrovertible that you know something with absolute certainty. So you do more and more experiments to build a case for the conclusion you think best supports all the evidence. I don’t think there is a strong conclusion about betatrophin…but I kind of like that about science, because it makes lots of experiments you can design to go about and test it.

So what are the next steps for betatrophin research? Melton again:

If you asked me what I am most worried about, it’s actually not the mouse work. It’s a paper we didn’t even comment on here, which is the paper by Jiao saying that when they transplant human beta cells into this mouse model, there’s no beta cell replication, even in the presence of this insulin antagonist. And that’s what gives me pause, because if it turns out you can replicate mouse beta cells, but not human ones, then I would say it’s sort of interesting to figure out what the mouse is doing, but it’s way down on the priorities for me to try and get to the bottom of.

Written by Cat Ferguson

November 10th, 2014 at 9:30 am

Comments
  • erico November 10, 2014 at 10:48 am

    This reminds me of an old case of a Cell paper from a Brazilian group. They reported that genes from the Chagas disease agent are incorporated into the host genome. After a year of replication attempts it was clear that the results were not reproducible, and sample contamination was the best explanation.

    The authors refused to retract the paper, saying no misconduct was committed. Cell then forcibly withdrew the paper with an editorial retraction, claiming that this was warranted due to failed attempts to replicate the work.

    The situation now is almost identical, although instead of a retraction, Cell prefers to offer the authors more space in the journal, in a somewhat bizarre Perspectives piece. Ms. Emilie Marcus needs to explain why Cell behaved so differently in two very similar situations.

    • Takver November 12, 2014 at 4:33 am

      Contamination is one thing: it means something was clearly wrong and the paper should be retracted. The two examples are not the same AT ALL.

  • genetics November 10, 2014 at 11:05 am

    I did not go into the details of the original paper. But it looks very much like it was published in quite a hurry. And if you have n=7 and you see an impressive effect in n=3 and no effect in n=4, I would agree with the PubPeer commenter that it is poor science if you simply give a mean +/- SEM. Especially since the methodology looks prone to error.

    For me, it really looks like they showed irreproducibility even within their original n=7. If you have a situation like that, you have to boost the numbers so that in the end you can report with some confidence that a subset of the treated population shows a certain effect. The best explanation for the whole situation is simply that they were looking at three artifacts.

    I totally agree with Melton that errors and misinterpretations are part of the normal scientific process and that it does not make sense to retract everything that over the years turns out not to be fully correct.
    However, if you show within a year of the original paper that, for whatever reason, your results are not reproducible and the entire conclusion is completely invalid, I think retraction would be the correct thing to do. Especially if your results were so prone to artifacts due to low numbers.

  • Dario November 10, 2014 at 11:35 am

    I’m a bit confused about Melton’s statements. Does ANGPTL8 have an effect on beta-cell replication or not?

    Gusarova et al paper says no and Melton agrees with them.

    Then, in the same Perspectives piece, Melton and co-authors say “the conclusion from Yi et al. must be corrected and modified with respect to the magnitude of the effect [..] some mice respond strongly to ANGPTL8/betatrophin expression but many do not. When all mice are taken into account the results show a modest average increase in beta cell replication.” So the answer to the above question is: yes, a little bit.

    Whatever the answer, there’s indeed a jackpot effect in their first paper! I wish I had one during my PhD, but it turned out “science is complicated.”

  • JATdS November 10, 2014 at 3:39 pm

    Not a specialist, but I’m curious: I wonder whether Regeneron Pharmaceuticals (whose researchers published the new Cell paper) has a commercial product that increases the production of beta cells, or is working on one. I am trying to assess the level of actual or potential conflicts of interest that might exist, given the potentially massive economic spin-off from such a product, which Melton claimed would likely exist in about 2016-2018.

    As for reproducibility of the original Melton paper, is it not possible that in their first set of experiments the results were reproducible? One would have to look at the number of replicates, independent trials, etc., I guess. And is the rest of the data set totally invalid, or useless? What I am saying is, is there no middle ground between an erratum, a discussion, and a retraction? I have seen one or two papers retracted based on, for example, a tiny error, where 99% of the study seems to have been trashed wholesale. Rather than a retraction, may I suggest, for those cases where one cannot say that things are as they are with 100% certainty, that a new category of manuscript emerge in science publishing, something like “Ongoing assessment” or similar, in which comments, critiques, defenses, and rebuttals could be provided, in open access format, with a single DOI assigned, but updated once a year, for example, to accommodate all updates and to not overwork the editorial board? If publishers would more readily embrace the concept of post-publication peer review as part of the new publishing model, they would save themselves a lot of future headaches.

    • AUC November 15, 2014 at 3:54 pm

      I haven’t got access to Cell right now, but based on Melton’s comments regarding tail vein injections, I suspect that his lab did not confirm that they had adequate ANGPTL8 exposure in his experiments.

      In my experience, the ridiculous concept of a “no regrets dose” coupled with inadequate characterization of drug exposure after dosing is one of the main reasons for lack of reproducibility of academic studies.

      It’s fairly standard practice in Pharma to make sure negative results aren’t simply due to missing a vein, crappy pharmacokinetics, or anti-drug antibodies. I’m pretty sure that the Regeneron folks would have done those experiments before going to the trouble of knocking out the gene.

  • Scotus November 10, 2014 at 4:09 pm

    From the Harvard Gazette article:

    “Working with Harvard’s Office of Technology Development, Melton and Yi already have a collaborative agreement with Evotec, a German biotech firm that now has 15 scientists working on betatrophin, and the compound has been licensed to Janssen Pharmaceuticals, a Johnson & Johnson company that now, too, has scientists working to move betatrophin toward the clinic.”

    I wonder how these companies feel about entering into these collaborative and licensing agreements, particularly after finding out what the original primary data look like?

  • imohacsi November 10, 2014 at 4:22 pm

    Well, this can be kind of expected. You think that you notice an effect in the first few specimens. You rush to publish it just to be the first. And finally your own following specimens disprove you…

  • tekija November 10, 2014 at 4:23 pm

    I would have spotted this as a reviewer. Not because I have any expertise on this topic, but because I regularly ask authors to replace bar graphs and box-and-whisker plots with scattergrams that show the underlying data.

    • Takver November 12, 2014 at 4:35 am

      Yep– and we “caught” this in journal club at the time!

    • cathy s. November 12, 2014 at 7:34 pm

      One wonders who actually reviewed this paper?

      Someone else referred to the Caltech commencement address by Feynman; any of us doing science would do well to re-read it every few weeks!

      http://calteches.library.caltech.edu/51/2/CargoCult.pdf

  • Andrew Paterson November 10, 2014 at 6:27 pm

    It’s a bit difficult to take Melton’s comment ‘If you look at [the new research] you see considerable variation in both beta cell replication and beta cell mass…we said, that’s an unsatisfactory variation.’ at face value.

    One just needs to look at Supplementary Fig 3F in the original Cell paper (on page 7 of the supplementary PDF), which shows ‘The average beta cell replication rate per islet for each individual mouse injected with either GFP or betatrophin’, to see clear bimodality in the betatrophin-treated group:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3756510/bin/NIHMS472535-supplement-01.pdf

    So the bimodal distribution was presented in the original paper and replotted in their recent Fig 1B.

    The statistics methods of the paper imply that all analyses were two-tailed t-tests, clearly inappropriate for small sample sizes that violate normality assumptions.

    • Conrad Seitz MD November 10, 2014 at 7:32 pm

      I agree; with only seven subjects, shouldn’t they have used non-parametric statistics? I think for seven data points you don’t have significance unless all seven are pointing in the same direction…
      And showing the results as a bar chart instead of a scattergram is totally inappropriate.
      In my humble opinion.
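The statistical concern raised in this thread can be illustrated with a minimal sketch. The numbers below are hypothetical fold-change values, not the paper’s actual data: when three of seven treated animals respond strongly and four do not, a mean ± SEM summary suggests a clean average effect, while the individual values plainly fall into two clusters.

```python
# Hypothetical fold-change values chosen to mimic the scenario the
# commenters describe: 3 of 7 treated mice "respond", 4 do not.
import statistics

control = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
treated = [4.8, 5.2, 5.0, 1.0, 0.9, 1.1, 1.0]

def mean_sem(xs):
    """Mean and standard error of the mean: the usual bar-graph summary."""
    m = statistics.mean(xs)
    sem = statistics.stdev(xs) / len(xs) ** 0.5
    return m, sem

m, sem = mean_sem(treated)
# The summary suggests a clean average effect of roughly 2.7-fold...
print(f"treated: {m:.2f} +/- {sem:.2f} (mean +/- SEM)")
# ...but listing the individual points reveals two separate clusters,
# which a scattergram would show and a bar graph hides.
print("individual treated values:", sorted(treated))
```

This is the same argument as asking for scattergrams instead of bar charts: the summary statistic is not wrong, but it erases the bimodality that matters for interpretation.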

  • John Smith November 10, 2014 at 8:06 pm

    Also note that the first author on the paper, Peng Yi, was Melton’s postdoc, and he recently got a job as an assistant professor at the prestigious Joslin Diabetes Center on the basis of this betatrophin work. As noted on the institute’s website, a major thrust of his research will be around this so-called “beta-trophin” (see below). Not sure there’s much worth characterizing!

    “The major focus in Dr. Yi’s laboratory right now is to characterize a newly discovered liver/fat secreted protein, Betatrophin (also known as RIFL, Lipasin and Angptl8), that can specifically promote pancreatic beta cell proliferation in rodents. Using proteomics, biochemical and genetic approaches, Dr. Yi’s laboratory is investigating the regulation mechanism of Betatrophin, the active form of Betatrophin in circulation and searching for receptors of Betatrophin.”

  • j. doe November 10, 2014 at 11:15 pm

    JATdS
    Rather than a retraction, may I suggest, for those cases where one cannot say that things are as they are with 100% certainty, that a new category of manuscript emerge in science publishing, something like “Ongoing assessment” or similar, in which comments, critiques, defenses and rebuttals could be provided, in open access format, with a single DOI assigned, but updated once a year, for example, to accommodate all updates, and to not overwork the editorial board? If publishers would start to more readily embrace the concept of post-publication peer review as part of the new publishing model, they would save themselves a lot of future headaches.

    It seems your view is rather limited. The above would leave many fields with only these “ongoing assessments”. Maybe it is already like that in many fields, and I am not even talking about the humanities. Maybe the cell biology field should just align itself with other fields: no 100% assurances (who reaches that, in the probabilistic sense?), more short papers and reviews, more commentaries, more discussion, etc. You know, scientific debates are possible in the traditional forums too.

    I also agree with Melton about reserving retractions to wrongdoings.

    • genetics November 11, 2014 at 5:26 am

      j.doe, if you say retractions should be reserved for “wrongdoings”, what is your definition of “wrongdoings”?
      I would only agree if irreproducible results are also “wrongdoings”. The reason for the irreproducibility is not relevant. You make a claim “a causes b” that is backed up by experimental results. If you and others repeat these experiments and cannot reproduce the results, your claim is invalid and the work should be retracted. There really is not much difference whether the irreproducibility is caused by fraud (the experiments never actually happened, the results were made up), honest error (the wrong reagents were used), or whether it is unexplainable.

      In this case here, even the name of the protein that the group coined is misleading.

  • JATdS November 11, 2014 at 12:50 am

    j.doe, the above was simply an alternative and supplementary suggestion, not in any way the solution. Traditional peer review does not need to be erased or substituted, despite its faults, nor should it ever be. But it needs to be strengthened. For example, a minimum of 3-5 peers for every paper, all carefully vetted and checked on databases and Google before being contacted. Peers should be remunerated on a percentage basis of the publisher’s annual profits associated with that paper, if published, to be fair to the professionals who ensure the quality of papers, and to keep the peer pool motivated. A demotivated peer pool and an uneven system across journals of vetting and recruiting peers have sown deep distrust not only among scientists, but also in the peer review system, the submission system, the journals and the publishers.

    In addition to fortifying traditional peer review, there should be pre-submission peer review, the responsibility of the research institute, and post-publication peer review, including my suggested additional “Ongoing assessment” category, which would be a timeless parameter. It would burden the system a little more, no doubt, but such checks and balances would slow down the overall paper-pumping industry a notch, and perhaps make sure that errors get minimized. Scientific debates in other forums such as blogs, or even PubPeer or PubMed Commons, or other online boards, are fine, but too many discussions start to dilute the message, and the debate, because scientists have less time to trawl through all the sites to find a concentrated debate. So, my idea of an “Ongoing assessment” category simply allows ideas, good or bad, valid or not, critical or supportive, to be concentrated right next to the main article’s PDF file, as a supplementary, free, open access PDF file. I don’t see what is so limited about an idea that would fortify journals, publishers and the credibility of science.

  • Samson November 11, 2014 at 2:00 am

    In the end, two Cell papers, one assistant professorship, wide publicity, and one big blunder! Beware of karma

  • Neuroskeptic November 11, 2014 at 8:08 am

    “I have read these comments on PubPeer. I haven’t read them before, but I must say I find them unprofessional, bordering on ad hominem. They don’t really seemed to be aimed at trying to solve the puzzle, so I’m not encouraged by trying to engage with those annonymous people and answer their questions.”

    When scientists resort to this kind of evasion it generally means PubPeer is on the right track and a retraction will follow before too long.

  • cathy s. November 12, 2014 at 7:28 pm

    I am curious why Melton chose hydrodynamic tail vein injection, which is notoriously hard to carry out in a reproducible manner, over viral delivery. Seems rather odd.

  • Larry Raff November 26, 2014 at 2:07 pm

    This is careless work and does not have the integrity of thoroughness. I think Melton’s publicist is to be commended for satisfying Melton’s constant need for vacuous headlines. He has “encapsulated” an old discovery, claimed new properties and saw the capsule dissolve in the light of day.

  • isabel Gibbs February 16, 2015 at 7:41 pm

    Since this level of scientific knowledge is basically impossible for most of us to understand, let alone formulate our own opinions about, personally I would rather trust a Harvard researcher than a for-profit pharmaceutical company. There is a revolting conflict of interest when you see cures to life-threatening diseases in the hands of those who stand to make billions by their mere existence. I refuse to believe that the Knoepfler lab and Regeneron Pharmaceuticals experts saw in a couple of months what the very qualified Dr. Melton and his Harvard team failed to see after decades of research motivated by Melton’s desire to find a cure for a disease that threatens his own children. Yet let’s not forget that this very accomplished scientist took this long in his research to make sure he was on the right track.
    It is not unusual for corporations who stand to lose billions in profits to take drastic measures to silence those who threaten them. This is what international politics is made of. Dr. Melton is quick to accept this criticism yet reluctant to retract all proof of his findings altogether. I wonder why?
