On Dec. 2, 2013, Alison Lakin, the research integrity officer at the University of Colorado Denver, received a concerning email.
The emailer alleged several problems in a 2012 paper in the Journal of Clinical Investigation, co-authored by one of the university’s high-profile faculty members. Lakin discussed the allegations with some administrators, and they agreed the allegations had merit; she sequestered an author’s laptop and other materials. Over the next few months, the university learned of additional allegations affecting other papers — and discovered even more serious problems in the JCI paper. Namely, the first author had altered 21 figures in the paper after submitting it, without alerting the other authors, the journal, or the reviewers.
Recently, a biostatistician sent an open letter to the editors of 10 major science journals, urging them to pay more attention to common statistical problems in papers. Specifically, Romain-Daniel Gosselin, founder and CEO of Biotelligences, which trains researchers in biostatistics, counted how many of 10 recent papers in each of the 10 journals exhibited two common problems: omitting the sample sizes used in experiments and omitting the tests used in the statistical analyses. (Short answer: too many.) Below, we have reproduced his letter.
The timing was tight, but Sergio Gonzalez had done it. Gonzalez, a postdoctoral researcher at the Institute for Neurosciences of Montpellier (INSERM) in France, had a paper accepted in a top journal by the end of 2015, just in time to apply for a small number of highly sought-after permanent research positions that open up in France each year.
If Gonzalez had missed the January deadline for this system of advancement, known as concours, he would have had to wait until the following cycle to apply.
Once his paper was accepted by the Journal of Clinical Investigation, Gonzalez could breathe a sigh of relief. He began receiving interview invitations. But then a comment appeared on PubPeer.
Researchers at Columbia University have retracted a 2013 paper in The Journal of Clinical Investigation after uncovering abnormalities in the stem cell lines that undermined the paper’s conclusions.
Last year, corresponding author Dieter Egli discovered he could not reproduce key data in the 2013 paper because almost all the cell lines first author Haiqing Hua used contained abnormalities, casting doubt on the overall findings. When Egli reached out to Hua for answers, Hua could not explain the abnormalities. As a result, Hua and Egli agreed the paper should be retracted.
Since some of the details of how the paper ended up relying on abnormal cells remain unclear, the university confirmed to us that it is investigating the matter.
We’ve found another retraction for Erin Potts-Kant, a former researcher at Duke, bringing her total to 15.
Yesterday we reported on two new retractions for Potts-Kant in PLoS ONE, which earned her a spot in the top 30 on our leaderboard. As with the others, the latest retracted paper, in the Journal of Clinical Investigation, is marred by “unreliable” data.
Retraction number nine, from The Journal of Clinical Investigation, is for duplicating data from another publication, one that has itself faced questions on PubPeer about image manipulation, as have many other papers by Fusco.