Retraction Watch

Tracking retractions as a window into the scientific process

Co-author of retraction record-holder likely fabricated his own data, analysis shows


In 2012, John Carlisle, a British anesthesiologist, demonstrated conclusively using statistics that Yoshitaka Fujii had faked data in many studies. Fujii — as followers of this blog well know — now holds the record for most retractions by an individual author (183).

Carlisle’s work accomplished two things: It put to rest any doubt that problems with Fujii’s work might have resulted from innocent mistakes, and it gave journals a mathematical tool for conducting investigations into potential cases of misconduct.

Now comes the payoff. In a new paper, Carlisle and another anesthesiologist, John Loadsman, take aim at one of Fujii’s frequent co-authors, Yuhji Saitoh of Yachiyo Medical Center and Tokyo Women’s Medical University in Japan. The pair analyzed data from 31 studies Saitoh published between 1993 and 2012 — including one study that was rejected in 2015 — for a total of 32 papers. Of those, 23 did not include Fujii as an author.

Writing in the journal Anaesthesia, where Carlisle published his first study about Fujii, he and Loadsman state that:

Combining the continuous and categorical probabilities of the 32 included trials, we found a very low likelihood of random sampling: p = 1.27 × 10⁻⁸ (1 in 100,000,000). The high probability of non-random sampling and the repetition of lines in multiple graphs suggest that further scrutiny of Saitoh’s work is warranted.
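Carlisle’s published method is more involved than a single formula (it models how baseline means and standard deviations should be distributed under genuine randomisation), but the final step described in the quote above — pooling many per-trial probabilities into one overall p value — can be sketched with a standard tool for that job. The sketch below uses Fisher’s method; the function name and the choice of Fisher’s method are our own illustrative assumptions, not Carlisle’s actual procedure or code.

```python
import math

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method (an
    illustrative stand-in for Carlisle's pooling step, not his
    actual procedure).

    Under the null hypothesis that every trial's baseline data
    arose from genuine random sampling, the statistic
    X = -2 * sum(ln p_i) follows a chi-squared distribution
    with 2k degrees of freedom, where k = number of trials.
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    # For an even number of degrees of freedom (2k), the
    # chi-squared survival function has a closed form:
    #   P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term = 1.0
    total = 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# A handful of individually unremarkable trials can still yield
# a very small combined p value once pooled:
combined = fisher_combine([0.05, 0.04, 0.03, 0.05, 0.02])
```

A combined p value on the order of 10⁻⁸, as reported for the 32 trials, means the pooled baseline data would essentially never arise from honest random sampling — which is why the authors call for investigation rather than publishing the number as proof of fraud on its own.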

In the Fujii case, a consortium of journal editors was instrumental in pushing their colleagues in Japan to take action against the fraudster. Andrew Klein, who edits Anaesthesia, told us that probably won’t be necessary this time:

The Japanese Society also agreed that there was sufficient evidence to conduct a full investigation, and they are in possession of all John’s analysis of the data (which is pretty damning). We discussed a joint EiC statement, but this time (in contrast to the Fujii case), the investigation is already under way, therefore I am not sure it will make any difference at all. However, all the EiCs have discussed this case together and are working together as a consortium on this case, ready to issue retractions as and when the investigation is concluded.

As Carlisle and Loadsman note, the latest analysis points to another unfortunate but realistic fact:

The possibility of a more widespread problem within a research network suggests that such institutional investigations should not be restricted to single authors. …

The findings of this analysis support further institutional investigations into research published by Dr Yuhji Saitoh. Until such a time that these results can be explained, as was also recommended in the case of Fujii [3], we think it is important that Dr Saitoh’s data are excluded from meta-analyses or other reviews of the relevant subjects.

Although the p values in his earlier analysis cast stronger doubt on Fujii’s work, Carlisle noted:

There isn’t yet evidence to know how extreme a p value has to be for one to correctly conclude that the paper is wrong. That is a whole topic. … In this case I personally conclude there is enough evidence to conduct an investigation into all of Saitoh’s work. The failure of the Fujii investigation to question Saitoh and his work should be reviewed: one lesson is, I think, that future investigations should assess coauthors more thoroughly.

Saitoh already has 32 retractions from his collaborations with Fujii. In August, we covered a retraction of a 2012 article in the Journal of Anesthesia co-authored by a Yuhji Saitoh — but not Yoshitaka Fujii. According to the notice, the paper was retracted because the research was conducted “without appropriate patient consent.”

Meanwhile, Anaesthesia says it is taking the commendable step of adopting the “Carlisle method” more widely. How much more? Klein writes in an editorial that:

We have decided to screen all randomised controlled trials submitted to the journal using the Carlisle Method. Any that fall foul due to suspicious data that are not consistent with random sampling will be rejected and the authors informed of the reason for rejection.

Klein writes more that’s worth reading (we encourage you to take a look), but we thought this passage was particularly important:

We hope that by screening all submissions from now, we will not in the future publish data from a randomised trial that is not consistent with random sampling. We also hope that other journals will follow suit and also screen submissions. Only if all journals screen randomised controlled trials before acceptance and publication is there a chance that we can stop this in its tracks. Hence, we call for other editors, statisticians, authors and readers to apply the Carlisle Method for themselves and help validate it.

Indeed, we recently reported on a researcher’s use of a similar method to analyze 33 randomized clinical trials by a bone researcher in Japan, which likewise found patterns suggesting systematic problems with the results.

Incidentally, this morning we reported that four of Fujii’s papers flagged by Carlisle’s analysis in 2012 are now — finally — being retracted.


Comments
  • Helene Z Hill, PhD December 20, 2016 at 4:30 pm

    John Carlisle wrote a commentary in my book “Hidden Data: The Blind Eye of Science” (available on Amazon in print and downloadable forms). He compared numerical data of 15 individuals (A-D, F-O and Q) and concluded that “the discrepancy of Q’s data dwarfed the doubt pooled for other data sources. It is so extreme that one would conclude that the data are invalid”. Q generated most of the data in 8 journal articles in 4 different journals. Attempts to get editors of 3 of these journals to retract have met with failure. My conclusion is “who cares?” This includes the corresponding author of all 8 papers.

  • Hiding the problem December 20, 2016 at 4:52 pm

    Screening of manuscripts in private is not a great solution. The proposed sanction, rejecting with an explanation, will just allow the authors to simulate random sampling a bit more accurately. There should be publicity, and an institutional investigation should be requested.
