To catch a cheat: Paper improves on stats method that nailed prolific retractor Fujii

The author of a 2012 paper in Anaesthesia that offered the statistical equivalent of coffin nails in the case against record-breaking fraudster Yoshitaka Fujii (currently at the top of our leaderboard) has written a new article in which he claims to have improved upon his approach.

As we’ve written previously, John Carlisle, an anesthesiologist in the United Kingdom, analyzed nearly 170 papers by Fujii and found aspects of the reported data to be astronomically improbable. It turns out, however, that he made a mistake that, while not fatal to his initial conclusions, required fixing in a follow-up paper, titled “Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials,” also published in Anaesthesia.

According to the abstract:

The Monte Carlo analysis nevertheless confirmed the original conclusion that the distribution of the data presented by Fujii et al. was extremely unlikely to have arisen from observed data. The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.

Carlisle told us by email (and in statistics-speak):

I used too many degrees of freedom in the chi-squared tests. So the correction involved using fewer degrees of freedom. Chi-squared values are turned into p values in the context of the degrees of freedom – the same chi-squared value will generate different p values with different degrees of freedom. The more the degrees of freedom the smaller the p value for any given chi-squared value. The correct degrees of freedom were fewer than those I used, so the corrected p values were bigger.

I was aware that the chi-squared test wasn’t working quite right in 2012, which is why I applied an arbitrary correction factor. Steve Shafer [editor of Anesthesia & Analgesia, and a board member of the Center for Scientific Integrity] was interested and simulated the behaviour of the chi-squared method and identified the main problem during email correspondence (shared with Franklin Dexter). This happened in 2012 right after the original article was published. We’ve spent 3 years getting to this point that we could publish the correction.
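To see concretely what Carlisle means about the conversion, here is a minimal sketch (ours, not from either paper) of how the same chi-squared statistic maps to different p values under different degrees of freedom; the statistic and degrees of freedom below are arbitrary illustrative numbers:

```python
from scipy.stats import chi2

# The same chi-squared statistic converts to different p values
# depending on the degrees of freedom assumed.
statistic = 30.0  # arbitrary illustrative value
for df in (5, 10, 20):
    p = chi2.sf(statistic, df)  # survival function: P(X >= statistic | df)
    print(f"chi-squared = {statistic}, df = {df}: p = {p:.3g}")
```

Get the degrees of freedom wrong and every p value derived from the same statistics shifts, which is why the correction mattered.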

Carlisle said the conclusion of his first paper was “unaltered.”

Fujii et al.’s work still generated very small p values (see table 2 towards the end of the paper).

The corrected chi-squared method turned out not to be the best method when we simulated data. The best method was to use Monte Carlo simulations, which is the main conclusion of the just-published paper.

We’re sieving through other RCTs to identify those that exhibit unlikely distributions of baseline data. We have a number of leads that we’re pursuing, but it will take time to investigate properly, following COPE’s guidance.
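The paper’s Monte Carlo method is more involved than we can reproduce here, but the core idea – simulate what baseline data should look like under genuine random sampling, then ask how extreme the reported summaries are – can be sketched in a few lines. The following toy version is our own illustration with made-up numbers, not Carlisle’s published procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_baseline_p(mean_a, sd_a, n_a, mean_b, sd_b, n_b, n_sim=100_000):
    """Toy Monte Carlo check: how often would two arms randomised from
    one population show a difference in baseline means at least as
    extreme as the one reported?"""
    # Pool the reported summaries into one assumed source population.
    pooled_mean = (n_a * mean_a + n_b * mean_b) / (n_a + n_b)
    pooled_sd = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                        / (n_a + n_b - 2))
    # Simulate group means for both arms under genuine random sampling.
    sim_a = rng.normal(pooled_mean, pooled_sd, size=(n_sim, n_a)).mean(axis=1)
    sim_b = rng.normal(pooled_mean, pooled_sd, size=(n_sim, n_b)).mean(axis=1)
    observed = abs(mean_a - mean_b)
    # Two-sided Monte Carlo p value with the standard +1 correction.
    return (np.sum(np.abs(sim_a - sim_b) >= observed) + 1) / (n_sim + 1)

# Hypothetical trial reporting near-identical baseline weights (kg).
print(mc_baseline_p(mean_a=61.0, sd_a=8.0, n_a=30, mean_b=61.2, sd_b=8.1, n_b=30))
```

Across many trials from a single author, p values like this should be spread roughly uniformly between 0 and 1; a pile-up near either extreme – baseline groups too different from each other, or suspiciously too similar – is the kind of non-random pattern the screening tool looks for.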

Having an improved method for flagging statistically implausible data is good news for Carlisle, who has launched something of a second career for himself as a data detective. We hope journals outside the field of anesthesiology are paying attention to his work.


2 thoughts on “To catch a cheat: Paper improves on stats method that nailed prolific retractor Fujii”

  1. In the States, I wonder if you could maintain faculty tenure just by identifying the statistical basis of bad studies…would need a method paper every once in a while.

  2. Uh, yeah, you could support an entire cohort of research faculty from cradle to grave with the sole mission of identifying bad science. This is a very bad problem.
