Retraction Watch

Tracking retractions as a window into the scientific process

Who has the most retractions? Introducing the Retraction Watch leaderboard

with 25 comments

Ever since we broke the news about the issues with the now-retracted Science paper about changing people’s minds on gay marriage, we’ve been the subject of a lot of press coverage, which has in turn led a number of people to ask us: Who has the most retractions?

Well, we’ve tried to answer that in our new Retraction Watch leaderboard.

Here is the current list (click here for more detailed information about our methodology and additional notes):

  1. Yoshitaka Fujii (total retractions: 183)
  2. Joachim Boldt (89)
  3. Peter Chen (60)
  4. Diederik Stapel (54)
  5. Shigeaki Kato (36)
  6. Hendrik Schön (36)
  7. Hyung-In Moon (35)
  8. Naoki Mori (32)
  9. Scott Reuben (22)
  10. John Darsee (17)

While this post will remain the same, the always-current version will be available here and in the right-hand column of our pages.

We also like to think of retractions as more than just notches to add up, and consider what we can learn from studying them – with the ultimate goal of improving the way we do science.

Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.

Written by Alison McCook

June 16th, 2015 at 2:00 pm

  • Shecky R June 16, 2015 at 2:14 pm

    Boy, I remember back when the John Darsee case was a cause célèbre, and now he’s only number 10; my how time flies… (of course there are probably another 20 ahead of him who just haven’t been nailed yet).

  • EL June 16, 2015 at 2:28 pm

    What about Claudio Airoldi? He has 11 as PI, plus at least one as a collaborator (his name is on the retraction), and now a new one recently:

  • fernando pessoa June 16, 2015 at 2:40 pm

    Below the top 10, according to the always-current version:
    “Next up are Alirio Melendez (17), Ulrich Lichtenthaler (16), Jesus Angel Lemus (12) and Anil Potti (11.5, counting a partial retraction as a half)”,

    yet Silvia Bulfone-Paus has 13 retractions

  • alamode June 16, 2015 at 2:58 pm

    Fred Walumbwa (currently on seven retractions) has a number of articles flagged on PubPeer.

  • Note June 16, 2015 at 2:58 pm

    One thing stands out: three Japanese in the top 10.

    • Raymond Wan June 16, 2015 at 8:15 pm

      True, but I’m not sure about the significance of that. Somehow, I think such a list should be “normalized” by the amount of research done by the country, population, etc. And one’s work is generally only retracted when it’s published in a “notable” journal, since if it were published in an unknown journal, I’ll bet no one would catch it. I think this kind of list is great, but it should be viewed as just a top-10 list of individuals… I think there are plenty of problems if it’s used as a sample of a larger population…

    • Bernd June 17, 2015 at 6:13 am

      Given that the updated list also has three Germans in the top 10 (and two more among the runners-up), this finding might not be so outstanding after all.

  • Leonid Schneider June 16, 2015 at 3:38 pm

    We live in the times of the Mighty Impact Factor, where those who publish in Nature, Science and Cell rule. How about a different ranking, who has the biggest retractions? Cumulative Retracted Impact Factor?

    • Elliott June 17, 2015 at 3:41 am

      So you demand that the Impact Factor should not count in evaluating science but should count in evaluating retractions?

      • Leonid Schneider June 18, 2015 at 2:11 pm

        Why do you think this is a contradiction? Exposing big-IF retractions may speed up the demise of the impact factor.

  • Paul Brookes June 16, 2015 at 4:06 pm

    Nice article in the NYT, which, it should probably be emphasized, is part of a series over the past month or so (altho’ not specifically tagged as such)…

    I’d like to see a few more articles on solutions to these problems, rather than just providing fodder for the next member of Congress’s “defund the NIH” rant.

  • oldnuke June 16, 2015 at 9:00 pm

    How about looking at the impact of these retractions of articles with LIVE human clinical trials in progress? Potentially of major impact to real people with real problems!

  • Erp June 17, 2015 at 9:45 am

    Well, more broadly you could rank it by patients affected; I think Boldt’s papers were used to justify best practice with certain anaesthetics…
    You could also rank the list by the amount of grant funding obtained.

    • Peter Klaren June 21, 2015 at 5:39 am

      Agree. The list should be weighted for lives lost, patients affected, colleagues’ careers damaged, and grant/taxpayers’ money wasted.

  • Anonymous June 18, 2015 at 2:00 am

    What I think is remarkable is that all individuals come from countries without stable job prospects in science. For countries with permanent positions for scientists (like France) the retraction rate appears to be lower. Probably because you don’t need to fake results to survive if you already have a stable job.

    • cvdolan June 18, 2015 at 9:34 am

      Really? My guess is that all these individuals were in good tenured positions, and that therefore their “faking data” had little bearing on their job prospects.

    • fernandopessoa June 18, 2015 at 10:40 am

      France has not yet come under the microscope. Too early to tell.

    • fernando pessoa June 18, 2015 at 3:52 pm

      A publication from France commented on at PubPeer:

  • Gary June 18, 2015 at 9:17 am

    Nice – often wanted to see something like the above (wow 183!). Bit disappointed to see the UK is not up there – come on Britain try harder! 🙂

  • Shayne June 18, 2015 at 1:39 pm

    It would be interesting to rank journals by number of retractions, and maybe do an impact factor to retraction number ratio for each journal.

  • Possible June 18, 2015 at 3:12 pm

    I think that the retraction database that RW is now developing will bring us one step closer to resolving this whole issue, and will allow others to mine the data and see the trends based on: a) publisher; b) journal; c) IF vs non-IF; d) authors; e) country; etc.

    The only query/concern I have about the database is how silent retractions will be factored in.

  • Todd June 19, 2015 at 7:41 pm

    It shouldn’t stop with IF and journal; it should also list the number of people and number of retractions by institution / organization / agency, and so forth.
