Who has the most retractions? Introducing the Retraction Watch leaderboard

Ever since we broke the news about the issues with the now-retracted Science paper about changing people’s minds on gay marriage, we’ve been the subject of a lot of press coverage, which has in turn led a number of people to ask us: Who has the most retractions?

Well, we’ve tried to answer that in our new Retraction Watch leaderboard.

Here is the current list (click here for more detailed information about our methodology and additional notes):

  1. Yoshitaka Fujii (total retractions: 183)
  2. Joachim Boldt (89)
  3. Peter Chen (60)
  4. Diederik Stapel (54)
  5. Shigeaki Kato (36)
  6. Jan Hendrik Schön (36)
  7. Hyung-In Moon (35)
  8. Naoki Mori (32)
  9. Scott Reuben (22)
  10. John Darsee (17)

While this post will remain the same, the always-current version will be available here and in the right-hand column of our pages.

We also like to think of retractions as more than just notches to add up, and consider what we can learn from studying them – with the ultimate goal of improving the way we do science.

Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.

25 thoughts on “Who has the most retractions? Introducing the Retraction Watch leaderboard”

  1. Boy, I remember back when the John Darsee case was a cause célèbre, and now he’s only number 10; my, how time flies… (of course, there are probably another 20 ahead of him who just haven’t been nailed yet).

    1. True, but I’m not sure about the significance of that. Somehow, I think such a list should be “normalized” by the amount of research done by the country, its population, etc. And one’s work tends to be retracted only when it’s published in a “notable” journal, since if it were published in an unknown journal, I’ll bet no one would catch it. I think this kind of list is great, but it should be viewed as just a top-10 list of individuals… there are plenty of problems if it’s used as a sample of a larger population…

    2. Given that the updated list also has three Germans in the top 10 (and two more among the runners-up), this finding might not be so outstanding after all.

  2. We live in the age of the Mighty Impact Factor, where those who publish in Nature, Science and Cell rule. How about a different ranking: who has the biggest retractions? A Cumulative Retracted Impact Factor? (See the sketch below this thread.)

    1. So you demand that the Impact Factor should not count in evaluating science but should count in evaluating retractions?
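A “Cumulative Retracted Impact Factor” like the one floated above would be straightforward to compute once per-paper journal data existed. Here is a minimal sketch in Python, assuming a hypothetical list of retractions and made-up impact factors; none of the names or numbers below come from the leaderboard above.

```python
# Sketch: sum the impact factor of the retracting journal over each author's
# retracted papers. All authors, journals, and IF values are hypothetical.

retractions = [
    {"author": "Researcher A", "journal": "Journal X"},
    {"author": "Researcher A", "journal": "Journal Y"},
    {"author": "Researcher B", "journal": "Journal X"},
]

impact_factor = {"Journal X": 30.0, "Journal Y": 3.5}  # made-up IFs

cumulative_rif: dict[str, float] = {}
for paper in retractions:
    jif = impact_factor.get(paper["journal"], 0.0)
    cumulative_rif[paper["author"]] = cumulative_rif.get(paper["author"], 0.0) + jif

# Rank authors by the summed impact factor of their retracted papers.
for author, score in sorted(cumulative_rif.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{author}: {score:.1f}")
```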

  3. Nice article in the NYT, which, it should probably be emphasized, is part of a series over the past month or so (although not specifically tagged as such)…

    http://www.nytimes.com/2015/06/01/business/beyond-publish-or-perish-scientific-papers-look-to-make-splash.html

    http://www.nytimes.com/2015/05/30/science/michael-lacour-gay-marriage-science-study-retraction.html

    http://www.nytimes.com/interactive/2015/05/28/science/retractions-scientific-studies.html

    http://www.nytimes.com/2015/05/23/opinion/whats-behind-big-science-frauds.html

    I’d like to see a few more articles on solutions to these problems, rather than just providing fodder for the next member of Congress’s “defund the NIH” rant.

  4. How about looking at the impact of retractions of articles that have LIVE human clinical trials in progress? Potentially of major impact on real people with real problems!

  5. Well, more broadly, you could rank the list by patients affected; I think Boldt’s papers were used to justify best practice with certain anaesthetics…
    You could also rank the list by the amount of grant funding obtained.

    1. Agreed. The list should be weighted by lives lost, patients affected, colleagues’ careers damaged, and grant/taxpayers’ money wasted.

  6. What I think is remarkable is that all of these individuals come from countries without stable job prospects in science. In countries with permanent positions for scientists (like France), the retraction rate appears to be lower, probably because you don’t need to fake results to survive if you already have a stable job.

    1. Really? My guess is that all these individuals were in good tenured positions, and that therefore their “faking data” had little bearing on their job prospects.

  7. Nice – I’ve often wanted to see something like the above (wow, 183!). A bit disappointed to see the UK is not up there – come on, Britain, try harder! 🙂

  8. It would be interesting to rank journals by number of retractions, and maybe compute an impact-factor-to-retraction-count ratio for each journal (see the sketch below this thread).

    1. I am afraid such a ranking would actually discourage journals from actively pursuing retractions. In fact, they might even try as hard as possible to avoid publishing retractions, if only to avoid climbing further up the list.
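The journal-level ranking proposed above is equally simple to sketch. The journal names, impact factors, and retraction counts below are placeholders, not real data.

```python
# Sketch: rank journals by retraction count and report an
# impact-factor-to-retraction ratio. All values are hypothetical.

journals = {
    # journal: (impact_factor, retraction_count)
    "Journal X": (30.0, 12),
    "Journal Y": (3.5, 2),
    "Journal Z": (8.0, 0),
}

for name, (jif, n_retr) in sorted(journals.items(), key=lambda kv: kv[1][1], reverse=True):
    # Guard against division by zero for journals with no retractions yet.
    ratio = jif / n_retr if n_retr else float("inf")
    print(f"{name}: {n_retr} retractions, IF/retraction = {ratio:.2f}")
```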

  9. I think that the retraction database that RW is now developing will bring us one step closer to resolving this whole issue, and will allow others to mine the data and see trends based on: a) publisher; b) journal; c) IF vs. non-IF; d) authors; e) country; etc.

    The only query/concern I have about the database is how silent retractions will be factored in.
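Once such a database exists as a flat table, the mining described above reduces to simple group-bys, and one way to handle silent retractions would be to keep them in the same table behind a flag so any count can include or exclude them. A rough sketch with pandas, using an assumed schema; the column names and rows are illustrative, not the real database’s.

```python
# Sketch: trend-mining a hypothetical export of a retraction database.
# Column names and rows are assumptions, not the real schema.
import pandas as pd

df = pd.DataFrame(
    {
        "publisher": ["Pub A", "Pub A", "Pub B"],
        "journal": ["Journal X", "Journal Y", "Journal Z"],
        "country": ["JP", "DE", "US"],
        "silent": [False, True, False],  # flag for unannounced "silent" retractions
    }
)

# Counts along any dimension of interest: publisher, journal, country, ...
print(df.groupby("publisher").size())
print(df.groupby("country").size())

# Silent retractions stay in the same table and can be included or excluded.
print(df[df["silent"]].groupby("journal").size())
```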

  10. It shouldn’t stop with IF and journal; it should also list the number of people and the number of retractions by institution/organization/agency, and so forth.
