The Retraction Watch Leaderboard

Who has the most retractions? Here’s our unofficial list (see notes on methodology), which we’ll update as more information comes to light:

  1. Joachim Boldt (210) See also: Editors-in-chief statement, our coverage
  2. Yoshitaka Fujii (172) See also: Final report of investigating committee, our reporting, additional coverage
  3. Hironobu Ueshima (124) See also: our coverage
  4. Yoshihiro Sato (122) See also: our coverage
  5. Ali Nazari (103) See also: our coverage
  6. Jun Iwamoto (90) See also: our coverage
  7. Diederik Stapel (58) See also: our coverage
  8. Yuhji Saitoh (56) See also: our coverage
  9. Adrian Maxim (48) See also: our coverage
  10. A Salar Elahi (44) See also: our coverage
  11. Chen-Yuan (Peter) Chen (43) See also: SAGE, our coverage
  12. Jose L Calvo-Guirado (42) See also: our coverage
  13. Fazlul Sarkar (41) See also: our coverage
  14. Shahaboddin Shamshirband (41) See also: our coverage
  15. Hua Zhong (41) See also: journal notice
  16. Shigeaki Kato (40) See also: our coverage
  17. James Hunton (36) See also: our coverage
  18. Hyung-In Moon (35) See also: our coverage
  19. Dong Mei Wu (35) See also: National Natural Science Foundation of China finding
  20. Antonio Orlandi (34) See also: our coverage
  21. Dimitris Liakopoulos (33) (NB: We’re counting a book he co-authored as a single retraction. The book has 13 retracted chapters with DOIs that are not included in this figure.) See also: our coverage
  22. Jan Hendrik Schön (32) See also: our coverage
  23. Amelec Viloria aka Jesus Silva (32) See also: our coverage
  24. Naoki Mori (31) See also: our coverage
  25. Jun Ren (31) See also: our coverage
  26. Prashant K Sharma (31) See also: our coverage
  27. Bharat Aggarwal (30) See also: our coverage
  28. Victor Grech (30) See also: our coverage
  29. Soon-Gi Shin (30) See also: our coverage
  30. Tao Liu (29) See also: our coverage

Men continue to dominate the leaderboard, which is consistent with the findings of a 2013 paper suggesting that men are more likely to have papers retracted for fraud.

Notes:

Many accounts of the John Darsee story cite 80-plus retractions, which would place him seventh on the list, but Web of Science lists only 17, three of which are categorized as corrections. That’s not the only discrepancy. We’ve used our judgment, based on covering these cases, to arrive at the highest numbers we could verify.

Shigeaki Kato is likely to end up with 43 retractions, based on the results of a university investigation.

See an important update on The Retraction Watch Database, and our sustainability, here.

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at team@retractionwatch.com.

40 thoughts on “The Retraction Watch Leaderboard”

  1. A number of those in the list have papers that look like they should be retracted but haven’t been. I look forward to the “retraction curve”: is it hyperbolic, or is it best described by some other function?

  2. Curious…how many of these leading offenders worked for pharmaceutical/device companies? Given your reference to anesthesiology, it is interesting to note that the editor of “Anesthesia and Analgesia” wrote that “Anesthesia & Analgesia has experienced its share of fraud. Not a single case, including this one, has involved a study directly sponsored by a drug or device company. Sponsored studies are very closely audited, with each case report form checked against patient and laboratory data.” Keeping an eye on industry (‘the usual suspect’) is understandable, intense, somehow comforting, and helps sell books, but what if the real enemy isn’t within industry?

    1. Having written the quoted text, I have yet to see outright fraud in any study that was directly sponsored by a pharmaceutical company. I have significant issues with other kinds of misrepresentation. A classic example is Merck claiming that naproxen reduced the risk of myocardial infarction, rather than stating the study finding that rofecoxib increased risk (see NEJM 353;26, 2005). The cases of fraud that I’ve personally handled, including Fujii, Boldt, and Reuben, were not directly sponsored by industry. One of the few Boldt studies NOT retracted was funded by pharma. This study had an actual IRB approval, underwent patient-level auditing by the company, and (no surprise) took years longer to complete than his fraudulent studies.

      One has to be alert for fraud and misrepresentation in any study. However, pharmaceutical companies have too much at risk to engage in outright fabrication.

      1. Hi Steven,
        Thank you for the additional information. To reduce the risk of retraction, industry sponsors are introducing publication audits (e.g., to check that authors meet authorship criteria, to check that disclosures are made, to ensure authors had access to data, etc.). For the first time, “auditing” appears in the Good Publication Practice guidelines (now in its third version, GPP3; published in Annals of Internal Medicine, 2015; disclosure: I’m a co-author). All sponsors, industry or otherwise, should consider a risk-based approach to publication audits. The possibility of an audit may help PREVENT (vs. detect) publication misconduct. If most retractions occur with academically sponsored research, then academia needs to find funds to conduct publication audits. Clinical trial audits are conducted by industry and others…why not publication audits?

  3. “We note that all but one of the top 25 are men”

    Marion A. Brach (14) Woman
    Silvia Bulfone-Paus (13) Woman

  4. I’m very impressed by the rapid updates here, and I stand corrected on Chiranjeevi’s tally. As with Darsee and Fujii, many journals seem to have more important matters to attend to than retracting fakes. Perhaps you could have a leaderboard for journals: the number of fraudulent papers identified in formal investigations that remain unretracted.

    One more contender, who sadly won’t make the top ten:
    Gilson Khang: 21
    http://www.webcitation.org/6VD9lOA5o
    http://www.webcitation.org/6VD9x5Ewi

    These papers, all in “Tissue Engineering and Regenerative Medicine”, are mentioned in Grieneisen’s 2012 review. I’m glad I archived them, because they have since vanished.

  5. What to do with co-authors? Most of them are probably “innocent”, of course, and therefore shouldn’t be listed.

    However, regarding Hidenori Toyooka, co-author on several dozen of record holder Fujii’s retracted papers, RW wrote:

    The investigators do identify one co-author, Hidenori Toyooka, who appears to have known about the fabrication and yet still co-authored “dozens” of papers with Fujii. According to the report, Toyooka “recognized the suspicion” raised against his colleague in 2000, but “did not take any action.” (http://retractionwatch.com/2012/07/02/does-anesthesiology-have-a-problem-final-version-of-report-suggests-fujii-will-take-retraction-record-with-172/)

    Does he “deserve” to be listed?

    Another example: Elena Bulanova and Vadim Budagian, two former postdocs of Silvia Bulfone-Paus, appeared as co-authors on twelve of the thirteen retracted Bulfone-Paus papers. In the formal investigation, both were officially held responsible for the misconduct in all twelve of these papers. See, for example: https://www.timeshighereducation.co.uk/news/new-retraction-of-paper-by-husband-and-wife-research-team/418431.article

    I think those two definitely deserve to be listed.

    1. The “innocence” of co-authors is a difficult and problematic concept. Clearly, co-authorship confers many rewards – citations, acknowledgments, H-(and other) indexes, expanding lists of publications. It also confers responsibilities. The ICMJE “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals” (often referred to as the Vancouver Protocols) state this clearly:

      “Authorship confers credit and has important academic, social, and financial implications. Authorship also implies responsibility and accountability for published work.”

      Since the first edition in 1979, there have been statements voicing the expectation that those named as authors take responsibility for the content of papers bearing their names. Consistently, it has recommended that a “covering letter should contain a statement that the manuscript has been seen and approved by all authors”. By 1988, this had firmed up into the form of words that underlie Authorship Criteria in most national and international statements today:

      “All persons designated as authors should qualify for authorship. Each author should have participated sufficiently in the work to take public responsibility for the content.”

      For large-scale multi-centre projects, a caveat was added in about 1999:
      “Each author should have participated sufficiently in the work to take public responsibility for appropriate portions of the content. … Some journals now also request that one or more authors, referred to as “guarantors,” be identified as the persons who take responsibility for the integrity of the work as a whole, from inception to published article, and publish that information.”

      Nevertheless, it was reasserted that those named as authors must have met all the criteria for authorship and have given “final approval of the version to be published.” By 2013, the provision for assigning overall responsibility to a single ‘guarantor’ (even in large multi-centre, multi-group studies) had been abandoned and a stronger statement added as a 4th criterion for authorship:

      “4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
      In addition to being accountable for the parts of the work he or she has done, an author should be able to identify which co-authors are responsible for specific other parts of the work. In addition, authors should have confidence in the integrity of the contributions of their co-authors.
      All those designated as authors should meet all four criteria for authorship, and all who meet the four criteria should be identified as authors. Those who do not meet all four criteria should be acknowledged—see Section II.A.3 below. These authorship criteria are intended to reserve the status of authorship for those who deserve credit and can take responsibility for the work.”

      It must be really tough if you have been “deceived” by one of your colleagues into believing that some data, images, or key calculations were accurate when they were not. But it seems to me that one of the guarantees on which we depend when we read co-authored papers is that the integrity of those key elements on which scientific discoveries are based has been validated, subjected to at least some due diligence by those who have signed their names to the paper.

      I think that when co-authors understand the gravity of the consequences of not doing that, we might just have fewer cases of research misconduct appearing in the literature. I hope.

    2. This brings up a very interesting topic for debate. Lots of discussion about relative credit for authorship and what it means to be first, second, or corresponding author. Not so much on how to allot credit for retractions and how much co-authors should have known or done.

  6. Following the Maryka Quik link, it seems that you’ve counted both the retracted paper and the retraction letter. I actually count 8, not 16, retractions. You should at least give all of these a quick check.

  7. I think Adrian Maxim’s correct tally is 48, as given in Grieneisen and Zhang 2012. If you broaden the search to “Maxim, A” they all show up on the IEEE search.

  8. Given that systematic reviews have been at the forefront of picking up fraud resulting in subsequent retraction (see Tramèr, Eur J Anaesthesiol 2013; 30: 195), it is worth seeing how many systematic reviews might blindly continue to include retracted data. As best I can see, Fujii data still appear in systematic reviews – probably in this one, for example.

    Surg Laparosc Endosc Percutan Tech. 2013 Feb;23(1):79-87. doi: 10.1097/SLE.0b013e31827549e8.
    Comparison of the efficacy of ondansetron and granisetron to prevent postoperative nausea and vomiting after laparoscopic cholecystectomy: a systematic review and meta-analysis.
    Wu SJ, Xiong XZ, Lin YX, Cheng NS.

    If they have included retracted data, shouldn’t the systematic review also be retracted?

    1. Great question; I think they should. I would retract at least half of all scientific papers published, as they are irreproducible and flawed.

  9. Is this a truly scientific way to measure retractions? Perhaps we should be more interested in someone’s “retraction ratio”, e.g., the number of retractions as a percentage of their total output of papers (a rough sketch of such a calculation appears after this exchange).

    1. That’s certainly an interesting idea. But we’ve never suggested this is a “truly scientific way to measure retractions.” It’s just a list.
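
A minimal sketch, in Python, of how such a “retraction ratio” might be computed; the function name and the counts below are illustrative assumptions, not actual figures for any researcher:

```python
def retraction_ratio(retracted: int, total_papers: int) -> float:
    """Return retractions as a percentage of an author's total published output."""
    if total_papers <= 0:
        raise ValueError("total_papers must be positive")
    return 100.0 * retracted / total_papers

# Hypothetical example: 50 retractions out of 200 total papers is a 25.0% ratio.
print(f"{retraction_ratio(50, 200):.1f}%")  # prints "25.0%"
```

Any such ratio would carry the same caveats as the raw counts: as other comments here note, an author’s total output is itself hard to pin down, since abstracts, proceedings volumes, and database coverage all vary.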

  10. Like so many Bernie Madoffs or Lance Armstrongs who, for the esteem and glory and power and influence associated with success, were willing to lie and cheat to get it.

    One lie begetting more lies until it’s careening out of control.

    I mean if you can’t trust a guy in a lab coat, who can you trust?

    1. Thanks for the suggestion. Our rule is that only researchers who have been first or last authors of retracted papers are listed, and Yahaghi has only been a middle author.

  11. Ivan, regarding the RW Leaderboard stats, is it worth considering a caveat to J Darsee’s entry, namely that many of his numbers may have been only abstracts published solely in the proceedings volume for the annual meeting of the American Heart Association? (Our Cardiology Fellows went to the annual meetings to find jobs, and what better way to get noticed than to make lots of presentations? The AHA happily complied by having no limit on the number that could be submitted and accepted. Contrast that with the annual Biophysical Society meeting, which wisely required sponsorship by a member and limited that number to two.)

    Also, where do the Friedhelm Herrmann and Marion Brach publications (circa 1999) fall on your list? The journal published exhaustive and detailed results of the review by an eminent scientist (who declined our invitation to talk about his investigation).

      1. Thanks for the questions, John.

        Re: Darsee, we count abstracts as publications, as the investigations calling for 82 retractions did.

        Re: Brach and Herrmann, our database is much more current than that 2006 story. Herrmann has 22 retractions http://retractiondatabase.org/RetractionSearch.aspx#?auth%3dHerrmann%252c%2bFriedhelm and Brach has 14: http://retractiondatabase.org/RetractionSearch.aspx#?auth%3dBrach%252c%2bMarion%2bA

        Neither count meets the bar — 25 — to be in the top 30 of our leaderboard.

        1. So, there are publications with a small ‘p’ or a large ‘P’. (ORI once got blown off by the New York Academy of Sciences, which said that by policy they don’t retract “proceedings”… yet a third ‘p’.) Would qualifying the ‘RW measure’ perhaps better weigh the impact of bulk retractions on a field of research? (For example, the significance of the ‘Darsee’ numbers looks big but pales in comparison to another HMS example with roughly one-third the recommended retractions.)

          Thanks for the update regarding Herrmann and Brach. Ulf Rapp’s published DFG investigation found falsified or suspect data in 94 publications, but as of 2006 only 19 had been retracted, with 2 corrections (Science 312, pp. 41–42, 2006). Best, JK.

  12. The notes state that “Many accounts of the John Darsee story cite 80-plus retractions, which would place him third on the list”

    That is no longer true. A few recent entries mean that he would now be number five or six.

  13. It’s very nice to have these frequency counts, but surely we need a retraction impact factor to rank-order these fine fellows in a more meaningful way? Partly tongue in cheek, though I do wonder what that would look like. Thank you for all your fine work. It strikes me that your site is a great educational resource for students, because vigilance and education on these matters seem to have been seriously lacking. This is a project that is certainly worthy of all our support. It is a shame that our governments don’t fund something of this nature, but given how corrupt they can be, perhaps we shouldn’t want them to…

  14. Just a layperson, but what I noticed is that most of the names are non-Western; many appear to be clearly Asian names. The list consists of all men, as one of the previous comments states. What I wonder is why. Perhaps there are translation/language issues; perhaps scholarship standards are different (not worse/better) in less westernized cultures; perhaps if one is flagged with a retraction in Western cultures, one no longer has an opportunity to become a serial offender.

  15. After so many years, can we see a trend, an increase or a decrease? And what is the role of RW in this activity? Is RW merely reporting/blogging?
