Who has the most retractions? Here’s our unofficial list (see notes on methodology), which we’ll update as more information comes to light:
- Joachim Boldt (220) See also: Editors-in-chief statement, our coverage
- Yoshitaka Fujii (172) See also: Final report of investigating committee, our reporting, additional coverage
- Yoshihiro Sato (124) See also: our coverage
- Hironobu Ueshima (124) See also: our coverage
- Ali Nazari (104) See also: our coverage
- Jun Iwamoto (91) See also: our coverage
- A Salar Elahi (76) See also: our coverage
- Diederik Stapel (58) See also: our coverage
- Yuhji Saitoh (56) See also: our coverage
- Adrian Maxim (48) See also: our coverage
- Ashok Pandey (45) See also: The Hindu (Note: 3 of the retractions in Bioresource Technology had his name added during revisions of the manuscript, but his name was removed from the final manuscript in the course of the investigation, so we are including them in his total count.)
- Jose L Calvo-Guirado (44) See also: our coverage
- Chen-Yuan (Peter) Chen (43) See also: SAGE, our coverage
- Fazlul Sarkar (41) See also: our coverage
- Shahaboddin Shamshirband (41) See also: our coverage
- Hua Zhong (41) See also: journal notice
- Shigeaki Kato (40) See also: our coverage
- James Hunton (36) See also: our coverage
- Hyung-In Moon (35) See also: our coverage
- Dong Mei Wu (35) See also: National Natural Science Foundation of China finding
- Antonio Orlandi (34) See also: our coverage
- Dimitris Liakopoulos (33) (NB: We’re counting a book he co-authored as a single retraction. The book has 13 retracted chapters with DOIs that are not included in this figure.) See also: our coverage
- Jan Hendrik Schön (32) See also: our coverage
- Amelec Viloria aka Jesus Silva (32) See also: our coverage
- Naoki Mori (31) See also: our coverage
- Jun Ren (31) See also: our coverage
- Prashant K Sharma (31) See also: our coverage
- Bharat Aggarwal (30) See also: our coverage
- Victor Grech (30) See also: our coverage
- Soon-Gi Shin (30) See also: our coverage
Men continue to dominate the leaderboard, which agrees with the general findings of a 2013 paper suggesting that men are more likely to have papers retracted for fraud.
Notes:
Many accounts of the John Darsee story cite 80-plus retractions, which would place him sixth on the list, but Web of Science only lists 17, three of which are categorized as corrections. That’s not the only discrepancy. We’ve used our judgment based on covering these cases to arrive at the highest numbers we could verify.
Shigeaki Kato is likely to end up with 43 retractions, based on the results of a university investigation.
See an important update on The Retraction Watch Database, and our sustainability, here.
Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
A number of those in the list have papers that look like they should be retracted but haven’t been. I look forward to the “retraction curve”: is it hyperbolic, or is it best described by some other function?
Is it possible that crying in the lab and falling in love with the PI make you more honest?
http://blogs.scientificamerican.com/voices/furor-over-tim-hunt-must-lead-to-systemic-change/
Marion A. Brach, the other active party in the widely known “Herrmann-Brach affair”, has 14 retractions: http://www.ncbi.nlm.nih.gov/pubmed/?term=brach+m+AND+retracted
Jon Sudbø, formerly at the Norwegian Radium Hospital and the University of Oslo, has 12 retractions. The fabrications in his papers were revealed in 2006. See http://www.ncbi.nlm.nih.gov/pubmed/?term=sudbo+j+AND+retract* and https://en.wikipedia.org/wiki/Jon_Sudbø
What about Khalid Zaman, who has 16 retractions – at least according to your own post (http://retractionwatch.com/2014/12/19/elsevier-retracting-16-papers-faked-peer-review/)?
Curious…how many of these leading offenders worked for pharmaceutical/device companies? Given your reference to anesthesiology, it is interesting to note that the editor of “Anesthesia and Analgesia” published that “Anesthesia & Analgesia has experienced its share of fraud. Not a single case, including this one, has involved a study directly sponsored by a drug or device company. Sponsored studies are very closely audited, with each case report form checked against patient and laboratory data.” Keeping an eye on industry (‘the usual suspect’) is understandable, intense, somehow comforting, and helps sell books, but what if the real enemy isn’t within industry?
Having written the quoted text, I have yet to see outright fraud in any study that was directly sponsored by a pharmaceutical company. I have significant issues with other kinds of misrepresentation. A classic example is Merck claiming that naproxen reduced the risk of myocardial infarction, rather than stating the study’s finding that rofecoxib increased risk (see NEJM 353;26, 2005). The cases of fraud that I’ve personally handled, including Fujii, Boldt, and Reuben, were not directly sponsored by industry. One of the few Boldt studies NOT retracted was funded by pharma. That study had an actual IRB approval, underwent patient-level auditing by the company, and (no surprise) took years longer to complete than his fraudulent studies.
One has to be alert for fraud and misrepresentation in any study. However, pharmaceutical companies have too much at risk to engage in outright fabrication.
Hi Steven,
Thank you for the additional information. To reduce the risk of retraction, industry sponsors are introducing publication audits (e.g., to check that authors meet authorship criteria, that disclosures are made, that authors had access to data, etc.). For the first time, “auditing” appears in the Good Publication Practice guidelines (now in its third version, GPP3; published in Annals of Internal Medicine 2015; disclosure: I’m a co-author). All sponsors, industry or otherwise, should consider a risk-based approach to publication audits. The possibility of an audit may help PREVENT (vs. detect) publication misconduct. If most retractions occur with academically sponsored research, then academia needs to find funds to conduct publication audits. Clinical trial audits are conducted by industry and others…why not publication audits?
Some champion chemists will be feeling neglected:
Hua Zhong: 41
http://journals.iucr.org/e/issues/2010/01/00/me0404/index.html
Tao Liu: 29
http://journals.iucr.org/e/issues/2010/01/00/me0405/index.html
Pattium Chiranjeevi: ~30 retracted out of ~70 fakes.
google scholar {(retraction OR retracted) author:”chiranjeevi p”}
http://www.rsc.org/chemistryworld/News/2008/March/25030801.asp
“We note that all but one of the top 25 are men”
Marion A. Brach (14) Woman
Silvia Bulfone-Paus (13) Woman
That needs to be normalized. A simple Fisher’s exact test, for example…
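Purely as an illustration of the normalization this comment suggests, here is a minimal sketch using Fisher’s exact test on a 2×2 table. All counts below are hypothetical placeholders, not figures from the Retraction Watch Database:

```python
# Hypothetical sketch: testing whether men are over-represented among
# high-retraction authors relative to a comparison pool of authors.
# The counts below are invented placeholders for illustration only.
from scipy.stats import fisher_exact

#                 on leaderboard   not on leaderboard
table = [[29, 971],   # men   (29 of a hypothetical 1,000-author sample)
         [1, 999]]    # women (1 of a hypothetical 1,000-author sample)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3g}")
```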
“In 2000, a commission of inquiry found that Herrmann and his then partner Brach and other employees had published 94 of their 400 scientific publications in the field of cancer research with falsified data.”
https://plagiat.htw-berlin.de/hop/hos.html#brach
Same Marion Brach?
I’m very impressed by the rapid updates here, and I stand corrected on Chiranjeevi’s tally. As with Darsee and Fujii, many journals seem to have more important matters to attend to than retracting fakes. Perhaps you could have a leaderboard for journals: the number of fraudulent papers identified in formal investigations that remain unretracted.
One more contender, who sadly won’t make the top ten:
Gilson Khang: 21
http://www.webcitation.org/6VD9lOA5o
http://www.webcitation.org/6VD9x5Ewi
These papers, all in “Tissue Engineering and Regenerative Medicine”, are mentioned in Grieneisen’s 2012 review. I’m glad I archived them, because they have since vanished.
Great idea about the retraction-resistant journals.
What to do with co-authors? Most of them are probably “innocent”, of course, and therefore shouldn’t be listed.
However, about Hidenori Toyooka, a co-author on several dozen of record holder Fujii’s retracted papers, RW wrote:
The investigators do identify one co-author, Hidenori Toyooka, who appears to have known about the fabrication and yet still co-authored “dozens” of papers with Fujii. According to the report, Toyooka “recognized the suspicion” raised against his colleague in 2000, but “did not take any action.” (http://retractionwatch.com/2012/07/02/does-anesthesiology-have-a-problem-final-version-of-report-suggests-fujii-will-take-retraction-record-with-172/)
Does he “deserve” to be listed?
Another example: Elena Bulanova and Vadim Budagian, the two former postdocs of Silvia Bulfone-Paus, appeared as co-authors on twelve of the thirteen retracted Bulfone-Paus papers. In the formal investigation, both were officially held responsible for the misconduct in all twelve papers. See, for example: https://www.timeshighereducation.co.uk/news/new-retraction-of-paper-by-husband-and-wife-research-team/418431.article
I think, those two definitely deserve to be listed.
The “innocence” of co-authors is a difficult and problematic concept. Clearly, co-authorship confers many rewards – citations, acknowledgments, H-(and other) indexes, expanding lists of publications. It also confers responsibilities. The ICMJE “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals” (often referred to as the Vancouver Protocols) state this clearly:
“Authorship confers credit and has important academic, social, and financial implications. Authorship also implies responsibility and accountability for published work.”
Since the first edition in 1979, there have been statements voicing the expectation that those named as authors take responsibility for the content of papers bearing their names. Consistently, it has recommended that a “covering letter should contain a statement that the manuscript has been seen and approved by all authors”. By 1988, this had firmed up into the form of words that underlie Authorship Criteria in most national and international statements today:
“All persons designated as authors should qualify for authorship. Each author should have participated sufficiently in the work to take public responsibility for the content.”
For large-scale multi-centre projects, a caveat was added in about 1999:
“Each author should have participated sufficiently in the work to take public responsibility for appropriate portions of the content. … Some journals now also request that one or more authors, referred to as “guarantors,” be identified as the persons who take responsibility for the integrity of the work as a whole, from inception to published article, and publish that information.”
Nevertheless, it was reasserted that those named as authors must have met all the criteria for authorship and have given “final approval of the version to be published.” By 2013, the provision for assigning overall responsibility to a single ‘guarantor’ (even in large multi-centre, multi-group studies) had been abandoned and a stronger statement added as a 4th criterion for authorship:
“4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
In addition to being accountable for the parts of the work he or she has done, an author should be able to identify which co-authors are responsible for specific other parts of the work. In addition, authors should have confidence in the integrity of the contributions of their co-authors.
All those designated as authors should meet all four criteria for authorship, and all who meet the four criteria should be identified as authors. Those who do not meet all four criteria should be acknowledged—see Section II.A.3 below. These authorship criteria are intended to reserve the status of authorship for those who deserve credit and can take responsibility for the work.”
It must be really tough if you have been “deceived” by one of your colleagues into believing that some data, images, or key calculations were accurate when they were not. But it seems to me that one of the guarantees on which we depend when we read co-authored papers is that the integrity of the key elements on which scientific discoveries are based has been validated, subjected to at least some due diligence, by those who have signed their names to the paper.
I think that when co-authors understand the gravity of the consequences of not doing that, we might just have fewer cases of research misconduct appearing in the literature. I hope.
This brings up a very interesting topic for debate. Lots of discussion about relative credit for authorship and what it means to be first, second, or corresponding author. Not so much on how to allot credit for retractions and how much co-authors should have known or done.
“Mastuyama” should be “Matsuyama”
Following the Maryka Quik link, it seems that you’ve counted both the retracted paper and the retraction letter. I actually count 8, not 16, retractions. You should at least give all of these a quick check.
I think Adrian Maxim’s correct tally is 48, as given in Grieneisen and Zhang 2012. If you broaden the search to “Maxim, A” they all show up on the IEEE search.
Given that systematic reviews have been at the forefront of picking up fraud resulting in subsequent retraction (see Tramer, Eur J Anaesthesiol 2013;30:195), it is worth seeing how many systematic reviews might blindly continue to include retracted data. As best I can see, Fujii data still appear in systematic reviews, probably in this one, for example.
Surg Laparosc Endosc Percutan Tech. 2013 Feb;23(1):79-87. doi: 10.1097/SLE.0b013e31827549e8.
Comparison of the efficacy of ondansetron and granisetron to prevent postoperative nausea and vomiting after laparoscopic cholecystectomy: a systematic review and meta-analysis.
Wu SJ1, Xiong XZ, Lin YX, Cheng NS.
If they have included retracted data, shouldn’t the systematic review also be retracted?
Great question, I think they should. I would retract at least half of all scientific papers published, as they are not reproducible and flawed.
Would it be informative to insert a last-updated date at the bottom of this post?
Interesting! The breakdown of frauds by gender would suggest women’s superiority.
Is this a truly scientific way to measure retractions? Perhaps we should be more interested in someone’s “retraction ratio”, e.g., the number of retractions as a percentage of their total output of papers.
That’s certainly an interesting idea. But we’ve never suggested this is a “truly scientific way to measure retractions.” It’s just a list.
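As an aside, the “retraction ratio” proposed above is easy to state precisely. Here is a minimal sketch; the numbers are invented for illustration, not figures from the database:

```python
# Hypothetical sketch of the proposed "retraction ratio":
# retractions as a percentage of an author's total published output.
def retraction_ratio(retractions: int, total_papers: int) -> float:
    """Return retractions as a percentage of total papers published."""
    return 100.0 * retractions / total_papers

# Placeholder numbers for illustration only:
print(f"{retraction_ratio(50, 400):.1f}% of output retracted")  # -> 12.5%
```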
Like so many Bernie Madoffs, or Lance Armstrongs who, for the esteem and glory and power and influence associated with success, were willing to lie and cheat to get it.
One lie begetting more lies until it’s careening out of control.
I mean if you can’t trust a guy in a lab coat, who can you trust?
Please search for Emad Yahaghi in your database. He has 22 retracted papers.
Thanks for the suggestion. Our rule is that only researchers who have been first or last authors of retracted papers are listed, and Yahaghi has only been a middle author.
Ivan, regarding the RW Leaderboard stats, is it worth adding a caveat to J. Darsee’s entry that many of his numbers may have been only abstracts published solely in the proceedings volume for the annual meeting of the American Heart Association? Our Cardiology Fellows went to the annual meetings to find jobs, and what better way to get noticed than to make lots of presentations? The AHA happily complied by having no limit on the number that could be submitted and accepted. (Contrast that with the annual Biophysical Society meeting, which wisely required sponsorship by a member and limited that number to two.)
Also, where do the Friedhelm Herrmann and Marion Brach publications (circa 1999) fall on your list? The journal published the exhaustive and detailed results of the review by an eminent scientist (who declined our invitation to talk about his investigation).
To answer my 2nd question, Ulf Rapp’s DFG investigation found 94 papers with false or suspected data, but as of 2006 only 19 had been retracted, and only two had been corrected. See Science 312 : 41-42, 2006, or https://www.chronicle.com/article/german-investigators-seek-to-unravel-a-mammoth-case-of-scientific-fraud/
Thanks for the questions, John.
Re: Darsee, we count abstracts as publications, as the investigations calling for 82 retractions did.
Re: Brach and Herrmann, our database is much more current than that 2006 story. Herrmann has 22 retractions http://retractiondatabase.org/RetractionSearch.aspx#?auth%3dHerrmann%252c%2bFriedhelm and Brach has 14: http://retractiondatabase.org/RetractionSearch.aspx#?auth%3dBrach%252c%2bMarion%2bA
Neither count meets the bar — 25 — to be in the top 30 of our leaderboard.
So, there are publications with a small ‘p’ or a large “P”. (ORI once got blown off by the New York Academy of Sciences, which said that by policy it doesn’t retract “proceedings”…yet a third “p”.) Would qualifying the ‘RW measure’ perhaps better weigh the impact of bulk retractions on a field of research? (For example, the significance of the Darsee numbers looks big, but it pales in comparison to another HMS example with ~1/3 the recommended retractions.)
Thanks for the update regarding Herrmann and Brach. Ulf Rapp’s published DFG investigation found falsified or suspect data in 94 publications, but as of 2006 only 19 had been retracted, with 2 corrections. Science 312, pp. 41-42, 2006. Best, JK.
The notes state that “Many accounts of the John Darsee story cite 80-plus retractions, which would place him third on the list”
That is no longer true. A few recent entries mean he would now be number five or six.
Good point, updated.
It’s very nice to have these frequency counts, but surely we need a retraction impact factor to rank-order these fine fellows in a more meaningful way? Partly tongue in cheek, though I do wonder what that would look like. Thank you for all your fine work. It strikes me that your site is a great educational resource for students, because vigilance and education on these matters seem to have been seriously lacking. This is a project that is certainly worthy of all our support. It is a shame that our governments don’t fund something of this nature, but given how corrupt they can be, perhaps we shouldn’t want them to…
Publication Hall of Shame !!!
Just a lay person, but what I noticed is that most of the names are non-Western; many appear to be clearly Asian names. And the list consists almost entirely of men, as one of the previous comments states. What I wonder is why. Perhaps there are translation/language issues; perhaps scholarship standards are different (not worse/better) in less Westernized cultures; perhaps if one is flagged with a retraction in Western cultures, one no longer has the opportunity to become a serial offender.
It’s also interesting that the top 2 researchers are both anesthesiologists.
After so many years, can we see the trend? Is it increasing or decreasing, and what is the role of RW in this activity? Is RW merely reporting/blogging?
I reckon it would be better if the list noted when a scientist has cleared his or her name following official investigations, whether conducted by the institution or by a court.
This list should draw a distinction between a culprit and a victim. So, it should be open to any relevant update.
It would be very interesting to see the top 100 journals with the most retractions. I think that could up the pressure on the system and help prevent future retractions a bit. Could you post this? Thanks!