A pair of engineering researchers has analyzed the work of a handful of prolific scientific fraudsters, and has concluded that science needs a “shame list” to deter future misconduct.
The paper, “Analysis and Implications of Retraction Period and Coauthorship of Fraudulent Publications,” by Jong Yong Abdiel Foo and Xin Ji Alan Tan, of Ngee Ann Polytechnic in Singapore, appeared online last week in Accountability in Research: Policies and Quality Assurance.
The authors write that
it would be interesting to understand the association of coauthorship and the retraction process, with the objectives of this study set to be: (1) assess the period between a fraudulent work being published and it being retracted for a selected pool of researchers, (2) evaluate the correlation between fraudulent publications and the number of coauthors, (3) discuss the possible use of coauthor(s) as a strategy for publishing fraudulent work in the literature, and (4) propose a possible approach to tighten coauthorship by implicating all coauthor(s) of the work if it is eventually found to be fraudulent.
The authors looked at five researchers who each had at least 15 retracted publications: Joachim Boldt, Jan Hendrik Schön, Naoki Mori, Scott Reuben, and Wataru Matsuyama:
The obtained results show that the retraction period is 48.96 ± 32.16 months for the 113 publications attributed to the 5 studied researchers. There are a total of 180 coauthors, with 6.40 ± 3.26 coauthors per researcher's retracted publication. The linear regression analysis indicates that there is limited correlation (R² = .008) between the citation counts and the retraction period. The p value for multiple F-tests assessing the number of coauthors on a fraudulent publication on an inter-researcher basis ranges from < .001 to .458. It is also found that a stronger correlation (R² = .592) exists for the likelihood of a researcher to involve different individuals in isolated fraudulent publications while selecting only a very few to be frequent coauthors of their mischievous acts. With this study, the possible use of coauthors as a strategy for publishing fraudulent work and a potential approach to tighten coauthorship are discussed.
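For readers who want to poke at figures like these, the headline numbers are simple descriptive statistics: a mean ± standard deviation of the months from publication to retraction, and the R² of a one-predictor linear regression of retraction period against citation count. Here is a minimal Python sketch of those two calculations; the records below are invented placeholders, not the authors' dataset:

```python
import statistics  # requires Python 3.10+ for statistics.correlation

# Hypothetical records: (months from publication to retraction, citation count).
# Placeholder values for illustration only -- not the authors' data.
retractions = [(14, 30), (52, 12), (80, 45), (36, 7), (61, 22)]

periods = [months for months, _ in retractions]
citations = [cites for _, cites in retractions]

# Mean +/- sample standard deviation of the retraction period,
# the form of the paper's "48.96 +/- 32.16 months" figure.
print(f"retraction period: {statistics.mean(periods):.2f} "
      f"+/- {statistics.stdev(periods):.2f} months")

# For a simple one-predictor regression, R^2 is just the square of the
# Pearson correlation -- the form of the paper's "R^2 = .008" figure.
r = statistics.correlation(citations, periods)
print(f"R^2 = {r ** 2:.3f}")
```

An R² that close to zero simply means citation counts tell you essentially nothing about how quickly a fraudulent paper gets retracted.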
There are a few important caveats to the results, which the authors acknowledge. One is that five researchers probably aren't representative. Another is that the authors relied entirely on PubMed, which, while very powerful, doesn't actually include all of the retractions for these five researchers. For example, the authors analyze 33 retractions by Boldt, when in fact he has had 79 papers retracted (and journals have promised to retract another nine). The Mori and Reuben counts are also off by about 10% each.
Those discrepancies — which, if we may say so, a quick check of Retraction Watch would have resolved — dovetail with one of the authors' proposed ways to decrease misconduct, a "shame list":
…one possible means to dampen this unethical thinking is to have a freely available online ‘shame list’ where the full names of the disgraced author and coauthors, with their parent organization, are included. With this list, it would be easy for institutions, grant agencies, and journal editors to check on any fraudulent history of a given researcher. Presently, free online databases such as PubMed and Google Scholar provide limited information on this. A good starting point would be to incorporate the ‘shame list’ on the websites of organizations that have gained recognition for their efforts on publication ethics, such as the Committee on Publication Ethics (COPE).
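Mechanically, what the authors describe is nothing more exotic than a public lookup table keyed on researcher identity. A minimal sketch of that idea in Python, with all names and fields invented for illustration (the paper specifies no schema), might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class MisconductRecord:
    """One entry on the hypothetical 'shame list'."""
    author: str
    institution: str
    retracted_papers: list[str] = field(default_factory=list)

# Keyed on author name here purely for simplicity; a real system would need
# unambiguous identifiers (e.g., ORCID), which the paper does not address.
registry: dict[str, MisconductRecord] = {}

def check_author(name: str) -> MisconductRecord | None:
    """The check an editor or grant agency would run on a submission."""
    return registry.get(name)
```

Even this toy version surfaces the hard problems the paper leaves unaddressed: name disambiguation, and how to distinguish honest error from misconduct (a point several commenters raise below).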
We asked Ferric Fang, who has been studying retractions since 2010, when he discovered troubling problems in Mori's work in his own journal, what he thought of the paper and its recommendations:
I would distinguish between (1) the concept of a centralized data repository (‘shame list’) containing the names of individuals found to have committed research misconduct and (2) the concept that all coauthors should be held equally responsible for any paper retracted due to fraud. Both concepts are proposed in this article by Foo and Tan. The former proposal is relatively uncontroversial. The latter constitutes ‘guilt by association’ and is overtly unfair.
Modern scientific practices make it logistically impossible for each coauthor to be able to vouch for the validity of every piece of data in a research article. One must be able to trust one’s collaborators. Not infrequently, research misconduct is exposed by a whistleblower in the same lab, who might also be a coauthor. Should the whistleblower be tarred with the same brush as the miscreant? This would create an even greater disincentive to whistleblowing than currently exists.
The flip side of this, of course, can be found in a Nature editorial from last year:
It is unacceptable for lab heads – who are happy to take the credit for good work – to look at raw data for the first time only when problems in published studies are reported.
Fang also said the sample size is too small, and that
…while it is interesting to look for general trends to try to understand research misconduct, it is also hazardous because each case of misconduct is unique. It is possible that coauthors share responsibility in some cases of misconduct, but it would be wrong to assume that this is a general rule.
Hat tip: Rolf Degen
I think a list would be a poor idea. Honest mistakes are sometimes made. Would these people appear on the same list as serious and repeated offenders?
We already have a shame list: Retraction Watch. It is more entertaining and educational than any website run by a government could be, reveals a lot of background information, and has brought a great deal of shame upon the “sinners”.
I agree.
Also, note that our host here does not denounce people as frauds and charlatans. He merely reports the facts of retractions and official findings of misconduct. A ‘shame list’ would be a liability magnet. Remember the experiences of Science-Fraud.
I think the authors would need an n greater than 5 before they were in a position to offer evidence-based suggestions on how to manage retractions.
As it is, they've just made an off-the-cuff suggestion and bolted it onto a small study. Not that the suggestion is bad (I'd welcome a shame list), but it has nothing to do with the n = 5 study they happen to have done as well.
Yes! And why not also make nice permanent “Previously convicted” tattoos on the foreheads of those who have been in prison?
I will reiterate my call for the construction of a Mount Fraudmore.
We should be positive. Nobody wants blacklists. What science needs should be rephrased as:
Did the author/institution/editor/publisher/COPE Do_the_Right_Thing in cases when evidence for misconduct (data manipulation/fabrication and/or plagiarism) was presented?
Then, the list would show who Did_the_Right_Thing and who did not.
Such a state of affairs would help the above-mentioned parties come clean, if they are sincere, and would point out the really bad guys: the fraudsters and those who cover up for them.
For more, see my comments about the Transparency Index.
Possible serious problems for “Nature” and other leading journals: new allegations (pertaining to estimates of dinosaur growth) concern possible serious numerical and methodological errors, and a highly uncooperative principal author hiding behind “this work was peer reviewed” and “the original data has been lost” evasions. Will “Nature” require corrections or retractions to improve the scientific record?
http://www.nytimes.com/2013/12/17/science/earth/outsider-challenges-papers-on-growth-of-dinosaurs.html?_r=0
This is an example of NOT Doing_the_Right_Thing by both the author and the editor.
Were COPE, the publisher, and the author's institution informed about this case?
I would like a list. It could be made neutral – perhaps with a column giving different statuses, such as retracted/queried/disputed/withdrawn/republished. Retraction Watch is not easy to use for authors who have many citations to check. For example, one day I would like to check the citations in my doctoral thesis; however, I have 40 pages of references. A list would make it easier. It would also make it easier to decide which citations to include in future papers.
Continuing to submit fraudulent papers after being caught out on earlier ones is inexcusable and should be cause for public banishment from the profession. Much more than just the reputation of a single discipline is at stake.