Does science need a retraction “shame list”?
The paper, “Analysis and Implications of Retraction Period and Coauthorship of Fraudulent Publications,” by Jong Yong Abdiel Foo and Xin Ji Alan Tan, of Ngee Ann Polytechnic in Singapore, appeared online last week in Accountability in Research: Policies and Quality Assurance.
The authors write that
it would be interesting to understand the association of coauthorship and the retraction process with the objectives of this study set to be: (1) assess the period between a fraudulent works being published and it being retracted for a selected pool of researchers, (2) evaluate its correlation between fraudulent publications and the number of coauthors, (3) discuss the possible use of coauthor(s) as a strategy for publishing fraudulent work in the literature, and (4) possible approach to tighten coauthorship by implicating all coauthor(s) of the work if it is eventually found to be fraudulent.
The obtained results shows that the retraction period is 48.96 ± 32.16 months for the 113 publications affiliated to the 5 studied researchers. There are a total of 180 coauthors with 6.40 ± 3.26 coauthors per researcher’s retracted publication. The linear regression analysis indicates that there is limited correlation (R² = .008) between the citation counts and retraction period. The p value for multiple F-tests to assess the number of coauthors to a fraudulent publication on an interresearcher basis is found to be ranging from < .001 to .458. It is also found that a better correlation (R² = .592) exists between the likelihood of a researcher to involve different individuals for isolated fraudulent publications while only selecting very few to be their frequent coauthors of their mischievous acts. With this study, the possible use of coauthors as a strategy for publishing fraudulent work and a potential approach to tighten coauthorship are discussed.
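For context on that first regression result: R² is the coefficient of determination, so a value of .008 means citation counts explain less than 1% of the variance in retraction period. Here is a minimal sketch of how R² is computed, using made-up numbers of roughly the same shape as the paper's sample, not the authors' actual data:

```python
import numpy as np

def r_squared(x, y):
    # Fit y = a*x + b by least squares, then compute
    # R^2 = 1 - SS_res / SS_tot (the coefficient of determination).
    a, b = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (a * x + b)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical data: 113 papers, citation counts unrelated to
# retraction period (drawn to match the paper's 48.96 +/- 32.16 months).
rng = np.random.default_rng(0)
citations = rng.integers(0, 100, size=113).astype(float)
months = rng.normal(48.96, 32.16, size=113)

print(round(r_squared(citations, months), 3))  # prints a value near zero
```

When two quantities are unrelated, as simulated here, R² lands near zero, which is what the authors report for citations versus retraction period.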
There are a few important caveats to the results, which the authors acknowledge. One is that five researchers probably aren’t a representative sample. Another is that the authors relied completely on PubMed, which, while very powerful, doesn’t actually include all of the retractions for these five researchers. For example, the authors analyze 33 retractions by Boldt, when in fact he has had 79 papers retracted (and journals have promised to retract another nine). The Mori and Reuben counts are also off by about 10% each.
Those discrepancies — which, if we may, a quick check of Retraction Watch would have clarified — dovetail with one of the authors’ proposed ways to decrease misconduct, a “shame list”:
…one possible mean to dampen this unethical thinking is to have a freely available online ‘shame list’ where the full name of the disgraced author and coauthor with their parent organization are included. With this list, it would be easy for institutions, grant agencies, and journal editors to check on any fraudulent history of a given researcher. Presently, free online databases such as PubMed and Google Scholar provide limited information on this. A good starting point would be to incorporate the ‘shame list’ on websites of organizations which have been gaining recognitions in their efforts on publication ethics such as the Committee on Publication Ethics (COPE).
Fang told us:

I would distinguish between (1) the concept of a centralized data repository (‘shame list’) containing the names of individuals found to have committed research misconduct and (2) the concept that all coauthors should be held equally responsible for any paper retracted due to fraud. Both concepts are proposed in this article by Foo and Tan. The former proposal is relatively uncontroversial. The latter constitutes ‘guilt by association’ and is overtly unfair.
Modern scientific practices make it logistically impossible for each coauthor to be able to vouch for the validity of every piece of data in a research article. One must be able to trust one’s collaborators. Not infrequently, research misconduct is exposed by a whistleblower in the same lab, who might also be a coauthor. Should the whistleblower be tarred with the same brush as the miscreant? This would create an even greater disincentive to whistleblowing than currently exists.
The flip side of this, of course, can be found in a Nature editorial from last year:
It is unacceptable for lab heads – who are happy to take the credit for good work – to look at raw data for the first time only when problems in published studies are reported.
Fang also said the sample size is too small, and that
…while it is interesting to look for general trends to try to understand research misconduct, it is also hazardous because each case of misconduct is unique. It is possible that coauthors share responsibility in some cases of misconduct but it would be wrong to assume that this is a general rule.
Hat tip: Rolf Degen