Here at Retraction Watch, we are reminded every day that everybody (including us) makes mistakes — what matters is how you handle yourself when it happens. That’s why we created a “doing the right thing” category, to flag incidents where scientists have owned up to their errors and taken steps to correct them.
We’re not suggesting retractions have no effect on a scientist’s career — a working paper posted last month by the National Bureau of Economic Research found that principal investigators with retracted papers see an average drop of 10% in citations of their other papers, a phenomenon known as a citation penalty. But they face a bigger penalty if the retraction stemmed from misconduct, rather than an honest mistake.
This jibes with research we’ve seen before, which shows the scientific community can be forgiving when researchers own up to their mistakes – notably, a 2013 study that found scientists face no citation penalty if they ask to retract their own papers, rather than forcing the journal or publisher to act.
In the latest paper, Pierre Azoulay at MIT and his co-authors found that scientists who are more “eminent” face a stronger citation penalty after retraction – but only in the case of misconduct or fraud. When a paper dies from mistakes, a scientist’s reputation has no bearing on readers’ reactions.
Here’s more from “The Career Effects of Scandal: Evidence from Scientific Retractions”, published last month:
We find that eminent scientists are more harshly penalized than their less-distinguished peers in the wake of a retraction, but only in cases involving fraud or misconduct. When the retraction event had its source in “honest mistakes,” we find no evidence of differential stigma between high- and low-status faculty members.
Benjamin Jones, author of the 2013 study showing scientists face no citation penalty if they retract their own work, had this reaction:
The Azoulay et al. study breaks important new ground by examining career consequences of major scientific fraud. Prior research on career consequences has examined “single retraction” cases – i.e. where the authors are not shown to be engaged in systematic misconduct. In those cases, the eminent coauthors typically escape blame and the consequences fall on the junior coauthors. Azoulay et al. extend this work to consider bigger scandals, including “multiple retraction” cases where authors are found to be systematic frauds. They find that the consequences for such scandals, by contrast, appear especially severe for eminent authors.
Integrating across these studies, the following story appears to emerge. Eminence is protective when there is uncertainty. The community appears to give the eminent author the benefit of the doubt, with blame accruing toward less established coauthors. But when fraud is systematic, and there is little doubt across numerous papers who is to blame, the eminent can be severely penalized. In that case, the bigger you are, the harder you fall.
Another paper, published May 13 in PLoS ONE, reinforces the community’s ability to forgive. In the experiment, one group of participants is told that a university ethics committee has forced a scientist to retract an article, while a control group is told no such thing about the scientist. Then, some of the participants who heard the accusation are told the scientist is innocent after all, and the article won’t be retracted:
Results revealed that the exoneration effectively worked, in that participants in the exoneration condition had a more favorable attitude (post-exoneration attitude) toward the researcher than did participants in the uncorrected accusation condition. Moreover, the post-exoneration attitude toward the researcher was similar in the exoneration and the control conditions. Finally, in the exoneration condition only, participants’ post-exoneration attitude was more favorable than their pre-exoneration attitude. These findings suggest that an exoneration of an accused researcher restores the researcher’s credibility.
Another recent paper in Perspectives on Psychological Science takes an interesting approach to misconduct — it looks at scientific fraud through the eyes of game theory, represented as the classic prisoner’s dilemma. In the model, individual scientists have much to gain from cheating, such as promotions and papers. But the cost is the erosion of public trust in science, which affects research funding — too much cheating, and no individual scientist benefits.
In this way, author Christoph Engel argues, science isn’t so different from other areas, which have used such models to understand bad behavior and thus find ways to control it:
In this article, I do not claim to be developing new solutions. Rather, I aim at convincing scientists that their seemingly idiosyncratic problem actually shares the main features of a governance problem that has been well understood and for which viable institutional interventions have been found. Scientists can capitalize on this rich body of analysis and institutional design. Solving dilemma problems is not easy, but—as many other dilemmas show—there is hope.
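The dilemma structure Engel invokes can be sketched in a few lines of code. The payoff numbers below are illustrative assumptions of ours, not values from the paper; they simply capture the logic that each scientist gains privately from cheating while the cost, eroded public trust and funding, is shared by the whole community:

```python
# A minimal sketch of the prisoner's-dilemma logic described above.
# All payoff values are hypothetical, chosen only to illustrate the structure.

def payoff(cheats: bool, n_cheaters: int, n_scientists: int) -> float:
    """Payoff for one scientist given how many in the community cheat."""
    personal_gain = 3.0 if cheats else 0.0           # papers, promotions
    # Shared cost: public trust (and thus funding) erodes with each cheater.
    trust_cost = 5.0 * n_cheaters / n_scientists
    return personal_gain - trust_cost

N = 100  # community size (assumed)

# Cheating is individually rational: holding everyone else fixed,
# a lone cheater outdoes an honest scientist...
honest_among_honest = payoff(False, 0, N)   # 0.0
cheat_among_honest = payoff(True, 1, N)     # 2.95

# ...yet if everyone reasons this way, all end up worse off than
# if all had stayed honest.
all_cheat = payoff(True, N, N)              # -2.0
all_honest = payoff(False, 0, N)            # 0.0

assert cheat_among_honest > honest_among_honest   # defection dominates
assert all_cheat < all_honest                     # collective loss
```

Because the per-capita trust cost (5.0/N) is smaller than the personal gain (3.0), defection dominates no matter how many others cheat, which is exactly the governance problem Engel says other fields have learned to manage with institutional interventions.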
Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.