It makes sense that scientists would adopt a sort of “buyer beware” attitude towards fraud — if researchers choose to collaborate with someone who’s been found guilty of some type of misconduct, their reputation among their peers might take a hit. But what about people who work with someone who is later convicted of misconduct — do they pay a price, as well? Yes, according to a preprint published recently by Katrin Hussinger and Maikel Pellens at the Centre for European Economic Research. We spoke with Hussinger and Pellens about how the “reputational damage” of misconduct can spread to prior collaborators.
RW: It’s not a surprise to think that people who collaborate with a known fraudster might see some impact, but were you surprised to see that people who worked with a “fraudster” in the past were potentially affected?
Maikel Pellens and Katrin Hussinger: This result might indeed seem surprising at first glance. Why would scientists be held responsible for the actions of their collaborators, when they could not have been aware of any upcoming issues when starting a collaboration? However, it is not that surprising if we consider the extent to which scientists have to rely on trust. Science is becoming ever more complex, and scientists do not have the resources, time, or incentives to personally validate each piece of prior research they need for their own work. Therefore, scientists need to rely on heuristics such as reputation to assess the work of their colleagues.
To avoid building on invalid work, and to protect their own reputation, researchers shy away from previous insights that have the slightest chance of being tainted. Associating with a fraudster, be it knowingly or unknowingly, could be interpreted by others as an indication of bad ethics. That in turn might lead to a worse evaluation of the quality of associated scientists’ research. In addition, if scientists see that peers shy away from a researcher’s prior work, they may believe that these colleagues have more information about the honesty of the researcher in question and follow their example, whether or not the initial distrust was justified.
From that perspective, we interpret our results as an overly cautious reading of a noisy quality indicator: the reputational damage of misconduct ripples out to prior collaborators. This phenomenon, labelled stigmatization by mere association, has been documented before in other contexts.
RW: You note: “The result suggests that scientific misconduct generates large indirect costs in the form of mistrust against a wider range of research findings than was previously assumed.” Can you say more about that?
MP and KH: Previous research found that misconduct cases damage the reputation of the misconducting scientist(s) and their co-authors, and lower the attractiveness of the entire research line. We show that these effects spread even to prior collaborators who are clearly not related to the misconduct. To us, this represents an unwarranted and wasteful disregard of valid research findings, which slows down progress and wastes resources, as scientists shy away from valid research results in which they no longer believe. The public money used to fund honest research that becomes mistrusted is also clearly wasted.
RW: Do your findings show a correlative or a causative relationship between the drop in citations and a previous association with someone who committed misconduct? Do you have any ideas as to what else might explain the relationship, such as a change of interest in the field, etc?
MP and KH: We can interpret our results as causal, as long as we believe the assumptions of our statistical models. Our approach relies on a control group and compares prior collaborators of fraudsters to collaborators of equally eminent, but honest, researchers. The scientific community cites both groups’ work similarly in the years before the misconduct was revealed, which means that they are comparable. A change of interest in the field should affect both the prior collaborator of the fraudster and the control scientist in the same way, as we compare scientists active in the same broad field. We can also rule out that our results are driven by the possibility that the fraudulent scientists are publishing less or dropping out of research altogether. Additionally, we exclude any direct negative spillovers from the fraudulent scientist by construction, as we disregard papers co-authored with them, and exclude co-authors of affected work from the analysis.
RW: You note that collaborators experience a drop in citations starting one year before the misconduct is revealed. You suggest this is “likely explained by the presence of rumours of misconduct during the investigation.” But in our experience, investigations are notoriously secretive, to protect researchers’ reputation in case there is no evidence of misconduct. So how could the larger community be aware of what’s going on?
MP and KH: Indeed, secrecy is an important part of misconduct investigations, and rightfully so. But despite best efforts, it is imaginable that some sense that something is amiss might leak out to the wider community. That might already suffice, considering that we believe this is about heuristics. Misconduct allegations also seem to play out regularly in public. Retraction Watch has, in that sense, documented numerous public statements of concern by journal editors, public skepticism about scientific results, and so on. All of these might occur after, but also before, a formal and confidential investigation of misconduct.
RW: Do you think it’s unfair for researchers to mistrust someone who collaborated in the past with a scientist who ends up committing misconduct?
MP and KH: Of course, it is not fair that prior collaborators of scientists who end up committing misconduct are mistrusted. However, that does not mean that it cannot be rational from the perspective of the scientific community. As we argued above, scientists cannot easily assess the quality of their peers’ work directly and therefore need to rely on heuristics to determine trustworthiness. Given the high costs of pursuing potentially shaky or invalid research, it is no surprise that we see scientists behaving conservatively, not wanting to rely on science whose validity is even slightly in doubt.