Although many scientists fear subjecting their data to replication efforts, dreading the embarrassment they’d feel if their findings couldn’t be repeated, a new study suggests those fears are unfounded.
The paper, published last week in PLOS ONE, found that scientists overestimate how much having non-replicable data will hurt their careers, and that the community values those who are honest about what went wrong.
Specifically, as the authors note:
…the current data suggests that while many are worried about how a failed replication would affect their reputation, it is probably not as bad as they think. Of course, the current data cannot provide evidence that there are no negative effects; just that the negative impact is overestimated. That said, everyone wants to be seen as competent and honest, but failed replications are a part of science. In fact, they are how science moves forward!
First author Adam K. Fetterman at the University of Essex in the UK told us he wasn’t surprised by this finding:
Based on our previous research, and the work of others, we assumed that we would find what we did. Admitting that you were wrong about a finding gives the impression that you are a humble person who is interested in the pursuit of knowledge. Being wrong is a part of science and no one should blame anyone for being wrong. We are all learning as we go and no one is perfect, and very few researchers are sinister in their intentions. We are just trying to do the best we can.
Many scientists’ fears about their reputations likely stem from believing having non-replicable data means they have failed as a scientist, he added:
…we suggest that scientists may be hesitant to embrace replication efforts because it seems like one is admitting to being a bad researcher. That is, not that they would necessarily be accused of committing fraud (though that is possible too), but that it may lead to any type of negative judgement, such as being accused of using questionable research practices, p-hacking, or just being incompetent.
During the study, Fetterman and Kai Sassenberg at the University of Tübingen in Germany presented 281 scientists with hypothetical scenarios, about which they answered a brief questionnaire. Here’s the description of the scenario, from “The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists”:
In two of the scenarios, participants were told to think about a specific finding of their own that they were particularly proud of (self-focused). They then read about how an independent lab had conducted a large-scale replication of that finding, but failed to replicate it. Since the replicators were not successful, they tweaked the methods and ran it again. Again, they were unable to find anything. The participants were then told that the replicators published the failed replication and blogged about it. The replicators’ conclusion was that the effect was likely not true and probably the result of a Type 1 error. Participants were then told to imagine that they posted on social media or a blog one of the following comments: “in light of the evidence, it looks like I was wrong about the effect” (admission) or “I am not sure about the replication study. I still think the effect is real” (no admission).
Two other groups of scientists were presented with the same scenarios, but instead of the story being about their own work, they were asked to imagine it was happening to someone else in their field. As Fetterman told us:
So, in short and everyday terms, we asked a bunch of researchers “How would people judge you if other researchers failed to replicate an effect that you discovered?” and they responded with “not so good”. We asked a second set of researchers “How would you judge a person if other researchers failed to replicate one of the effects they discovered?” and they responded with “not so bad”. The relative difference between the first group of people and the second group indicates that maybe researchers think they’ll be judged worse than they actually would be.
Indeed, the authors found that people did overestimate how much having a non-replicable finding would hurt their careers — people predicted more negative effects of the problematic research when it came from their lab than when it came from someone else’s.
Furthermore, accepting that their data were not replicable seemed to cushion people from the negative impacts, the authors note in the paper:
Significant Wrongness Admission effects on 3 of 5 reputation measures suggest that wrongness admission will not hurt one’s reputation, but may actually have a more positive effect. That is, relative to those in the No Admission conditions, those in the Admission conditions were less negative in their judgments.
However, there are some caveats to note about the experiment, Fetterman added:
First, they were reading hypothetical situations. Therefore, the term “actual” is a bit of a stretch. Second, we didn’t ask the same person how they would be judged and how they would judge another researcher. We had two conditions to avoid influence on a second response from a first response (e.g., “now that I’ve said I wouldn’t judge harshly, I don’t think I would be judged harshly”). Third, importantly, we did not measure people’s judgements before and after a failed replication effort, so we cannot say that such efforts will have zero negative impact on one’s reputation. In fact, the wrongness admission may actually be repairing one’s reputation.
In the “Limitations” section of the paper, the authors note that many of their participants heard about the study via Twitter, and scientists active on social media may be more biased in favor of ongoing replication efforts.
Nevertheless, these findings align with what we’ve seen — and pushed for — in the past: Transparency pays. For instance, a 2013 paper showed that scientists who come forward to retract their own research do not face a so-called citation penalty, or a drop in citations following the event.
In other words, “doing the right thing” is not only the right thing, but not a bad thing for your career.
Fetterman told us he hopes these findings help reduce scientists’ fears about shining a light on their data:
I think it would be best if everyone, on all “sides” of the conversation, exercised a bit more humility. In my view, publishing our research is about publishing the pursuit of knowledge, not publishing what we know to be true. If we are only to publish what we know for sure, there would be no more publications. As such, it is ok if there are things in our journals that are not correct and it is ok to correct them. Everyone from the original researchers to the replicators participated in answering a question. Indeed, the incorrect information led to new conclusions, even if it only told us that the piece of information was wrong. At least we now know and maybe new information will show that we were wrong about that too. That’s science.
It’s not just the researchers submitting their data to replicators who need to embrace humility, Fetterman noted:
We are all learning as we go and no one has all the answers. If it seems like we are going to be punished or mocked or seen as a less competent scientist for being wrong, we will avoid any indication that we are wrong. Therefore, I think it is important for individuals to take into consideration how their comments might be perceived by a person who was excited about her/his finding and wanted to share it with the world. We should probably also not revel in other people’s wrongness. Someday, we will all be wrong about something, too.