Doing the right thing: Scientists reward authors who report their own errors, says study

We’ve always liked to highlight cases in which scientists do the right thing and retract problematic papers themselves, rather than being forced to by editors and publishers. Apparently, according to a new paper by economists and management scholars, scientists reward that sort of behavior, too.

The study by Benjamin Jones of the Kellogg School of Management at Northwestern University and the National Bureau of Economic Research and colleagues, “The Retraction Penalty: Evidence from the Web of Science,” was published yesterday in Scientific Reports, a Nature Publishing Group title.

The authors lay out what they do:

In this paper, we draw on all retraction notices in the Web of Science (WOS) database. We focus on the post-2000 period when WOS indexing of retractions appears relatively complete (see supporting information for detailed discussion of the database) and use the WOS to expand our analysis across the known universe of fields. Our analysis can thus provide a more comprehensive cross-field view of retractions than the existing literature. Most importantly, we examine a new dimension: We analyze the effect of retraction on scientists’ prior work, thus quantifying a potentially critical consequence, and disincentive, for being associated with false scientific results. Our analysis further shows how chain reactions to retraction hinge on whether authors self-report errors.

(Speaking of reporting our own errors, we tweeted yesterday that we’d covered this paper last month as a working paper about which members of a team suffered most when studies were retracted. The new one was actually a precursor to that paper, Jones tells us.)

The subject-by-subject comparisons are illuminating. Retraction was rare across all disciplines, but was incredibly rare in the arts and humanities — 0.01 retractions per 10,000 papers — and the social sciences — 0.02 per 10,000. Biology and medicine had 0.14 retractions per 10,000 papers.

About 22% of retractions were self-reported, while about 71% were not self-reported, with the rest unknown.

Not surprisingly, given previous analyses, the authors found that citations of a retracted paper declined after it was withdrawn. One particular figure from the paper, however, really highlights the difference in citations of an author’s prior papers based on who reported the errors:

[Figure: citation changes to authors’ prior work in the five years after retraction, self-reported retractions (left) vs. retractions reported by others (right)]

The authors conclude:

…retractions can create substantial citations penalties well beyond the retracted paper itself. Citation penalties spread across publication histories, measured both by the temporal distance and the degrees of separation from the retracted paper. These broad citation penalties for an author’s body of work come in those cases, the large majority, where authors do not self-report the problem leading to the retraction. By contrast, self-reporting mistakes is associated with no citation penalty and possibly positive citation benefits among prior work. The lack of citation losses for self-reported retractions may reflect more innocuous or explainable errors, while any tendency toward positive citation reactions in these cases may reflect a reward for correcting one’s own mistakes.

“A reward for correcting one’s own mistakes” — we’re smiling.

9 thoughts on “Doing the right thing: Scientists reward authors who report their own errors, says study”

  1. Was there anything in the study about the effect of self-retraction on the citation of subsequent papers? It might also be interesting to look into whether scientists who self-retract are more likely to have subsequent retractions compared to those who are forced into it. Intuitively speaking, if you acknowledge your mistake you might be more careful in the future.

  2. Does this conclusion not depend on the retracting authors honestly reporting by whom the error was found? For example, I can imagine there may be retractions which merely state “it was discovered that”, when it was a reader or a (non-author) colleague who pointed out the error.

  3. It’s really sad that a blog trying to promote scientific integrity commits the cardinal sin of science reporting: making a wildly exaggerated claim about a paper. Contrary to what the title of the blog posting suggests, the authors do not claim that scientists reward self-retracting authors. They talk about it as a possible explanation for something that might or might not be significant. If even you guys don’t get it right, it’s hardly surprising to see all this ridiculous hype in the media and coming out of academic press departments.

    1. I am confused. The graph, which is flawed to begin with because it shows no control group, indicates two negative trends, at least in the long run. Even though the total penalty for self-reported retractions appears smaller than for retractions reported by others (-5 vs. -12) after 5 years, the fact remains: both cases lead to NEGATIVE VALUES. Moreover, just look at the gradient of the left-hand graph. If you extend that line over the next 5 years, the EXACT opposite of the paper’s conclusions could in fact be said (if you take into consideration the gradient of the right-hand graph). I fail to see how negative values can be interpreted as a reward to authors (a reward from which scientists, exactly?). At least one control group is required: non-retracted papers and how their citations change over time. In that sense, I have to agree with Bernd’s 2nd and 3rd sentences (only). This study is just sensationalism and is almost as bad as the Bohannon paper in Science.

  4. @Bernd is in error. NOT SENSATIONALISM. He does not get it right. I know I’m commenting only 10 years later. The left-side graph (the self-reported or “honest” retraction) shows an increase in citations until the 3-4 year mark after retraction. The significant decline at the 5-year mark can be interpreted as a decline in interest, or a decline in the study’s impact, because by then the work is often considered “old” or even outdated. The trend is thus not linear, for good reasons. The right-side graph is linear, showing a clear decline from year 0 to year 5, for good reasons, interpreted as a penalty: zero uptrend from year 0. This makes sense and is NOT SENSATIONALISM, contrary to what @JATds stated.

    (To @Neuroskeptic: “may” indicates less than absolute confidence, or relative uncertainty, often used in conjunction with notions of likelihood or probability, and is a way of paring back one’s intellectual hubris. It is not a weasel word used to escape suspicion, by any means. You would know that if you knew what statistical analysis that is up to standard looks like.)

    I have lost faith in researchers (I assume those who read RW are researchers and not random nomads) in general, because there are some who cannot read the most basic-looking graphs and express them in their most likely contexts. And yet these “researchers” above possess the highest hubris in any community, online or offline.

    I should commit to being a nihilist about science research and researchers’ understanding of statistics or even the very basic ability of reading graphs. But I’m reasonably confident that all three commenters above are likely overconfident laypeople, if not researchers overconfident about their statistical abilities. At least one of you three is or ought to be a Francesca Gino apprentice.
