Doing the right thing: Psychology researchers retract paper three days after learning of coding error

[Photo: Gesine Dreisbach]

We always hesitate to call retraction statements “models” of anything, but this one comes pretty close to being a paragon. 

Psychology researchers in Germany and Scotland have retracted their 2018 paper in Acta Psychologica after learning of a coding error in their work that proved fatal to the results. That much is routine. Remarkable in this case is how the authors lay out what happened next.

The study, “Auditory (dis-)fluency triggers sequential processing adjustments”:

investigated as to whether the challenge to understand speech signals in normal-hearing subjects would also lead to sequential processing adjustments if the processing fluency of the respective auditory signals changes from trial to trial. To that end, we used spoken number words (one to nine) that were either presented with high (clean speech) or low perceptual fluency (i.e., vocoded speech as used in cochlear implants – Experiment 1; speech embedded in multi-speaker babble noise as typically found in bars – Experiment 2). Participants had to judge the spoken number words as smaller or larger than five. Results show that the fluency effect (performance difference between high and low perceptual fluency) in both experiments was smaller following disfluent words. Thus, if it’s hard to understand, you try harder.

According to the notice, which is written from the perspective of the senior author, Gesine Dreisbach, of the University of Regensburg:

On April 17th 2019 a befriended colleague with whom we had shared the material of the experiments (E-Prime files, Matlab files) approached me (GD) at a conference. She informed me (the first/corresponding author had left my lab by the end of March 2019) about a coding error in the script, namely that something was off with the FluencyN−1 coding. It was not immediately apparent whether this coding error had also affected the data analysis of the published data. 

Dreisbach wrote that she returned from the conference and set to work re-analyzing the data:

While the coding of FluencyN was correct, it turned out that the coding of FluencyN−1 was wrong. And the re-analysis showed that the published mean RTs were actually based on this wrong coding. The new analysis including the correct N−1 coding (based on the correct FluencyN coding) no longer showed the significant interactions we had predicted: For Experiment 1, the respective interaction FluencyN × FluencyN−1 pointed in the predicted direction but was now only marginally significant (p = .07). For Experiment 2, unexpected and significant interactions Block × FluencyN and Block × FluencyN × FluencyN−1 occurred. This was due to a significant interaction of FluencyN × FluencyN−1 in the predicted direction in Block 1 that was virtually absent in Block 2 and significantly reversed in Block 3. Obviously, these results (see below for a detailed overview of the re-analysis) are inconclusive and certainly do not provide solid evidence for the predicted and reported interaction FluencyN × FluencyN−1 in the paper. On April 20th 2019, I therefore contacted the Editor-in-Chief Dr. Wim Notebaert and asked for the retraction of the paper. …

The first author of the paper, who programmed the MatLab and E-Prime files and originally analyzed the data, takes full responsibility of the error and all authors regret the publication of the invalid results. We hope that not much damage has been caused (at least the paper has not been cited yet — according to Google Scholar, April 26th 2019). 

That much would be sufficient for a detailed and transparent retraction statement — but the notice continues. Dreisbach and her colleagues lay out the results of their re-analysis, which we encourage you to read. 
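
For readers who don’t work with sequential designs, the class of error at issue, labeling each trial with the wrong preceding-trial condition, is worth spelling out. The actual scripts were E-Prime and Matlab files, which we haven’t seen; purely as a hypothetical sketch, with illustrative column names, correct N−1 coding in Python/pandas might look like this:

```python
import pandas as pd

# Hypothetical trial sequence: 1 = fluent (clean speech), 0 = disfluent
# (vocoded or noise-masked speech). All names here are illustrative only.
trials = pd.DataFrame({"fluency_n": [1, 0, 1, 1, 0, 0, 1]})

# Correct N-1 coding: label each trial with the fluency of the trial that
# preceded it, i.e., shift the current-trial column down by one row.
trials["fluency_n_minus_1"] = trials["fluency_n"].shift(1)

# The first trial has no predecessor and must be excluded before
# computing the FluencyN x FluencyN-1 interaction.
trials = trials.dropna(subset=["fluency_n_minus_1"])
print(trials)
```

One plausible way such a bug arises: a shift applied in the wrong direction, or applied after trials have been reordered or filtered, silently gives every trial the wrong N−1 label while leaving the current-trial (FluencyN) coding intact, which matches the pattern the notice describes.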

Dreisbach told Retraction Watch that:

we all understood immediately that clarity and transparency is the only way to deal with this mistake. 

As asked for by the editor, I tried to provide as much detail as possible. A befriended colleague, whom I also consulted in this matter, then sent me a link to a retraction note from a recent Psych Science [Psychological Science] paper as an example. I think based on that, we then added the (for us self-evident) explicit conclusion that we regret the publication of the data. Other than that, I just tried to provide all the information needed to make transparent how all this happened. 

Dreisbach, who was traveling, said she did not have the citation for the Psychological Science retraction on hand, but the journal has run at least a few that seem to fit the bill, including this one.


4 thoughts on “Doing the right thing: Psychology researchers retract paper three days after learning of coding error”

  1. If I were in their field, I would make sure to always cite any of their relevant papers. These are people you can trust!

  2. Doing the right thing by retracting, and retracting fast. Sure. But you have to wonder why they did not check the data before they submitted the paper. If the raw data are worth spending so much time collecting, surely their processing is worth a check by at least one person beyond the underpaid and overworked student or postdoc who was tasked with coding a computer to get some p-value out of them. Is this not 101?

  3. Agree with Klavs.

    And why should they let the first author take “full responsibility for the error”? The error was simple enough that an outside colleague and the senior author were both able to catch it quickly once they looked at the code.

    If I let something like this get out of my lab (and sure, it’s possible), I would take full responsibility (or at least a substantial fraction of it) for the error, since it reflects poor supervision and project management. I’d never let a trainee take full responsibility.

    To me, project management is all about making sure there are checks and balances for the most important stuff; no single trainee can be completely responsible. Any “transparent” discussion of “how all this happened” would ideally include the steps the co-authors took to validate the code and why those steps failed.

    So I don’t agree with RW that this is a paragon for how a retraction notice should be written.

    1. Hey, it’s great to see two researchers who apparently are infallible. Recent psychological research on human error has shown that (1) true experts (like top surgeons) don’t make fewer mistakes than their less skilled colleagues, but they handle them better; and that (2) errors are most efficiently reduced by creating a climate that lowers the threshold for reporting one’s own errors. Hence, if you really want to make science better, you had better give your respect to colleagues who report their errors (as the RW post rightly intends to do), rather than picking on them, especially if you hide behind an alias.
      “Let him who is without sin among you be the first to cast a stone at her” (John 8:7)
