We always hesitate to call retraction statements “models” of anything, but this one comes pretty close to being a paragon.
Psychology researchers in Germany and Scotland have retracted their 2018 paper in Acta Psychologica after learning of a coding error in their work that proved fatal to the results. That much is routine. Remarkable in this case is how the authors lay out what happened next.
The study, “Auditory (dis-)fluency triggers sequential processing adjustments”:
investigated as to whether the challenge to understand speech signals in normal-hearing subjects would also lead to sequential processing adjustments if the processing fluency of the respective auditory signals changes from trial to trial. To that end, we used spoken number words (one to nine) that were either presented with high (clean speech) or low perceptual fluency (i.e., vocoded speech as used in cochlear implants-Experiment 1; speech embedded in multi-speaker babble noise as typically found in bars-Experiment 2). Participants had to judge the spoken number words as smaller or larger than five. Results show that the fluency effect (performance difference between high and low perceptual fluency) in both experiments was smaller following disfluent words. Thus, if it’s hard to understand, you try harder.
On April 17th 2019 a befriended colleague with whom we had shared the material of the experiments (E-Prime files, Matlab files) approached me (GD) at a conference. She informed me (the first/corresponding author had left my lab by the end of March 2019) about a coding error in the script, namely that something was off with the FluencyN−1 coding. It was not immediately apparent whether this coding error had also affected the data analysis of the published data.
Dreisbach wrote that she returned from the conference and set to work re-analyzing the data:
While the coding of FluencyN was correct, it turned out that the coding of FluencyN−1 was wrong. And the re-analysis showed that the published mean RTs were actually based on this wrong coding. The new analysis including the correct N−1 coding (based on the correct FluencyN coding) no longer showed the significant interactions we had predicted: For Experiment 1, the respective interaction FluencyN × FluencyN−1 pointed in the predicted direction but was now only marginally significant (p = .07). For Experiment 2, unexpected and significant interactions Block × FluencyN and Block × FluencyN × FluencyN−1 occurred. This was due to a significant interaction of FluencyN × FluencyN−1 in the predicted direction in Block 1 that was virtually absent in Block 2 and significantly reversed in Block 3. Obviously, these results (see below for a detailed overview of the re-analysis) are inconclusive and certainly do not provide solid evidence for the predicted and reported interaction FluencyN × FluencyN−1 in the paper. On April 20th 2019, I therefore contacted the Editor-in-Chief Dr. Wim Notebaert and asked for the retraction of the paper. …
The first author of the paper, who programmed the MatLab and E-Prime files and originally analyzed the data, takes full responsibility of the error and all authors regret the publication of the invalid results. We hope that not much damage has been caused (at least the paper has not been cited yet — according to Google Scholar, April 26th 2019).
That much would be sufficient for a detailed and transparent retraction statement — but the notice continues. Dreisbach and her colleagues lay out the results of their re-analysis, which we encourage you to read.
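For readers unfamiliar with this kind of sequential analysis, here is a minimal sketch, in Python rather than the authors' actual MatLab/E-Prime scripts, of what "FluencyN−1 coding" means: each trial is tagged not only with its own fluency condition but with the condition of the trial that preceded it. The function name and labels below are hypothetical illustrations, not the authors' code; the point is simply that the N−1 code is a one-position shift of the trial-N labels, so any misalignment in that shift silently assigns trials to the wrong preceding-trial condition.

```python
def code_previous_trial(fluency):
    """Return the N-1 (previous-trial) fluency code for each trial.

    fluency: per-trial labels in presentation order, e.g. "high" or "low".
    The first trial has no predecessor, so it is coded None and would
    typically be excluded from a FluencyN x FluencyN-1 analysis.
    """
    # Correct N-1 coding: shift the trial-N labels one position forward.
    return [None] + fluency[:-1]

trials = ["high", "low", "low", "high", "low"]
print(code_previous_trial(trials))
# -> [None, 'high', 'low', 'low', 'high']
```

An error in this step need not touch the trial-N coding at all, which is consistent with the notice's observation that FluencyN was correct while FluencyN−1 was not.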
Dreisbach told Retraction Watch that:
we all understood immediately that clarity and transparency is the only way to deal with this mistake.
As asked for by the editor, I tried to provide as much detail as possible. A befriended colleague, whom I also consulted in this matter, then sent me a link to a retraction note from a recent Psych Science [Psychological Science] paper as an example. I think based on that, we then added the (for us self-evident) explicit conclusion that we regret the publication of the data. Other than that, I just tried to provide all the information needed to make transparent how all this happened.
Dreisbach, who was traveling, said she did not have the citation for the Psychological Science retraction on hand, but the journal has run at least a few that seem to fit the bill, including this one.