‘In hindsight the mistake was quite stupid’: Authors retract paper on stroke

File this under “doing the right thing”: A group of stroke researchers in Germany have retracted a paper they published earlier this year after discovering, shortly after publication, an error in their work that doomed the findings.

Julian Klingbeil, of the Department of Neurology at the University of Leipzig Medical Center, and his colleagues had been looking at how the location of brain lesions left behind by cerebral strokes was associated with the onset of depression after the attacks. According to the study, “Association of Lesion Location and Depressive Symptoms Poststroke”:

Lesions in the left ventrolateral prefrontal cortex increase the risk of depressive symptoms 6 months poststroke. Lesions within the right hemisphere are unrelated to depressive symptoms. Recognition of left frontal lesions as a risk factor should help in the early diagnosis of poststroke depression through better risk stratification. The results are in line with evidence from functional imaging and noninvasive brain stimulation in patients without focal brain damage indicating that dysfunction in the left lateral prefrontal cortex contributes to depressive disorders.

The article received attention on social media, with Altmetric logging at least 50 tweets about the paper.

But Klingbeil’s group said that, after publication of their paper, which appeared in the journal Stroke in January, they identified a fatal mistake in their analysis.  

The retraction notice states:

After publication, the authors identified a serious mistake in the analysis that invalidates all reported lesion-symptom associations. Specifically, the authors discovered that lesion images and depression scores were incorrectly merged via a pseudonymized identifier resulting in a random assignment of behavior and lesions. Because this error occurred before 53 patients were excluded due to missing follow-ups, the lesion characteristics on the group level also slightly changed. As these errors significantly affect the conclusions of the original article, the authors have requested to retract the article.

Klingbeil, who described the experience as “quite a nightmare,” told us: 

we discovered the error in further analyses of our data and soon realized that the error was severe. But it was by chance that we discovered it at all. We then repeated the analyses, which completely changed our results, and contacted the editors.

In hindsight the mistake was quite stupid. We store all data in databases, which are secure, but to use the statistical tools we had to export them into Excel – which resulted in the mistake when some columns of an Excel-sheet were resorted.
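
For readers curious how a script-based merge avoids this failure mode, here is a minimal sketch (not the authors’ actual pipeline; the file and column names are hypothetical) showing how joining the two tables on a shared pseudonymized identifier keeps lesion data and behavior paired even if either file is re-sorted:

```python
# Minimal sketch (hypothetical file and column names): merge lesion data and
# depression scores on a shared pseudonymized ID instead of pasting sorted
# columns side by side in a spreadsheet.
import pandas as pd

lesions = pd.read_csv("lesions.csv")            # columns: patient_id, lesion_map, ...
scores = pd.read_csv("depression_scores.csv")   # columns: patient_id, score_6mo, ...

# A key-based merge pairs rows by patient_id regardless of row order;
# validate="one_to_one" raises an error if any ID appears more than once.
merged = lesions.merge(scores, on="patient_id", how="inner", validate="one_to_one")

# Sanity check: every patient should appear exactly once in the merged table.
assert merged["patient_id"].is_unique
```

Because rows are matched by identifier rather than by position, re-sorting either file, or excluding the 53 patients without follow-up before or after the merge, cannot scramble which depression score goes with which lesion.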


6 thoughts on “‘In hindsight the mistake was quite stupid’: Authors retract paper on stroke”

  1. So maybe Excel, although easy, is a dangerous tool to use for big statistical analyses?
    This is not the only paper that has cratered because of Excel weirdness the authors were unaware of.

  2. Using a spreadsheet for data analysis is prone to error, hard to document, and probably not reproducible. While spreadsheets can be used for prototyping an algorithm, the final analysis should be performed with software written against a standard statistical package, where the package versions are documented and the software and modifications are kept in a source code repository.

    1. “to use the statistical tools we had to export them into Excel”.
      The authors agree with you that analysis in Excel is a poor choice. Unfortunately, it is often the easiest format to move data files between platforms, which can lead to this issue.

  3. Why does everyone condemn Excel? It is quite powerful for statistical analyses, and many studies have shown that its results are equal to those of fancy statistical software. Probably the authors did not clean the data (the most important step before any analysis is attempted) before running the test.

    1. Because, as others have already mentioned, data manipulation in a spreadsheet is not repeatable, and manual editing (data cleaning, especially sorting and copy/paste) can introduce errors that are untraceable and/or undetectable. Excel has a very limited capability for analysis compared to other statistical packages, little or no means to assess model fit, and nothing to warn a user they may be applying the wrong methods.

      I can and do use Excel for certain limited tasks, but it is the wrong tool for most scientific analysis.

  4. As a Biostatistician, this is a common error in Excel that I have been warning clients about for 20 years. It is difficult to detect, and seems nearly certain to slip past undetected in a few cases.
