A grad student finds a ‘typo’ in a psychedelic study’s script that leads to a retraction

Paul Lodder

Sometime after it was published, Paul Lodder, a graduate student at the University of Amsterdam, tried without success to replicate the findings of a 2020 paper in Scientific Reports.

The original article was written by a group led by Rubén Herzog, of the Universidad de Valparaíso in Chile. Titled “A mechanistic model of the neural entropy increase elicited by psychedelic drugs,” the paper purported to help illuminate what happens in the brain under the influence of substances like LSD. 

But the findings of the study wouldn’t replicate. And unlike some researchers who might blow off criticism of their work, or blame the replicators for the failure, Herzog sent Lodder the scripts his team had used.

Lodder found the problem quickly. As Herzog related to Retraction Watch, Lodder (whose schedule has been challenging the past few weeks as we've played phone tag; see the update on this post):

…pointed out an error in the formula that estimated the differential entropy from the Gamma function parameters. After correcting the error in the formula, the results did not hold: with the wrong formula we found both increases and decreases of entropy, and with the corrected function we found only increases, which more closely resembles the empirical results, given that no empirical evidence supports decreases in entropy with psychedelic drugs. 
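For context, the quantity at issue – the differential entropy of a Gamma distribution – has a closed form in the shape and scale parameters, and a one-character slip in that formula can flip whether entropy appears to rise or fall. Here is a minimal illustrative sketch in Python; the function names and the particular "typo" shown are assumptions for illustration, not the team's actual script:

```python
# Illustrative only: the differential entropy of Gamma(shape=k, scale=theta)
# has the closed form  h = k + ln(theta) + ln(Gamma(k)) + (1 - k) * psi(k),
# where psi is the digamma function.
import numpy as np
from scipy.special import gammaln, psi
from scipy.stats import gamma

def gamma_diff_entropy(k, theta):
    """Correct closed-form differential entropy of Gamma(k, theta)."""
    return k + np.log(theta) + gammaln(k) + (1.0 - k) * psi(k)

def gamma_diff_entropy_typo(k, theta):
    """Hypothetical one-character 'typo': the digamma term's sign is flipped."""
    return k + np.log(theta) + gammaln(k) - (1.0 - k) * psi(k)

k, theta = 2.5, 1.3
h = gamma_diff_entropy(k, theta)
# Cross-check the hand-coded formula against an independent implementation --
# a cheap version of the "from scratch" replication that caught the error.
assert np.isclose(h, gamma(a=k, scale=theta).entropy())
print(h, gamma_diff_entropy_typo(k, theta))  # the two values disagree
```

The final assert is the moral of the story: a hand-coded formula checked only by eye can hide this class of error, while a comparison against an independent implementation exposes it immediately.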

And that led to a retraction – and now an opportunity for everyone involved. According to the retraction notice:

After publication, it was brought to the Author’s attention that there was a typo in the script used to calculate the differential entropy. Therefore, the reported entropy estimations are invalid and do not produce the expected increase in entropy when using the neural mass model presented in this Article. The Authors recognise this error and apologise for the confusion it may have caused. The Authors are currently working on a corrected version of the model, which includes the activation of the 5HT2A receptor on both excitatory and inhibitory pools; and will test whether it reproduces the entropy increase.

All Authors agree with this retraction and its wording.

Herzog said the incident underscores the importance of using: 

a more systematic peer-review process of computational codes, in particular in works that are based fundamentally on computational and mathematical methods. Even when my colleagues checked my codes, none picked the error up, so a replication process ‘from scratch’ would be needed to avoid this kind of mistake. 

The opportunity? Herzog added that his group is working on a new version of the paper – which has been cited 14 times, according to Clarivate's Web of Science – with the corrected formula, and plans to submit it as soon as possible. And that gives him a chance to acknowledge Lodder publicly: 

Another good lesson was to include the student who noticed the error as co-author in the new version of the paper; that is, to give scientific credit to people who are dealing with replicating results.

Indeed.

Update, 1130 UTC, Oct. 7, 2022: When originally posted, this story identified the incorrect Paul Lodder. We have edited the first paragraph and the paragraph beginning “The opportunity?” to reflect the correct Lodder, and have replaced the photo. We apologize for the error.


13 thoughts on “A grad student finds a ‘typo’ in a psychedelic study’s script that leads to a retraction”

  1. Feedback: Somehow the timescale of your article feels off?

    It starts with “A few years ago” for Lodder’s replication, yet the paper was published only 24 months ago.

    You also state he was a grad student back then and is now an assistant professor already? Good for him if true, but it feels a bit fast.

    1. The Paul Lodder who participated in the phone tag speaking here: this post references the wrong Paul Lodder, and the timescale is off as well.

      I’m an MSc Artificial Intelligence student at the University of Amsterdam, and as my schedule has indeed been quite challenging, I haven’t yet had a chance to provide the author with my comments. I informed them I was going to do so this week, but it seems they decided to fill in the gaps themselves and made some errors in the process.

      I’ve already informed the author that I’ll get back to them with my comments by the end of this week, after which this erroneous article will hopefully be rectified.

  2. This kind of error might have been discovered if universities adopted ISO research protocols. When I worked for a commercial company, all of my syntax files and scripts had to be checked by a second person, who had to approve them before we used them to produce reports for health care organizations. In my latest job at a university, I challenged (instructed!) all PhD students and postdocs to check my work and search for errors. An error is so easily made and often very hard to detect; one becomes blind to one’s own mistakes. A procedure in which an external or second check is compulsory could be adopted by universities as well. They could hire personnel with this as their only task.

    1. Excellent idea!

      Also, the scripts should be available online, linked from the paper, so that anyone may review them. Post-publication review by a larger pool of interested scientists would greatly improve the quality of the science.

      I too have spotted mathematical errors in papers and have pointed them out. Sadly, only once did an author respond with a correction (spotted and corrected within days of online publication). The rest either ignored me or blew me off.

      1. There’s an irony here: had the journal provided the authors’ scripts online, this graduate student might simply have used those scripts rather than attempting to build from scratch, never identifying the error, and might have propagated it further, e.g. in another paper using the same scripts.

        Not sure what the ultimate lesson is here, but kudos to all involved in trying to sort this out!

        1. I work in hardware verification, and this is something we have to deal with too. When we’re implementing testing code, it’s generally bad practice to use the VHDL as the reference. Instead, you should refer to an external spec when designing the test code – in our case, either the ISA for programmer-visible behavior, or internal documentation for smaller units.

          The goal of course isn’t to completely reimplement the model, but to identify observable and predictable behaviors that it should (or shouldn’t) exhibit, and to write checkers to verify that they always hold true, plus drivers to push the model into interesting states where we suspect bugs lie. (A toy sketch of this checker/driver split follows below.)

          Perhaps there’s an opportunity for some interdisciplinary work here. My verification experience is limited to this particular application, but I’m guessing there exists (or could be developed) a more general study of verification that can be applied to different kinds of work, including verifying paper results.
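          To make that concrete, here is a toy sketch in Python of the checker/driver split (the names are hypothetical and real verification environments use dedicated frameworks; the spec-level property here is simply “a FIFO emits items in the order they were pushed”):

          ```python
          # Toy sketch: the checker encodes a property taken from the external
          # spec, never from the implementation under test, so a bug shared
          # with the implementation cannot hide itself.
          import random
          from collections import deque

          class FifoUnderTest:
              """Stand-in for the device under test (DUT)."""
              def __init__(self):
                  self._buf = deque()
              def push(self, x):
                  self._buf.append(x)
              def pop(self):
                  return self._buf.popleft()

          def run_random_driver(dut, n_ops=1000, seed=0):
              """Driver: push the DUT into varied states with random operations,
              while a scoreboard checks the spec-level ordering property."""
              rng = random.Random(seed)
              scoreboard = deque()  # expected outputs, derived from the spec
              for i in range(n_ops):
                  if scoreboard and rng.random() < 0.5:
                      expected = scoreboard.popleft()
                      got = dut.pop()
                      assert got == expected, f"order violated: {got} != {expected}"
                  else:
                      dut.push(i)
                      scoreboard.append(i)

          run_random_driver(FifoUnderTest())
          ```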

  3. This “retraction” is what good science should be about. Errors happen, but good scientists attempt to identify them. Kudos to all involved in this — it’s ironic that it inhabits the same space in RW with so much sloppy or even downright fraudulent science. It reminds me of the late Efraim Racker who, decades ago, actually picked up problems from his own lab, and “blew the whistle” on himself. Even at the time, I thought this was admirable, and was dismayed when some actually used this as an example of fraud in science as opposed to the self-correction that it was (to be clear, most scientists admired what he did, but some used this as a political club). Maybe RW should place situations like this into a “Correction Watch” when it involves the best scientific behavior. I have contacted a couple authors about truly problematic data, and in all of these cases they attempted to justify the unjustifiable. Lodder and Herzog did it right.

  4. Now I am even more convinced that, of the millions of scientific studies, some leave more questions than correct answers. I am a cataract surgery patient who developed an iritis, and I am still uncertain of its cause (as the medical professionals also seem to be). I understand from one of the professionals that about 30% of cataract surgeries result in iritis. That is a very high percentage.
    I am STILL waiting to get a new refraction.
