Frequent Retraction Watch fliers rack them up: Stapel hits 51, Lichtenthaler scores number 9


Quick updates on work by two people whose names appear frequently on Retraction Watch: Diederik Stapel and Ulrich Lichtenthaler.

Last month, we reported on the 50th retraction for Stapel. Here’s number 51 in the Journal of Experimental Social Psychology, for “The flexible unconscious: Investigating the judgmental impact of varieties of unaware perception:”

This article has been retracted: please see Elsevier Policy on Article Withdrawal.

This article has been retracted at the request of the Editor-in-Chief.

The Noort Committee concluded, from statistical analyses of author Stapel’s publications and in accordance with the decision rules given in Section 2.4.2 of its report, that there is evidence of fraud in this article.

For full information, please see

The paper has been cited 19 times, according to Thomson Scientific’s Web of Knowledge.

Lichtenthaler’s ninth retraction appears in Technological Forecasting and Social Change. Here’s the notice for “Technology commercialization intelligence: Organizational antecedents and performance consequences,” which was first reported on the Open Innovation blog:

This article has been retracted: please see Elsevier Policy on Article Withdrawal.

This article has been retracted at the request of the authors. The first author proactively informed the editor that the paper uses data from the same dataset as other articles by the first author, including Lichtenthaler, U. and Ernst, H., Research Policy, 36(2007)37–55. The first author takes full responsibility.

Lichtenthaler is the first author of the now-retracted paper, which has been cited 11 times.

Hat tips: Rolf Degen and Philipp Hermanns (who wrote a piece on Lichtenthaler in Telepolis)

10 thoughts on “Frequent Retraction Watch fliers rack them up: Stapel hits 51, Lichtenthaler scores number 9”

  1. I wonder, will any future scientist top the current generation of retractionists? Or will reforms to science and publishing prevent anyone being this bad?

    1. “how do these superstars stay in business” ?

      Never replicate anything, never publish attempted replications, keep a big file drawer hidden, don’t publish any data or articles online, and keep “peer review” to a limited number of people (maybe they can even be your friends). With every rule or system change that could improve science, say things like “but you just have to trust us, without trust there can be no science.” Complain about all these possible “rules” and how bothersome they are, while in the meantime having no problem whatsoever with a mandatory APA publication guideline rule book about super-important things like when to use single or double quotation marks. And have journals and institutions that simply allow all this. Something like that, perhaps…

    2. I wonder if Stapel also gave lectures to students and talked about his own research. That must have been weird for him, I would think: talking about research you know is fake, keeping a straight face while seriously discussing all its details and conclusions to these kids.

      The same would go for conferences and the like. It would be so absurd and ridiculous that the whole thing would just become extremely funny in a way. I wish Stapel would write an English version of his book, or someone would translate the Dutch version.

      1. I just came across this interview with Stapel in which he says something about some of this:

        “What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman. I am on the road. People are on the road with their talk. With the same talk. It’s like a circus.”


        “Several times in our conversation, Stapel alluded to having a fuzzy, postmodernist relationship with the truth, which he agreed served as a convenient fog for his wrongdoings. “It’s hard to know the truth,” he said. “When somebody says, ‘I love you,’ how do I know what it really means?” At the time, the Netherlands would soon be celebrating the arrival of St. Nicholas, and the younger of his two daughters sat down by the fireplace to sing a traditional Dutch song welcoming St. Nick. Stapel remarked to me that children her age, which was 10, knew that St. Nick wasn’t really going to come down the chimney. “But they like to believe it anyway, because it assures them of presents,” he told me with a wink.”


        “While there, Stapel began testing the idea that priming could affect people without their being aware of it. He devised several experiments in which subjects sat in front of a computer screen on which a word or an image was flashed for one-tenth of a second — making it difficult for the participants to register the images in their conscious minds. The subjects were then tested on a task to determine if the priming had an effect. (…) The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me. (…) Doing the analysis, Stapel at first ended up getting a bigger difference between the two conditions than was ideal. He went back and tweaked the numbers again. It took a few hours of trial and error, spread out over a few days, to get the data just right.

        He said he felt both terrible and relieved. The results were published in The Journal of Personality and Social Psychology in 2004. “I realized — hey, we can do this,” he told me”


        Well, at least he had a good time doing it, I hope. Earning a nice salary, flying off to nice places for conferences (or “selling your stories”?), and enjoying teaching and closely supervising adolescents/young adults and shaping their minds and what not. Awesome way to make a living; who cares about contributing to science or acting in a scientific way/being an actual scientist…

        He did at least one thing right: voluntarily giving up his PhD title, stating that his “behavior of the past years are inconsistent with the duties associated with the doctorate” (source:

        It has become quite apparent that some (social) psychology “professors” maybe aren’t really professors at all, let alone have any mathematical, statistical, and methodological knowledge, or use scientific language carefully and appropriately, or have any idea how to use logical argumentation correctly, or generally behave like scientists. Maybe those “professors” or “scientists” are just trying to sell their stories: laughable and superficial stories based on flawed “empirical data” which almost never gets replicated or tested outside the safe confines of their ingenious “labs” and freely available student populations…

        It’s just funny. Maybe they can be re-educated to learn how to do a real job like building a bridge or doing some drywalling.

        1. Scientists being more like businessmen or -women, selling their stories and advertising their own research, is of less importance in (social) psychology than in other types of research, I think. Nobody takes the time to really investigate things thoroughly before trying to get a result into the journals. This “publish or perish” issue creates much bigger problems in, for instance, cancer-related research.

          They have tried to replicate the findings of 53 landmark cancer studies and could only replicate 6. And apparently, mathematical, statistical, and methodological issues are also present in the life sciences.

  2. It might be interesting to think about the following:

    -it seems like (social) psychology rarely engages in replications in general, let alone publishes them

    -the Levelt report mentions that “sloppy science” (so leaving aside fabricating data in the strictest sense) was found in a large part of Stapel’s publications

    -this “sloppy science” could lead to a higher probability of false-positives/ non-replicable findings

    If the three points above hold any ground, could the net result of fabricating data (like Stapel did) essentially boil down to the same thing as the possible cumulative effect of engaging in practices like these?

    If you have things like verification bias (“Verification bias refers to something more serious: the use of research procedures in such a way as to ‘repress’ negative results by some means.”) present in your research practices, you might as well make up data: maybe the net result essentially boils down to the same thing, and it just saves you a lot of time, money, and effort. Maybe Stapel was just very economical in a sense, and simply ‘cut out the middle man’.
