New Dutch psychology scandal? Inquiry cites data manipulation, calls for retraction

The University of Amsterdam has called for the retraction of a 2011 paper by two psychology researchers after a school investigation concluded that the article contained bogus data, the Dutch press are reporting.

The paper, “Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought,” was written by Jens Förster and Markus Denzler and published in Social Psychological & Personality Science. It purported to find that:

Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Throughout 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

But according to this article in the NRC Handelsblad (with an assist from Google Translate):

A scientific article by Jens Förster, professor of social psychology at the University of Amsterdam (UvA), and his colleague Markus Denzler should be withdrawn because of manipulation of research data. This was decided by the Executive Board of the University of Amsterdam on the advice of the National Board for Research Integrity (LOWI), NRC Handelsblad writes today.

In the anonymized decision, the names of Förster and Denzler are not mentioned; they are known to NRC Handelsblad from the complaint (which is in the newspaper's possession). Following a complaint about three articles by Förster (two by him alone, from 2009 and 2011, the third from 2012 with Denzler), the UvA appointed an integrity committee in September 2012. According to the complaint, the probability of results this clean was 1 in 508 quintillion. Also notable was that none of the 2,284 participants dropped out, which is exceptional in psychological experiments. The complainant asked Förster for the raw data from the 2012 study, but these could not be provided.

The NRC reports that the inquiry found:

The conclusion that the research data must have been manipulated is considered unavoidable […] On this basis, and based also on the inadequate justification of the data collection and the original data, a violation of academic integrity has taken place.

The Netherlands, of course, is also home to Dirk Smeesters and Diederik Stapel.

Update 4/29/14, 2:10 p.m. Eastern: Jens Förster last year received funding from the prestigious Alexander von Humboldt Foundation. According to this blurb:

Jens Förster is one of the world’s most productive and distinguished experimental psychologists in the field of social cognition and motivation. His work and concepts on areas like the regulation of approach and avoidance, distance, emotion and cognition, or on the situational factors influencing creativity and endurance have had an impact on economic, social and medical themes. At [Ruhr-Universität Bochum] Jens Förster would continue developing the research focus on psychology, especially business and social psychology, and steer it to the head of the international field. For this purpose, a Centre of Self-Regulation is also planned. He would cooperate with clinical psychologist and Alexander von Humboldt Professor Jürgen Margraf who is already in Bochum.

The award is worth as much as 5 million euros.

Please see an update on this post.

28 thoughts on “New Dutch psychology scandal? Inquiry cites data manipulation, calls for retraction”

  1. I hope that this will not escalate, as this could get ugly for the field of psychology. Jens Förster, a German, is a bigger name than Stapel ever was. He has repeatedly been portrayed in the German media, not least because of his second calling as a singer and cabaret artist, and he has published an enormous number of books, studies and review papers, all high quality stuff.

    1. “According to the complaint, the risk of such fine results 1 to 508 trillion” presumably means that someone has complained that the chances of the data in this paper being so ‘fine’ were 1 in 508 trillion – that sounds an awful lot like the argument that Uri Simonsohn used to bring down Smeesters.

      1. Apparently it wasn’t that high quality, given that the data could not be handed over:

        “The complainant had asked Förster for the original data from the 2012 study, but these could not be provided.”

        I would think that caring for one’s data, and being able to show it upon request, could be considered as part of a researcher’s responsibility and as part of performing “high quality” research.

        1. The more I read about psychological science the more it makes me laugh. It seems so amateuristic. How can distinguished researchers not keep their data from a study from only 2 years ago? Or why can’t they show the data? And this is supposed to be a science and to be taken seriously?

          1. Pointing to other fields where problems might very well exist is no reason not to be able to speak about problems in psychological science, I would think. My use of the word “amateuristic” may be harsh, but I stand by it. For me it is echoed in some parts of what Förster states in his reply (see below):

            “The only thing that can be held against me is the dumping of questionnaires (that by the way were older than 5 years and were all coded in the existing data files) because I moved to a much smaller office. I regretted this several times in front of the commissions. However, this was suggested by a colleague who knew the Dutch standards with respect to archiving. I have to mention that all this happened before we learned that Diederik Stapel had invented many of his data sets. This was a time of mutual trust and the general norm was: ‘if you have the data in your computer, this is more than enough.’”

            &

            “After the big scandal three years ago, many things in psychological research have changed, and this is a good thing. We had to develop new standards for archiving, and for conducting our research in the most transparent manner. At UvA we have now the strictest rules one can imagine for conducting, analyzing, and archiving, and this is a good thing.”

            For me it is strange, and amateuristic, that the Stapel case was/is needed to come to these new standards.

  2. This revelation occurs at a bad time for Förster, write the Dutch media. He is supposed to start as “Humboldt professor” on June 1, and he was awarded five million euros to do research at a German university for the next five years. He is also supposed to cooperate with Jürgen Margraf – who is the President of the German Society for Psychology and as such the highest-ranking German psychologist.

  3. As the Google translation is not ideal, please find below a quick translation of the article in the NRC Handelsblad. I cannot attest to the accuracy of the translation, but it’s likely better than what Google does. There is still an ambiguity in the translation of the Dutch ‘triljoen’, which officially is 10^(18), but which often gets translated as a trillion, 10^(12). The quotations are from the original report, which contains an English translation.

    ———–
    University of Amsterdam professor manipulated research data

    A scientific paper by Jens Förster, professor of social psychology at the University of Amsterdam (UvA), and his colleague and co-author Markus Denzler should be withdrawn because of data manipulation. That is the decision of the Executive Board of the UvA, following advice from the National Board for Research Integrity (LOWI), as is reported by the NRC Handelsblad today.
    The names of Förster and Denzler are not mentioned in the anonymized decision. Those are known to the NRC Handelsblad from the complaint (in possession of the newspaper). The UvA appointed an integrity commission following a complaint regarding three papers authored by Förster (two single-author papers from 2009 and 2011, a third from 2012 with Denzler as co-author). According to the complaint the probability of the very handsome results was 1 in 508 quintillion. Also it was remarkable that none of the 2,284 participants quit the study, which is exceptional for psychological experiments. The complainant had asked Förster for the original data from the 2012 study, but these could not be provided.

    INTEGRITY COMMITTEE ADVISES TO MAKE CONCERNS PUBLIC
    The integrity committee, presided over by Ton Hol, law professor at the University of Utrecht, recommended publication of a ‘letter of concern’ in the journals that published Förster’s papers containing the challenged data. The complainant, finding this insufficient, appealed to the LOWI.
    The LOWI came to a much harsher judgment of the 2012 paper, published in Social Psychological and Personality Science. The website of the association of Dutch universities, where the UvA decision was published, states:

    “The conclusion that research data must have been manipulated is considered unavoidable; the diversity found in the scores of the control group is so improbably small that this cannot be explained by sloppy science or QRP (questionable research practices); intervention must have taken place in the presentation of the results of the 19 experiments described in the 2012 article. Based on this and based also on the inadequate explanation regarding the data set and the original data, a violation of academic integrity can be said to have taken place;”

    Förster “could or should have known that the results (‘samenhangen’) presented in the 2012 article had been adjusted by a human hand.” The decision published on the Association of Universities website does not contain any information concerning the two other papers, although they contain identical unusual data, according to the complainant.
    ————-

    1. Thanks for the translation.

      Unrealistically low within-group standard deviations are exactly what got Smeesters caught. I wonder if this case was reported at the same time as Smeesters but took longer to come to fruition…? Quite possible, given that in this case there were two investigations, the first one and then an appeal…
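
      For readers curious about the statistical idea: below is a minimal simulation sketch of such an "excessive consistency" check, assuming normally distributed scores. This is an illustration only, not the complainant's or Simonsohn's actual method, and all the numbers in it are made up.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def consistency_pvalue(reported_sds, n, sims=100_000):
          """Simulated probability that k independent groups of size n
          show standard deviations at least as similar as those reported.
          A tiny value suggests the groups are 'too consistent' to be
          independent samples."""
          k = len(reported_sds)
          observed_spread = np.std(reported_sds)
          sigma = np.mean(reported_sds)  # simplifying assumption
          samples = rng.normal(0.0, sigma, size=(sims, k, n))
          group_sds = samples.std(axis=2, ddof=1)  # SD of each simulated group
          sim_spread = group_sds.std(axis=1)       # spread of SDs within one sim
          return float((sim_spread <= observed_spread).mean())

      # Hypothetical SDs for five control groups of 20 participants each:
      print(consistency_pvalue([2.01, 2.03, 1.99, 2.02, 2.00], n=20))
      ```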

  4. Would it be interesting to try and replicate the results? Doesn’t seem that hard to do. Or is that a waste of resources?

  5. According to the APA, one has to keep the original data for at least five years AFTER publication. Institutions where the research is conducted have a responsibility in this respect – all too often they argue there is no space for archiving. Keeping the original data is a necessity for later reanalyses, e.g. for meta-analyses.

    Among other obvious issues in Jens Förster’s case, I find it troubling that editors and reviewers of prestigious journals like JEP: General and JPSP were not alerted to obvious issues with the data or reporting, such as not a single case of missing data or dropout in hundreds of questionnaires filled in by undergraduates.

    1. “III.3 Raw research data are stored for at least five years”

      is stated on page 7 of the 2012 version of ‘The Netherlands Code of Conduct for Scientific Practice’:
      http://www.uu.nl/SiteCollectionDocuments/The%20Netherlands%20Code%20of%20Conduct%20for%20Scientific%20Practice%202012.pdf

      I am unable to find precise details in the Code on whether this period of at least 5 years starts at the moment the raw data are collected, or at the moment an article using the raw data is published. The bold part below ‘III. Verifiability’ might indicate that the period of at least 5 years starts when the research results are published. Please also read the ‘Preambule’ of the Code for some general remarks.

      This period of at least 5 years is also listed in the 2004 version of the Code. I am unable to find an English translation of the 2004 version, but http://www.wageningenur.nl/upload_mm/0/5/a/73e265ee-cc3d-4e61-a191-925169b9129f_WageningseGedragscodeWetenschapsbeoefening.pdf gives the 2004 version in Dutch (the white parts; the grey parts were added by Wageningen University). Page 9 states: “De bewaartermijn van ruwe onderzoeksgegevens is minimaal vijf jaar” (“The retention period for raw research data is at least five years”).

      1. It doesn’t make sense for the 5-year period to start after collection, because fellow scientists are unlikely to request the data before publication. So if it takes five years to get a paper based on some research data published, then no one reading the publication would be able to request the data, assuming they are deleted after the mandatory five-year period.

      2. Both journals in question (the Journal of Experimental Psychology: General and the Journal of Personality and Social Psychology) are published by the American Psychological Association (APA). “APA expects authors to have their data available throughout the editorial review process and for at least 5 years after the date of publication.” (http://www.apa.org/pubs/authors/instructions.aspx?item=9). This applies to publications in their journals, among many other principles from the APA Ethics Code.

  6. All statistical analysis of social psych experiments should be done blindly. Provide all the raw data, require a different group to analyze it.
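
    One minimal sketch of how such blinding could work, assuming the data sit in a pandas DataFrame (the function and column names here are hypothetical, not any journal's standard): scramble the condition labels before the data reach the analyst, and keep the unblinding key with a neutral custodian.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)

    def blind_conditions(df: pd.DataFrame, condition_col: str = "condition"):
        """Return a blinded copy of the data plus the unblinding key.
        The analyst sees only neutral labels such as 'group_0'."""
        labels = list(df[condition_col].unique())
        masked = rng.permutation([f"group_{i}" for i in range(len(labels))])
        key = dict(zip(labels, masked))
        blinded = df.copy()
        blinded[condition_col] = df[condition_col].map(key)
        return blinded, key

    # Hypothetical usage: the analyst receives `blinded` only;
    # `key` stays behind for unblinding after the analysis plan is fixed.
    # blinded, key = blind_conditions(raw_data)
    ```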

  7. I’m trying to understand the numbers of studies and participants Förster mentions in his letter posted at http://www.socolab.de/main.php?id=66. It seems like it took 9 years and 30 batteries of 1,800 participants each to collect the data mentioned in the 2012 SPPS paper. This would amount to some 54,000 participants. This can’t be right, can it? The 10 experiments reported in the 2012 SPPS paper each included (exactly) 60 participants, 600 in total. What happened to the data of the 53,400 other participants?

  8. It is also intriguing that the authors were apparently able to collect the data for the paper without any funding, as the declaration at the end of the article says: “The author(s) received no financial support for the research, authorship, and/or publication of this article.”

    1. Dear Rene,
      Sorry, can you clarify how you arrive at 54,000 participants for the 2012 SPPS paper?
      Many thanks!

  9. In the letter posted at http://www.socolab.de/main.php?id=66 Förster writes: “120 participants were typically invited for a session of 2 hours that could include up to 15 different experiments (some of them obviously very short, others longer). This gives you 120 X 15 = 1800 participants.” and later “If you organize your lab efficiently, you would need 2-3 weeks to complete this “experimental battery”. We did approximately 30 of such batteries during my time in Bremen and did many more other studies.” 30 times 1,800 is 54,000.

    1. Thanks. But this does not mean that 54,000 subjects were collected for the 2012 SPPS paper, but “just” during Förster’s time in Bremen.

  10. Indeed. The question then remains: how many participants were run with similar procedures in studies not reported because the “data did not confirm the hypothesis”?

  11. After some reordering:
    1. “We had 12 computer stations in Bremen, we used to test participants in parallel.”
    2. [In total] “120 participants were typically invited for a session of 2 hours that could include up to 15 different experiments.”
    3. “If you organize your lab efficiently, you would need 2-3 weeks to complete this “experimental battery”.
    4. “If you only need 60 participants this doubles the number of studies.”
    5. “This gives you 120 X 15 = 1800 participants.” NO.

    You have 120 participants who between them completed 120 X 15 = 1800 experiment-sessions. OK? Please correct me if I am wrong (see the arithmetic sketch below).

    “We also had many rooms, such as classrooms or lecture halls that could be used for doing paper and pencil studies or studies with laptops.”
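
    A quick back-of-the-envelope check of this correction in Python; all figures come from the quotes above, and the distinction it draws (people vs. person-experiment pairs) is the point of the thread:

    ```python
    # Figures quoted from Förster's letter:
    participants_per_battery = 120  # people invited per 2-hour session
    experiments_per_session = 15    # up to 15 experiments per person
    batteries = 30                  # "approximately 30" batteries in Bremen

    # 120 x 15 = 1,800 counts person-experiment pairs, not people:
    pairs_per_battery = participants_per_battery * experiments_per_session
    print(pairs_per_battery)                      # 1800

    print(batteries * participants_per_battery)   # 3600 unique participants
    print(batteries * pairs_per_battery)          # 54000 person-experiment pairs
    ```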

    1. Dear Klaas, of course you are right: the participants do not multiply themselves magically. So N = 120, but he does not really care about independence (which seems consistent with the other QRPs he implicitly admitted in this text).

      1. Dear etb, thanks.

        Förster & Denzler (2012) state: “For each of the 10 main studies, 60 different undergraduate students (number of females in the studies: Study 1: 39; Study 2: 30; Study 3: 29; Study 4: 26; Study 5: 38; Study 6: 32; Study 7: 30; Study 8: 30; Study 9a: 35; and Study 10a: 28) were recruited for a 1-hour experimental session including ‘diverse psychological tasks.’ In Studies 9b (31 females) and 10b (25 females), 45 undergraduates took part.”

        In my opinion, the word ‘each’ combined with the word ‘different’ means that in total 600 individuals were involved in these 10 studies. The complainant also holds the opinion that in total 600 individuals took part in these 10 experiments, and Jens Förster, as far as I am aware, has never denied this number.

        Other papers with far fewer experiments often refer to experiments from different cities / levels / backgrounds (etc.). E.g.,

        Exp 1. “Forty-one Dutch-speaking students (16 females) participated for course credit and were randomly assigned to either a local or a global perceptual scope or to a control condition.(..). Overall, participants were confronted with 48 trials stemming from eight sets of composite letters. Upon completion, participants filled in a scale allegedly pre-tested as part of a different study.”

        Exp 2. “Forty French-speaking students (34 females) took part for course credit and were randomly assigned to either a promotion or a prevention focus condition. The procedure followed that of Experiment 1, but in the ostensible first study participants now completed the regulatory focus manipulation (Friedman & Förster, 2001): Participants had 3 minutes to draw the path for a mouse through a labyrinth. In the promotion focus condition this entailed leading a hungry mouse through the labyrinth to a piece of cheese. In the prevention focus condition participants had to save the mouse from a bird of prey, leading it safely to its mouse hole. Upon completion, participants reported their empathic concern. Participants also filled in a questionnaire unrelated to this study. Order of questionnaires was counterbalanced, had no effects, and is not further discussed. Finally, they were debriefed, thanked, and given course credit.”

        Exp. 3. “Forty French-speaking students were paid for taking part in a battery of unrelated studies, which involved male participants only. They were randomly assigned to the powerful or the powerless condition. The procedure followed the previous experiments, but as ostensible first study participants received a booklet containing the power priming: a word-completion-task pre-tested for power and valence. Four participants were excluded from the analysis because they suspected that the word-completion-task was related to the empathic concern measure. Upon completion, participants turned to the ostensible second study and filled the Empathic Concern scale (IRI, Davis, 1980; α=0.74). Finally, they were debriefed, thanked, and paid 5€.”

        From http://www.sciencedirect.com/science/article/pii/S002210311000257X

        IMO conducted (partly?) in Belgium.
