Social psychologist Förster denies misconduct, calls charge “terrible misjudgment”

Jens Förster

Retraction Watch has obtained an email from Jens Förster, the social psychologist in the Netherlands who, as Dutch media reported this week, was the target of a misconduct investigation at the University of Amsterdam. The inquiry led to the call for the retraction of a paper by Förster and a colleague, Markus Denzler, over concerns of data manipulation.

Förster denies those claims and says Denzler was not involved in the heavy lifting for the study in question:

This is an English translation of my reaction to a newspaper article that appeared in the Dutch newspaper NRC about me.

Today, an article appeared in the Dutch newspaper “NRC” summarizing an investigation into my academic integrity that was opened in September 2012. The case was opened because a colleague from the methodology department at the University of Amsterdam (UvA) observed regularities in the data of three articles that are supposedly highly unlikely. In a first, preliminary evaluation, the UvA commission decided that there was no evidence of academic misconduct, but that I should send “cautionary notes” to the respective journal editors pointing to these unlikely regularities. The complainant then filed a further complaint with the national ethics board, the LOWI, because he found the evaluation too mild. Recently, the LOWI finished its investigation with a more negative evaluation and found that academic misconduct must have taken place, mainly because the patterns are so unlikely. Concrete evidence for fraud, however, has not been found. The LOWI also recommended retracting one of the papers, published in 2012. Last week, the UvA accepted this advice, but pointed out that nobody could say who manipulated the data or how this could have taken place. However, I would be responsible for the data I published because I should or could have seen the odd pattern. They will try to have the 2012 paper retracted based on the statistical analyses provided during the investigation.

The rapid publication of the results of the LOWI and UvA case came quite unexpectedly; the negative evaluation came unexpectedly, too. Note that we were all sworn to secrecy by the LOWI, so please understand that I have had to write this letter in almost no time. Because the LOWI, from my point of view, did not receive much more information than was available for the preliminary UvA evaluation, and because I never did anything even vaguely related to questionable research practices, I expected a verdict of not guilty. The current judgment is a terrible misjudgment; I do not understand it at all, and I doubt that my colleagues will understand it.

I do feel like the victim of an incredible witch hunt directed at psychologists after the Stapel affair. Three years ago, we learned that Diederik Stapel had invented data, leading to an incredible hysteria, and understandably, this hysteria was especially strong in the Netherlands. From this point on, everybody looked suspicious to everybody else.

To be as clear as possible: I never manipulated data and I never motivated my co-workers to manipulate data. My co-author of the 2012 paper, Markus Denzler, has nothing to do with the data collection or the data analysis. I had invited him to join the publication because he was involved generally in the project.

Consistently, no concrete evidence for manipulation could be found by LOWI or UvA even after a meticulous investigation lasting one and a half years. The only thing that can be held against me is the dumping of questionnaires (which, by the way, were older than 5 years and were all coded in the existing data files) because I moved to a much smaller office. I expressed my regret about this several times in front of the commissions. However, this was suggested by a colleague who knew the Dutch standards with respect to archiving. I have to mention that all this happened before we learned that Diederik Stapel had invented many of his data sets. This was a time of mutual trust, and the general norm was: “if you have the data in your computer, this is more than enough”. To explain: most of the data is collected at the computer anyway and is directly transferred to summary files that can be immediately analyzed. Note, however, that having the questionnaires would not have helped me in this case: the complainant is so confident that he is right that he would have had to argue that I faked the questionnaires. In this way, I am guilty in any event. My data files were sent to the commissions and have been re-analyzed and tested in detail. The results of the re-analysis and the investigations were:

*the data is real

*the analyses I did are correct and are correctly reported

*all information of the questionnaires is in the data files

*the results are indeed unlikely but possible and could have been obtained by actual data collection

*it is always possible (according to the reviewer) that we will understand odd patterns in psychology at a later point in time

*if data manipulation took place, something that cannot even be decided on the basis of the available data, it cannot be said who did it or how it was done.

Based on this evaluation, I expected a verdict of not guilty; I cannot understand the judgments by LOWI and UvA.

After the big scandal three years ago, many things in psychological research have changed, and this is a good thing. We had to develop new standards for archiving, and for conducting our research in the most transparent manner. At UvA we now have the strictest rules one can imagine for conducting, analyzing, and archiving, and this is a good thing.

One can consider the judgment by LOWI and UvA as very strict and ahistorical. At the very least, the harshness of the judgment makes one wonder. Moreover, the conclusion that dumping questionnaires necessarily indicates fraud is absurd. It can simply have happened because one wanted to clean up the room, or to make space, or because archiving was considered less relevant, or because there were no resources left. Nonetheless, I regret my behavior. I will, of course, keep strict control over the procedures in my lab in the future. Absolute transparency is self-evident for our discipline. My case is a cruel example of why this is important.

The second basis for the judgment is the statistical analyses by the complainant, suggesting that the results look “too good to be true”. His analyses and later writings sound as if there were no other interpretation of the data than data manipulation. These stark conclusions are inadequate. Methodology is a science; that is, methods are discussed scientifically. There are always better or worse methods, and many of them are error-prone. The methods used by the complainant are currently part of a lively scientific discussion. Moreover, methods always depend on the content of the research. During the investigation, I observed several times that the complainant had no idea whatsoever what my research was actually about. Other reviewers came to different, more qualified conclusions (see above): the results are unlikely but possible.

In short: The conclusion that I manipulated data has never been proved. It remains a conclusion based on likelihoods.

I assumed that the presumption of innocence prevails and that the burden of proof is on the accuser. This is how I understand justice. LOWI and UvA base their evaluation on analyses that could be obsolete tomorrow (and that are currently being discussed). The UvA states that it is not clear who could have manipulated the data, if this was done at all. But the UvA thinks that I am still responsible: I should have or could have seen that something was odd with the data. Note that I did not see these regularities. Two reviews sent to LOWI and UvA explicitly state that it is difficult or impossible for a non-expert in statistics to see the regularities. Moreover, neither the editor of the journal nor the independent peer reviewers noticed anything weird.

In addition, external markers speak to the validity of the phenomena that I discovered. The results were replicated in many international laboratories, meaning that the phenomena I found can be repeated and have some validity. Many scientists have built their research on the basis of my findings. My work is a counter-example to the current “replication crisis”.

UvA and LOWI suggest retracting the 2012 article. In principle, I have no problem with retracting articles, but content-wise I do not agree with this at all. I do not see any sufficient reason for doing so. The last statistical review says explicitly that the results are possible yet unlikely. Only the analyses by the complainant are full of exaggerated conclusions, and I simply cannot take them as a valid basis for retracting an article. I will leave it to the editor of the journal whether he wants to retract the paper.

In summary, I do not understand the evaluation at all. I cannot imagine that psychologists who work on theory and actual psychological phenomena will understand it. For me, the case is not closed. I will write a letter to UvA and LOWI (see article 13 of the regulations), and will remind the commissions of the information that they overlooked or that did not get their full attention. Moreover, now is the time to test the statistical analyses by the complainant – note that he stated that those would not be publishable because they would be “too provocative”. Until now I hesitated to hand his analysis over to other statisticians, because confidentiality in this case was important to me. I believe some of his arguments will be challenged. Listening to the complainant, one gets the impression that the current reviews all confirm his views. However, this is not the case. For example, in his original complaint he stated that linearity is completely unlikely, and later we learned that linearity does exist in the real world.

I found it disturbing that due to the premature publication of the UvA-report a dynamic started that is clearly disadvantageous to me. My options for arguing, in a sensible way and within an appropriate communication frame, against an evaluation that is from my perspective wrong are now drastically limited. However, I cannot accept the evaluation, and I hope that in the current tense atmosphere, which has been partly fueled by the premature publication, it will still be possible to restart the dialogue with UvA and LOWI.

Regards, Jens Förster

P.S. Finally, I would like to thank all the friends who supported me through the last one and a half years. It has been quite a difficult time for me, but see, I survived. I love you.

Please see an update on this post, with a copy of the report that led to the investigation.

57 thoughts on “Social psychologist Förster denies misconduct, calls charge “terrible misjudgment””

  1. Gregory Francis, Professor of Psychology at Purdue University and an ardent fraud hunter, did not find anything “fishy” in the disputed paper. As he wrote in an email:

    “I’ve looked over the Förster and Denzler (2012) paper. If there is some statistical flaw, it does not jump out at me. At the very least, the reported data seems consistent with the conclusions (unlike for Piff, where I argued the reported data were too good to be true). I do not know the basis of the “1 to 508 trillion” statement. Maybe it is computed relative to other measures in the literature. I’m not saying I believe the findings in the paper. I think much of the work on embodied cognition is lousy, largely because the ideas are too vague to be properly tested. Moreover, there do seem to be mistakes in the paper (for example in Figure 2, all of the degrees of freedom are listed as (2,57), even though there are sample size differences). These kinds of mistakes are not as rare as we might hope, and are easily explained as sloppy copy-paste errors.”

    1. The “one in five trillion” is probably just a small p-value, which is meant to express how likely a result at least this extreme would be under chance alone. By itself a p-value isn’t necessarily indicative.

      We have nosebleed seats to a Data Inquest. We’ll see how this shakes out. My background isn’t stats and I haven’t looked at the data so I can’t really say much.

    2. As etb noted below, my statement about the degrees of freedom is incorrect. In my brief look at the article, I misread the statement about sample sizes.

  2. Forster’s response is very long but not convincing. The case against him (to the extent that I understand it) is simple: the data in his paper have too little variation to have come from real questionnaires. His response, sadly, does not address this directly.

    I take from his letter two main defenses.

    a) The details (who, when, how) have not been proved.

    b) The results are not impossible.

    Neither defense helps, in my view. Since the original materials have not been forthcoming, no investigation will be able to provide a detailed accusation. As for point b, he had the opportunity to show that his results are within the bounds of reasonable levels of improbability. He instead is staking his defense on the distant limit of impossibility.

    1. Particularly since “mind” is given no coherent conceptual treatment that does not amount to stipulation or tautology.

  3. I was just informed by the press office of the Humboldt Foundation that the Humboldt Award will not be conferred to Förster next week, as was originally planned.

    1. This smells like a case of professional jealousy and an environment too happy to go on a witch hunt to show they are doing something about fraud in science. This is a really bad precedent if this guy turns out to be innocent.

  4. Forster writes “My data files were sent to the commissions and have been re-analyzed and tested in detail. The results of the re-analysis and the investigations were: the data is real, the analyses I did are correct and are correctly reported, all information of the questionnaires is in the data files […]”

    This seems inconsistent with the anonymized advisory report of the LOWI (the Dutch national body for integrity in science) (https://www.knaw.nl/shared/resources/thematisch/bestanden/LOWIadvies2014nr1.pdf), which reads “the raw data from the …. that formed the basis of …., are not available anymore, because of a computer crash that occurred in … 2012” (original text “de ruwe data van de …, die de grondslag vormden van …, niet meer aanwezig zijn, vanwege een in … 2012 plaats gevonden hebbende crash van de computer van ….” (page 6)). The advisory report also states “in fact there is a combination of different anomalies, namely […] the loss of original raw data, which makes control and verification not possible” (original text “feite sprake is van een combinatie van verschillende anomalieën, namelijk […] wegraken van oorspronkelijke ruwe data, waardoor controle en verificatie niet mogelijk blijkt.” (page 9)).

  5. Re G. Francis:
    – The degrees of freedom in the figures are correct; all experiments reported in this figure have N = 60 across the different groups.
    – The suspicious pattern in the results most likely concerns the consistency of means and mean differences: the means of the global, local, and control groups systematically shift up and down together across experiments. The differences between local vs. control and global vs. control show a similar consistency across experiments. This all seems very unlikely, as these are all independent groups of subjects.

  6. “I do feel like the victim of an incredible witch hunt”
    If he is not lying, this is something many of us at this site saw coming long ago. With witch hunts, the chance of ruining the careers of innocent people is very high. Hopefully, he will sue.

    1. I’d like to know whether there is actually any evidence for witch hunts, that is, are you aware of any allegations (e.g. against social psychologists) that turned out to be unfounded? If so, did such false allegations ruin any researcher’s career?

        1. I know of that happening. The problem is that for the hunted there is nothing to gain by outing the hunter. In the cases I know of, it did not ruin their careers, but it did cost them a huge amount of time and caused a lot of stress.

          1. This seems to be the price we have to pay in the name of science. I suspect that we need more hunters rather than fewer.

          1. No, that is not the price we have to pay. No more than executing innocent people is the price we have to pay to have “justice”. Any fair justice system should err in the direction of false negatives, not false positives.

    2. I worry about this, too. The question for me seems to be whether we hold suspicions concerning scientific findings to the standards of a) a criminal case or b) positivist rigor.
      If a) is the case, presumption of innocence is important, and might have to be balanced with other goods.
      If b) is the case, presumption of innocence is irrelevant; in fact, for scientific work, the burden of proof is *always* on the researcher. That’s the whole point of hypothesis testing; a suggested hypothesis is not accepted unless and until it is shown to be correct, not the other way around.

      I would like a world where science offers absolute transparency, and these questions can be discussed completely in the open, without fear of immediate retribution either way. Names of *both* “defendants” (eh, let’s say, hypothesizers) and “prosecutors” (sceptics) would have to be public, as Fritz Strack (?) said here earlier.

      It should also be clear that under standard b), the goal is to discover and test scientific knowledge – not to assert wrongdoing. For that, the rule of law standards of a criminal case would have to be adhered to (and they are clearly not being adhered to here).

      So here, on retractionwatch, let’s talk about whether these results hold water – and not sit in court on anyone or anything.

      1. “I would like a world where science offers absolute transparency”
        That is a world without competition for resources: all researchers are given a certain amount of money for their research and they have a guaranteed job. Without that, some other people in the field will decide whether you get a grant, a promotion, and whether you get to keep your job. And if they don’t like you personally, they will trash you.

  7. Across the 12 graphs in the Förster and Denzler paper, the Control and Global scores are correlated .996 (p=.0000002). If I’d obtained data like this legitimately, I’d have been tempted to change it to look less like I’d made it up!

    Also, *none* of the error bars in *any* of the graphs overlap, and in general there is a full SE or more of clear blue water between them. I’d love to see some post hoc power calculations, with an application of Ioannidis & Trikalinos’s Test for Excess Significance.
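
    For what it’s worth, here is a rough sketch of what such an excess-significance check could look like (a minimal Python sketch assuming simple two-group t-test designs; the effect sizes, group sizes, and significance counts below are placeholders, not values taken from the paper):

    ```python
    # Minimal sketch of an Ioannidis & Trikalinos-style excess-significance check.
    # Assumes independent two-group t-tests; all numbers below are placeholders.
    from scipy import stats

    def posthoc_power(d, n_per_group, alpha=0.05):
        """Power of a two-sample t-test to detect a true effect of size d."""
        df = 2 * n_per_group - 2
        ncp = d * (n_per_group / 2) ** 0.5               # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        # P(|T| > t_crit) under the noncentral t distribution
        return 1 - stats.nct.cdf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

    def excess_significance(effect_sizes, n_per_group, n_significant, alpha=0.05):
        """Compare the observed number of significant studies with the number
        expected from estimated power (average power used as a single binomial
        probability, which simplifies the original test)."""
        powers = [posthoc_power(d, n_per_group, alpha) for d in effect_sizes]
        expected = sum(powers)
        k = len(effect_sizes)
        p = stats.binomtest(n_significant, k, expected / k, alternative="greater").pvalue
        return expected, p

    # Placeholder example: 12 studies, 20 subjects per group, all 12 significant.
    expected, p = excess_significance([0.5] * 12, 20, 12)
    print(f"expected significant: {expected:.1f} of 12; excess-significance p = {p:.2e}")
    ```

    If far fewer significant results would be expected from the estimated power than were actually reported, that is the kind of excess such a test flags.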

  8. What statistical analyses were used to conclude that Forster’s results were unlikely and probably manipulated? Has that information been disclosed? The discussion should be about the quality of the empirical evidence and the validity of the quantitative methods used.

  9. I found it remarkable that university (or other) committees make ethical judgments on the basis of the outcome of studies. To be sure, academic misconduct has to be identified and sanctioned. However, the plausibility of outcomes must remain a topic of scientific discourse, be it conceptual, empirical or methodological.

    1. “I found it remarkable that university (or other) committees make ethical judgments on the basis of the outcome of studies.”

      It’s not the outcome, but the data that is the problem in this case. If you flip 10 coins you will, on average, get 5 heads. If you do this 10 times and get exactly 5 heads each time then something has gone gravely wrong. The most likely explanation is that you made up the data.

      1. But if you have 1000 people each independently conducting a 10 x 10 coin flip trial, the chance of getting at least one extremely unlikely outcome is 1000 times greater than normal. There are statistical corrections for this in biology — for example, if I measure levels of 100 biomarkers in two groups of people, a standard p value of 0.05 is generally not considered sufficient to demonstrate significance.

        I wonder if a similar consideration needs to take place when evaluating (for example) social science publications. First, there is a well known publication bias — boring or non-significant results are less likely to be published than significant and interesting results. Then, if there are 1000 studies published per year, the chance that at least one of those studies will have an unlikely but real outcome is much greater than would be calculated without including a multiple comparisons correction. Just thinking out loud.
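
        For concreteness, a quick check of the coin-flip arithmetic in this exchange (a minimal Python sketch, assuming a fair coin and independent flips):

        ```python
        # Probability of exactly 5 heads in 10 flips of a fair coin,
        # of that happening 10 times in a row, and of at least one of
        # 1000 independent researchers seeing such a run.
        from math import comb

        p_5_of_10 = comb(10, 5) / 2**10                     # ~0.246
        p_ten_in_a_row = p_5_of_10 ** 10                    # ~8e-7
        p_any_of_1000 = 1 - (1 - p_ten_in_a_row) ** 1000    # ~8e-4, about 0.08%

        print(f"P(exactly 5/10 heads)                = {p_5_of_10:.4f}")
        print(f"P(that outcome 10 times in a row)    = {p_ten_in_a_row:.2e}")
        print(f"P(at least once among 1000 attempts) = {p_any_of_1000:.2e}")
        ```

        So the multiple-comparisons adjustment matters, but the adjusted probability is still below a tenth of a percent, in line with the ~0.0007 figure mentioned further down the thread.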

        1. “But if you have 1000 people each independently conducting a 10 x 10 coin flip trial, the chance of getting at least one extremely unlikely outcome is 1000 times greater than normal.”

          But it’s still not high enough to expect an observation in the entire history of scientific research. If I can calculate right (not my strong point) the odds of flipping 5 heads out of 10, 10 times in a row is about 1e-30. If 1000 people try, the odds only improve to 1e-27.

          If you say you observed something as rare as that, and when challenged say “well, I lost my notes”, then you don’t really have any claim to an expectation of being believed.

        2. “But if you have 1000 people each independently conducting a 10 x 10 coin flip trial, the chance of getting at least one extremely unlikely outcome is 1000 times greater than normal. ”
          Well in this instance – if I do my maths right – it is still only 0.07 of 1% or 0.0007.
          Universities generally bend over backwards to find explanations to paper over anomalies, so the fact they stuck to their guns on this one suggests it must be pretty cut and dried.
          And if it isn’t, well it is certainly extremely unfortunate for the group that
          a) they had a computer crash which wiped all their data, and
          b) they don’t back up any of their data.

          My guess is it won’t be long before he is writing a book about the psychology of data falsification, a well-trodden path, I believe.

          1. If these psychologists don’t back up any of their data, they are just less prone to suffering from anxiety than others.

    2. This is indeed the most troubling aspect of this incident: There is no gun (i.e., direct proof of data manipulation), there is no smoke (i.e., no one saying: “I witnessed him doing it”), and yet Förster is being asked to retract his papers. Others who have faced similar accusations were merely asked to respond to their critics (see Daryl Bem’s case and the color experiments by Elliot, Pekrun, and others which were criticized by Francis).

      1. Just to be clear, in the case of Elliot and (perhaps) Bem, there was never an accusation of fraud. Fraud requires intent, but in Elliot’s case one could argue that the “too good to be true” status of the reported results could happen due to ignorance rather than malfeasance. As I posted at PubMed Commons, Elliot’s reply actually makes the original findings less believable, so I do think a retraction is called for, but even then a charge of fraud is not appropriate with the available information. The analysis for the Förster case suggests something different. It is difficult to imagine how the overly consistent outcomes could have been due to ignorance.

        1. I agree. I made the above comments before reading the report on Förster’s papers and realize now that this is a wholly different ballgame.

  10. Jens Förster wrote: “and because I never did anything even vaguely related to questionable research practices”.
    I tend to disagree with him. Deliberately throwing away original files / data sheets / questionnaires / lab notes / field diaries is often seen as a clear example of QRP, and has been for a very long time.

  11. This case is still very unclear, and I think it should be investigated down to the bottom. The data patterns across studies resemble each other scarily close to perfection. The fact that these patterns are so alike, and that there is not much variation across conditions, is very striking, and this is probably what the one in 508 trillion refers to. And from the report I gathered that there were some other papers with these anomalies. I looked into one of his other papers, and I must say the same perfect patterns (with very similar sd’s across conditions) occur. This is something different from the too-good-to-be-true patterns that Francis tries to find in papers (and it also has nothing to do with the plausibility of his outcomes), but rather reminds me of the same techniques that were used in some other cases in social psychology in recent years… Therefore, I think it would be good if his university, for the sake of Forster’s integrity, conducted a broader examination to shed more light on the way he did research, and to see whether similar patterns occur in multiple papers.

    1. Another issue: authorship. Forster states “Markus Denzler, has nothing to do with the data collection or the data analysis. I had invited him to join the publication because he was involved generally in the project.” Is Denzler a guest author? If Denzler has nothing to do with data analysis or collection, then what exactly did Denzler do? Does SPPS subscribe to the ICMJE definition of authorship?

      1. It’s not uncommon that some authors are only involved in the thinking and writing process, and that they trust another author for data collection and analysis.

      2. Well, perhaps the field requires some thinking, rather than just pushing a button, getting some data from a machine, and dumping the data into a paper? Perhaps Denzler contributed to the thinking part? Just guessing.

  12. A statistic such as “finding such consistent results (or more consistent results) in one out of 508 trillion” does not tell me much.
    In the Netherlands, nurse Lucia de Berk was arrested in 2001, and later convicted of murder, after 7 patients had died while she had been in the vicinity. According to a computation by the hospital, the chance that this was coincidence was only one in 7 billion (Dutch: “miljard”).
    Another figure that the hospital presented was one in 342 million. The real meaning of this number is unclear, at least to me.

    Later, the statistician Richard Gill computed the chance that a nurse with comparable shifts would be present at that many deaths: one in 48.
    The advocacy by him and several other concerned citizens was successful: Lucia de Berk was exonerated in 2010.

    So I am interested in the following statistic:
    What is the chance that an honest experimental social psychologist, doing research comparable to Förster’s, would be falsely accused if the criteria that the LOWI applied in this case were used?

    1. Just because statistics can be misused does not necessarily mean that has been the case here. (And I hypothesise that sufficiently qualified people have reviewed the statistical analyses performed in the investigation(s) under discussion.)

    2. So, on the basis that someone once made a widely-publicised mistake in calculating the odds against something happening within a geographical area (i.e., the Netherlands) that Lucia de Berk, Jens Förster, and (I presume, from your name) you share, you conclude that any probability calculations regarding statistical phenomena that are claimed to have occurred within that geographical area, even if the method of calculation is available for public inspection, are likely to be false?

      1. No; I am not defending Förster, and geography has nothing to do with it. I am convinced Förster is a fraud. I just think statistics such as “finding such consistent results is one out of 508 trillion” don’t say enough. The question is how many false positives you would get this way. Not so many, I guess, but I think that number deserves more focus.

  13. Förster’s reactions prove that he (and his “reviewer”) doesn’t understand a thing about statistics. Three examples:

    1. “Consistently, no concrete evidence for manipulation could be found by LOWI or UvA even after a meticulous investigation lasting one and a half years.”

    The straight-line graphs are concrete evidence for manipulation. The fact that he does not have the slightest memory of which assistants aided him in collecting the data is concrete evidence for manipulation. No records whatsoever about when and where exactly the experiments were done. Not a single “drop-out” in all those experiments.

    What do you call concrete evidence? Now he has “lost” all the files, and anyway there never were any decent log books … so this research is so sloppy it should be withdrawn anyway.

    2. “The results are indeed unlikely but possible and could have been obtained by actual data collection”

    The results are *extraordinarily* unlikely (but indeed, they are only just possible …)

    3. “It is always possible (according to the reviewer) that we will understand odd patterns in psychology at a later point in time”

    No, this is impossible. The patterns are statistically almost impossible, even if psychologically they would be possible.

    Yet again this proves a point I keep making http://www.math.leidenuniv.nl/~gill/Heiser_symposium_RDG.pdf : we need open scientific debate, not disciplinary investigations conducted in secret by tribunals which are not competent in the subject matter and have an organisational conflict of interest.

    First we should openly and in scientific fashion debate the scientific integrity of the research results.

    Only then, maybe, might some university committee ask itself whether or not they hired the right person and whether or not their strategy of hiring new people is wise. They should certainly ask themselves what the emphasis on quantity above quality is doing to science.

    PS The results might indeed have been obtained by legitimate data collection. Do the experiment (one of the 25 or so studies reported in one paper) today. If you don’t like the results, throw them away, repeat. Recruit another 40 psychology students… I think that for each of the “sub-experiments” in these papers, it would take about a year to hit the jackpot. So altogether 25 years to reproduce all these results together in a proper experimental set-up. No falsification of data …

  14. PS there is *no* reason from psychology why the population mean score of control subjects should be *precisely* half way between population mean scores of “high” treated subjects and “low” treated subjects. In study after study. There is every reason from statistics why, even if this were true in the population, it won’t be anywhere close to true in small samples. And if it is not even true in the population, it is even less likely to be close to true in small samples.
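
    A minimal simulation sketch of that point (illustrative only, with an assumed group size of 20 and unit SD): even when the population means are exactly linear, the sample control mean almost never lands precisely halfway between the sample means of the other two groups.

    ```python
    # Simulate three independent groups whose TRUE means are exactly linear
    # (low = -1, control = 0, high = +1) and see how far the sample control
    # mean falls from the midpoint of the sample low and high means.
    # Group size and SD are assumptions for illustration, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    n, sd, n_sims = 20, 1.0, 10_000
    deviations = []
    for _ in range(n_sims):
        low = rng.normal(-1.0, sd, n).mean()
        ctrl = rng.normal(0.0, sd, n).mean()
        high = rng.normal(1.0, sd, n).mean()
        deviations.append(abs(ctrl - (low + high) / 2))

    deviations = np.array(deviations)
    print(f"median deviation from perfect linearity: {np.median(deviations):.3f}")
    print(f"fraction within 0.01 of the exact midpoint: {np.mean(deviations < 0.01):.4f}")
    ```

    Under these assumptions, the typical deviation from the midpoint is a sizeable fraction of a standard deviation, and landing within 0.01 of it happens only a few percent of the time.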

    1. I also read somewhere else that he is doing this. I just cannot find it. Can you please indicate where he is naming his accusers?

        1. Well: basically, the whole methodology department of the psychology faculty at UvA stands behind these analyses, and which particular person anonymously filed the report which led to two investigations (UvA, LOWI) is irrelevant, since the initial findings have been more than confirmed and much more evidence has come to light. An internal whistleblower who chooses the path of submitting an “accusation” to some organ of his university has to have anonymity guaranteed (unless of course the accusations turned out to be false). The alternative path is to publish (not anonymously) findings in a scientific forum. Each path has different advantages and disadvantages. A very difficult choice. It kind of depends on your level of job security and level of authority in your organization.

          1. PS By the word “false” in the remark “… has to have anonymity guaranteed (unless of course the accusations turned out to be false)”, I meant a false accusation knowingly made, i.e., with intention to deceive.

  15. Three points that we can learn from this:

    1) Peer review needs to be taken more seriously. What worries me most is that the editor and peer reviewers did not notice the unusual consistency in the data pattern. It would be good if the editor and reviewer comments were considered as well.

    2) Journals and universities should require a minimum time that data must be kept on file. One possibility would be that universities keep the data somewhere, so that it cannot be lost.

    3) Journals should put more value on reporting pure replication studies, which can show whether a phenomenon really exists or whether it was an anomalous finding (which occasionally will happen).

    1. The unusual pattern is not obvious from the published papers. They contain bar charts, three bars, in the order: low, high, control. Three fat bars, with a lot of space between each. You don’t see that the height of the control bar on the right is precisely halfway between the heights of the other two.

      You do see rather surprisingly small error bars.

      There is an excellent illustration in this article:

      http://www.wetenschap24.nl/nieuws/artikelen/2014/April/Opvolger-van–Diederik-Stapel-is-bekend.html

    2. “Authors are expected to retain raw data for a minimum of five years after publication of the research. Other information related to the research (e.g., instructions, treatment manuals, software, details of procedures, code for mathematical models reported in journal articles) should be kept for the same period; such information is necessary if others are to attempt replication and should be provided to qualified researchers on request (APA Ethics Code Standard 6.01, Documentation of Professional and Scientific Work and Maintenance of Records).”

      American Psychological Association (APA). These rules are also followed by the journal Social Psychological and Personality Science.

  16. Jens Förster wrote: “I found it disturbing that due to the premature publication of the UvA-report a dynamic started that is clearly disadvantageous to me.”

    Article 12 of the regulations of the UvA ( = Klachtenregeling Wetenschappelijke Integriteit Universiteit van Amsterdam, besluitnummer: 2013cb0471, d.d. 10 december 2013, see http://www.uva.nl/over-de-uva/uva-profiel/regelingen-en-reglementen/onderzoek/onderzoek.html ) does not mention any delay between the date of publication of the findings of the UvA report on the site of the VSNU and the date when the board of the UvA makes a decision (= 22 April 2014).

    Frank van Kolfschooten published his article in the NRC on 29 April 2014 (http://www.nrc.nl/nieuws/2014/04/29/uva-hoogleraar-manipuleerde-data-van-onderzoek/). Frank van Kolfschooten states in this article that the final decision of the UvA was published on the site of the VSNU on 29 April 2014 (= the date of his publication in NRC).

    Can anyone explain what Jens Förster means by a “premature publication of the UvA-report”?

    Thanks in advance for a response.

  17. “In summary, I do not understand the evaluation at all. I cannot imagine that psychologists who work on theory and actual psychological phenomena will understand it.”

    He hits the nail on the head there.

  18. Jens Förster wrote: “(…) because I never did anything even vaguely related to questionable research practices, I expected a verdict of not guilty. The current judgment is a terrible misjudgment; I do not understand it at all, and I doubt that my colleagues will understand it.”

    Both Richard Gill (see above) and Ulf-Dietrich Reips (see http://retractionwatch.com/2014/04/29/new-dutch-psychology-scandal-inquiry-cites-data-manipulation-calls-for-retraction/) state that APA rules are strict with regard to how long raw data need to be retained, and that these raw data (as well as a lot of other items) need to be provided to other researchers when they want to check results published in any of the APA journals.

    Any scientist working at a Dutch university must work according to the rules of “The Netherlands Code of Conduct for Scientific Practice”. This code, too (version of December 2004), states that raw data need to be stored for at least 5 years after the research results have been published.

    I fail to understand how Jens Förster can write that he has never done anything which might be even vaguely related to ‘sloppy science’ / ‘QRP’, and I also fail to understand how Jens Förster can accuse others (LOWI and UvA) of a “terrible misjudgment”.

    I would like to suggest that Jens Förster made a “terrible misjudgment” when he deliberately decided to throw away all the raw data (including details on the date, month and year of the records). I also would like to suggest that Jens Förster made a “terrible misjudgment” when he submitted papers to APA journals while he was aware that he was violating APA rules with regard to the accessibility of the raw data and other information related to the research.

    I would like to invite Jens Förster to respond to this posting.
