An illuminating profile of Diederik Stapel in the New York Times Magazine

The New York Times Magazine has a great profile — featuring an in-depth interview — of Diederik Stapel this weekend. Check it out. (Or, if you’re visiting us because the magazine was kind enough to include a link to Retraction Watch, welcome! And find all of our Stapel coverage here.)

One of a number of highlights in the piece by Yudhijit Bhattacharjee:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

29 thoughts on “An illuminating profile of Diederik Stapel in the New York Times Magazine”

  1. Very nice article by Bhattacharjee. There is one inaccuracy. Bhattacharjee describes Stapel as having had a sort of inverse ‘Eureka’ moment at Groningen (between 2000 and 2004) in which he suddenly realized that he could just fake the data. Bhattacharjee cites Stapel’s advisor at Amsterdam (1990s) for the integrity of Stapel’s work prior to Groningen. This leaves the reader with the impression that Stapel’s subsequent conduct was sudden, idiosyncratic, and aberrant.

    In fact, Stapel admitted to the investigating committee that his work at Amsterdam was “grey.” He had, perhaps, engaged only in the kind of data cherry-picking and incomplete reporting that has emerged in other recent social psychology misconduct cases. The Drenth Committee, which reviewed Stapel’s Amsterdam work in detail, was less charitable. It found evidence of fraud as far back as 1996, in seven publications, several of which formed part of Stapel’s dissertation.

    From a journalistic perspective, the difference is small potatoes. I mean no criticism of Mr. Bhattacharjee’s article. However, the contrast in perspectives is significant for those interested in the detection and prevention of misconduct or in analyzing the kinds of research cultures in which patterns of misconduct may develop. As the article elsewhere points out, Stapel’s fraud may fall in the tail of a distribution of research ethics, but is still a part of the distribution. It is not a sudden anomaly traceable to a unique event which one may ignore for the sake of a more aesthetic analytical result. The latter view isn’t an analysis, it’s the problem.

    1. “Stapel’s fraud … is not a sudden anomaly traceable to a unique event which one may ignore for the sake of a more aesthetic analytical result. [Such a] view isn’t an analysis, it’s the problem.”

      Well said!

      Déjà vu whitewash again!

      Some research cultures (and personalities) seem inherently more inclined to self-deceiving whitewashing; such inclinations start early in life and seem difficult to shake.

      1. “Some research cultures (and personalities) seem inherently more inclined to self-deceiving whitewashing; such inclinations start early in life and seem difficult to shake.”

        References to empirical work to back up this claim would be appreciated.

        1. “References to empirical work to back up this claim would be appreciated.”

          A lot of folks my age don’t want to say anything on the internet because we’re afraid of getting fired. I strongly suspect this will change soon, after I have a little chat with some USA professors in physics and medicine.

          I do not think this is deliberate corruption (yes, I know there is some, but I don’t think it’s as pervasive as we younger scientists sometimes feel), esp. in the physical sciences. It’s more about how the currently structured funding mechanism tries to push scientists in particular directions, esp. in the USA.

          Here’s my comment that got put up today on the NYT’s website:
          “My undergraduate thesis adviser at Reed College, Prof. McClard, once said during his always amusing analytical chemistry lecture:

          “When you become wedded to your hypothesis rather than to your data, that’s a little bit scary.”

          Sometimes I think social and life sciences would benefit if a solid analytical chemistry course were a requirement for the majors.

          -Dr. Allison L. Stelling (PhD Stony Brook University Dept. of Chemistry, 2008)

          https://twitter.com/DrStelling

          1. Funny — that’s pretty close to what my postdoc advisor, David McClelland, once remarked:

            “…if my experience or the facts do not fit the theory, I tend to rethink the theory rather than try to explain away the fact as due to bias, chance, or any of the favorite devices so often employed by upholders of tradition. My primary loyalty is to the phenomenon, to the empirical fact — and if it messes up somebody’s theory, so much the worse for the theory.” (p. 28, in McClelland, 1984, Motives, personality, and society. Selected papers. New York: Praeger).

            This always sounded like excellent advice to me.

            His advice also stands in marked contrast to the remarks by Hans Eysenck, another one of those towering figures in psychology, but one with quite a bit of troubling intellectual baggage:

            “The first findings in trying to substantiate a theoretical discovery are never clear-cut … But enemies would seize upon these anomalies to destroy his theory. Obviously the way to overcome this problem is simply to make sure the data fit the theory!” (p. 126, in Eysenck, 1995, Burt as hero and anti-hero: A Greek tragedy. In N. J. Mackintosh (Ed.), Burt: Fraud or framed? pp. 111-129, Oxford, GB: Oxford University Press)

            A scary comment, albeit one that seems to characterize more scientists’ thinking than one cares for.

          2. @ Oliver; I think it’s one of those sayings common in higher education in the sciences, esp. for a certain age group– Ron was pretty old-school (he did a postdoc at UCSF back in the day doing- if I recall correctly- pretty hard core bioanalytical chemistry) 🙂

          3. Richard Feynman’s lecture on the scientific method (at Cornell, 1964, I think): http://www.youtube.com/watch?v=EYPapE-3FRw — a chalkboard and a superbly incisive mind on the “scientific method”. No one has put it better.

          4. @Oliver; the advice to follow the data always sounds like the proper thing to do, but it actually only makes sense if the data are of high quality. For a lot of measurements in psychology the conclusion is uncertain (depending on the statistics of the sample), so to follow this advice in general means that scientists end up with models that fit noise. In other contexts such an approach is called HARKing (Hypothesizing After the Results are Known).

            A theory that seems to perfectly match the data might be the best theory you can create, but that does not mean it is a good theory. The reality is that if empirical measurements are noisy, then they cannot support _any_ theory very well. Scientists just have to wait until better data come along.
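
            The point about fitting noise can be illustrated with a small simulation (a hypothetical sketch; the function names and parameter values below are invented for illustration, not drawn from any study discussed here). Generate pure-noise “studies” that each measure many outcomes, then pick the “hypothesis” only after scanning all of them, and the false-positive rate climbs far above the nominal 5% of a single pre-specified test:

```python
import random
import statistics

random.seed(1)

def t_like(sample_a, sample_b):
    """Crude two-sample t statistic (equal group sizes, pooled variance)."""
    n = len(sample_a)
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = ((var_a + var_b) / n) ** 0.5  # standard error of the difference
    return (mean_a - mean_b) / se

def harked_false_positive_rate(n_studies=2000, n=20, n_outcomes=10, threshold=2.1):
    """Fraction of pure-noise studies that yield at least one 'significant'
    outcome when the hypothesis is chosen only after scanning all outcomes.
    The threshold approximates the two-tailed p < .05 cutoff for df = 38."""
    hits = 0
    for _ in range(n_studies):
        # Every outcome is pure noise: no true group difference anywhere.
        found = any(
            abs(t_like([random.gauss(0, 1) for _ in range(n)],
                       [random.gauss(0, 1) for _ in range(n)])) > threshold
            for _ in range(n_outcomes)
        )
        hits += found
    return hits / n_studies

print("Rate with one pre-specified outcome:",
      harked_false_positive_rate(n_outcomes=1))
print("Rate when the hypothesis is picked post hoc from 10 outcomes:",
      harked_false_positive_rate(n_outcomes=10))
```

            With a single pre-specified outcome, the false-positive rate sits near the nominal 5%; when the “finding” is whichever of ten noisy outcomes happens to clear the bar, it is typically several times higher. That gap is the cost of hypothesizing after the results are known.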

          5. @Greg: I completely agree, and herein lies the uncertainty principle of good science: the data can be noisy and therefore you may be HARKing, so stick to the theory. But the theory may have it wrong and not fit the data, so stick to the data. Urgh!

            In my experience, the only way out of this rock-and-a-hard-place situation is to make sure that you take the utmost care collecting valid and noise-free data. This in turn requires that you fully understand the measurement instrument and process. In my experience as author, editor, and reviewer, many psychologists tend to be rather cavalier about these issues, with some notable exceptions (e.g., Jan De Houwer’s work). As a rule of thumb, scientists who fully disclose their data — in the sense that they fully report all test quality stats and actually show scatterplots and dot plots of their data — tend to do that more than scientists who leave out these “messy details” and only provide sanitized regression lines or bar graphs (preferably without error bars). Obviously, there are limits to this cards-on-the-table approach (think fMRI), but I think it should be followed as much as possible. Because putting your cards on the table FORCES you to think thoroughly and critically about your measures.

            And once you do that, McClelland’s advice applies.

            (BTW, he was on the faculty at BU before he passed away).

            Best,
            Oliver

  2. Let’s tell it as it is:

    1. Academic science has become a business.

    2. We all know that: “There are scarce resources, you need grants, you need money, there is competition” and also that: “Normal people go to the edge to get that money”.

    3. Now, couple this with the investigating committee’s admission that: “most troubling about the research culture are the plentiful opportunities and incentives for fraud”.

    4. As I have mentioned on RW (for example, September 9, 2012 at 5:28 am; September 18, 2012 at 10:48 am), academics are no different from taxi drivers, plumbers, or waiters, except that they are (maybe) more intelligent. The fraud can therefore be more sophisticated, and the damage done by the bad ones is much greater than the damage a taxi driver, plumber, or waiter might cause. I do not understand WHY, while we all condemn taxi drivers, plumbers, or waiters when they deceive us, being deceived by academics is considered to be OK?!?

    5. The constitution of virtually every country around the world states specifically that
    all citizens are EQUAL before the law, so let’s start treating academics equally for the fraud they commit.

    6. IT’S TIME FOR A CHANGE OF THE SYSTEM!!!
    Not a cosmetic change but a complete overhaul, one that also includes editors, publishers, and institutions.

    1. 1. Retract articles
      2. Rescind Stapel’s PhD
      3. Stapel should pay back government funds and any travel funding he received from learned societies.
      4. If he has received any honours, these should be taken back too.
      5. Consider carefully the PhDs of his students – is there any evidence of critical thought in these, or just happy regurgitation? If the latter, some form of action is required, such as busting the PhD down to a Masters. Although the students are partly victims, they are willing ones, because they were happy to take clean results and a PhD with papers and move on to a job. In contrast, their peers wrestled with real-world data, would not have obtained such good papers (perhaps none, in some instances), and may not have landed such good jobs.

      1. Well, all of that has been done. Stapel no longer has a PhD, there is a constant stream of (co-authored) papers being retracted (see this website), he has been stripped of all his honors, he has been fired, and the Dutch authorities are investigating whether he engaged in criminal behavior and misused government funding. The work of his PhD students has been examined, and the implications were quite serious: some people had to restart from scratch.
        The reporting in many Dutch newspapers was less favorable to the guy than the article in the NYT. The guy was a bully to any PhD student who was critical of his work/data/results. He simply abused his power as a chair and head of a school. In the end, it was the critical assessment of some of his PhD students that brought him down; initially, his peers played little or no role in his fall from grace.

        1. John Hagedoorn wrote: “The guy was a bully to any PhD student who was critical of his work/data/results. He simply abused his power as a chair and head of a school.”

          Well, students certainly have a role, and thank goodness those students have stood their ground.

          But, how sad is it when “initially his peers played little or no role in his fall from grace” is the reality of some Universities? Perhaps sites like this will encourage more PhD students to show similar courage.

          1. “Well, students certainly have a role, and thank goodness those students have stood their ground.”

            I would say students are the LAST ones in a long line of people, positions, and related responsibilities to have a role in this sort of thing. Students are the most vulnerable, the least knowledgeable, and the least responsible for any of this. Stapel’s colleagues, but most importantly the faculties and ultimately the institutions, are the ones who have a role and a responsibility in preventing this sort of thing. Aren’t they supposed to provide a scientific and safe work environment?

  3. “A few weeks later, he called Vingerhoets to his office and showed him the results, scribbled on a sheet of paper. Vingerhoets was delighted to see a significant difference between the two conditions, indicating that children exposed to a teary-eyed picture were much more willing to share candy. It was sure to result in a high-profile publication. “I said, ‘This is so fantastic, so incredible,’ ” Vingerhoets told me.”

    If the entire field of experimental psychology deals with banalities, I do not see why its disciples should not be allowed, encouraged even, to commit fraud. Such stories are the fodder of morning TV shows and of magazines displayed near checkouts in grocery stores. This is entertainment, so there is no need for it to be true. Stapel understood that the fraud he was committing was not dangerous, because the results of his studies were inconsequential anyway. The article says 10 PhD theses were tainted with bogus data. Still, the degrees earned in this fashion have not been revoked. So even the universities involved do not believe such research is worth much. Merely typing a thesis, irrespective of its contents, suffices.

    1. I agree. Stapel mainly improved the efficiency of this kind of nonsense science a bit by making up the data. He has been a fraud, but at least he is now talking about it, and this may help science worldwide.

      Many people are still angry with Stapel, but they would better focus their anger on still ongoing nonsense science and politically correct fraud. For instance, the renowned journal Psychological Science has an absolutely insane and harmful paper by Stephan Lewandowsky in press. Why are the good psychologists not protesting? And it is getting worse. I quote Jo Nova today:

      — Over Easter, psychologist Stephan Lewandowsky moved from Perth to Bristol (lucky UK). He’s the psychologist who is expert in an imaginary group of humans called “Climate deniers”. Neither he, nor anyone else has ever met one but he discovered their imaginary motivations by surveying the confused groups who hate them. As you would, right?
      None of the so-called researchers can explain what scientific observations a climate denier, denies. It’s an abuse of English, profoundly unscientific, but has some success in shutting down public debate, if that’s what you want.
      Can humans change the weather and stop the storms? If you know we can, Lewandowsky calls that “science”. If you wonder “how much”, you are a denier.
      The Royal Society, possibly reaching a tipping point in its rush to abject scientific decay, has immediately awarded him the Royal Society Wolfson Research Merit Award.

      More on http://joannenova.com.au/2013/04/royal-society-calls-lewandowsky-outstanding-gives-him-money-loses-more-scientific-credibility/

        1. “Race and affect?”

          If they can’t get basic grammar right, how are we supposed to be impressed with their PhDs?

          1. Affect is a perfectly valid piece of psychological jargon (as Law & Order: CI once taught me, and Wikipedia [1] confirms). It does not represent a confusion of the verbs to effect and to affect.

            [1]http://en.wikipedia.org/wiki/Affect_%28psychology%29

    2. I don’t see why it would be so difficult to design an experiment that showed pictures of teary children to other children and then induced them to share candy. But what sort of field grants this “high-profile” publication status, even if the data were high quality? If anything, Stapel is to be congratulated for refusing to waste the time of children, teachers, and parents on such trivia.
      We need more Stapels in the field of social psychology, not less.
      http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0044111
      This study (curiously from Groningen) concludes beyond a shadow of doubt (in my view) that Dutch girls would rather drink a non-alcoholic cocktail with a plastic insect in it, than say aloud: “It was so horny to have him (the dog) inside me”
      Valuable information doubtless, but need we lose any sleep if it turned out the researchers had faked the entire dataset? When it comes to social psychology I say fake away.

      I am tempted to start an online petition demanding the immediate reinstatement of Diederik Stapel.

      1. I’m tempted to go along with the ridiculing because it is a cheap way to feel better about oneself, but the best part of me realizes that it is always easy to ridicule research in fields we really know nothing about. Psychology is special in this respect, because most people think that experimental psychologists have nothing to teach them, for the very simple reason that people live inside their own minds most of the time. Therefore, everybody must be an expert, right? Yet the same argument is not made for physics or chemistry, even though we are all massive physics and chemistry users! There are plenty of phenomena that outsiders would consider ridiculous and a waste of time to study.
        However, there are many examples in history of “trivial” phenomena leading to major discoveries in the hands of good and persistent observers.
        The problem here is not the kind of phenomenon under study; Stapel’s subject of study is perfectly legitimate. The problem is the way he was studying such phenomena (or not studying them, since he was just making up data).

        1. “I’m tempted to go along with the ridiculing because it is a cheap way to feel better about oneself”
          Yes, well, since whistle-blowing on scientific misconduct left me spending a lot of time unemployed and retraining, alas, cheap ways to feel better about myself are all that are within my financial means.
          It is indeed possible that hidden gems lurk in social psychology; it’s just that, for whatever reason, the field always seems to pony up rubbish for the consumption of common plods like myself. Equally, it may be possible that academics, like all other humans, are inclined to form cosy little, fairly well remunerated clubs on public money – to which ridicule might be a proper response.
          As it is, we know that none of his collaborators suspected a thing, and many of his PhD students weren’t allowed to do experiments but were forced by Stapel to use the data he provided them – presumably they spent the rest of the time playing solitaire on the computer.

          Social psychologists might like to investigate the question of situational ethics: what percentage of PhD students would falsify results if given a project that can’t produce results, along with subtle encouragement from above? They might find the percentage quite high. Not quite the Milgram experiment, but probably more reliable.

          1. Anyone who says that psychology or social psychology are useless fields is completely off base. Phenomena studied by these disciplines are at the core of our human interactions and of society by definition. You are not going to study these phenomena with the concepts and tools of chemistry, the field requires different concepts and tools! The field of study is legitimate, it’s something that we need to understand. The tools are what they are. If anyone has better tools, be they physicists or mathematicians, they are welcome to suggest them to social psychologists so that these phenomena can be studied and modeled better. In general, there are no trivial phenomena, there are only bad methods.

  4. To me Stapel, the “scientist”, comes across as an insufferable narcissist. And now he’s got his personal New York Times article. I bet it will go up on his study wall.

  5. The con goes on.
    Why is the New York Times publicising fraudster Stapel’s book?
    “if you look at the NYT piece as a piece of marketing material for a book written by a discredited author it all makes sense. In fact the NYT article might just as well have been commissioned by the publishers of the book”

  6. Congrats to Retraction Watch for the citation!

    Stapel comes off as unbelievably self-centered. What he and his family seem to be saying is that, underneath the façade of fraud and self-interest, there beats the heart of a noble knight. Uh-huh.

    The most disturbing thing in the article is all the quotes from anonymous and named people who suspected fraud, but were afraid to say anything lest it ruin their careers. There seem to be a lot of incentives to engage in fraud, and a lot of incentives not to report it. With the difficulty in replicating experiments (more pronounced in social sciences but still present even in the hardest hard sciences, e.g. physics), it really seems like the perfect recipe for lots and lots of fraud.

    The second most disturbing thing is the role of political correctness – Stapel’s results confirmed the world-view of his colleagues, which made them less likely to question him.
