Ever since Cornell food researcher Brian Wansink wrote a blog post one year ago praising a graduate student’s productivity, things have gone downhill for him. Although he initially lauded the student for submitting five papers within six months of arriving at his lab, the four papers about pizza have all since been modified in some way after the research community began scrutinizing his work; two have been outright retracted. On Friday, Frontiers in Psychology retracted the fifth paper, about the shopping behavior of military veterans, with a notice stating that a journal probe found “no empirical support for the conclusions of the article.” The retraction, covered by BuzzFeed, was likely no surprise to Nick Brown, a PhD student at the University of Groningen, who had expressed concerns about the paper in March.
Retraction Watch: You note that this newly retracted article was co-authored by the graduate student Wansink initially blogged about, but wasn’t as heavily scrutinized as the four papers about pizza consumption she also co-authored. Why do you think this paper wasn’t as closely examined?
Nick Brown: The four “pizza papers” made a natural set for analysis, partly because the contradictions between them were part of the story. After analyzing those, we (Tim van der Zee, Jordan Anaya, and I, with help from James Heathers) started looking at other work from the same lab, some of which was more prominent (e.g., the articles that influenced the school lunch program). So this article got left behind a bit; plus, as I said in the blog post, at first glance it didn’t look to be as bad as the others. I don’t know if I would have looked more closely at this article if a journalist [Tom Bartlett, at the Chronicle of Higher Education] hadn’t asked me about it.
RW: Your blog details some of the potential issues you identified with the paper. Can you summarize them here?
NB: The main problem is that the description of the sample is absurd. First, there is the claim that only 80% of people who saw heavy, repeated combat during WW2 were male. Even ignoring for a moment the fact that exactly zero American women were assigned to combat roles, and that only a very few enlisted women in total were exposed to any form of attack (say, enemy air raids), the other articles from the same survey report around 99% of the participants as being male. Second, and more subtly, the distribution of the ages is basically impossible; almost everybody in the sample must have been recruited in 1945 at the age of 18 in order for the numbers to be possible. When the first couple of items in the table of descriptives are obviously not right, there isn’t a lot of point in reading the rest of the article, because any of the other numbers could be equally wrong. There were other issues, such as text recycled from a previous article and a large number of impossible test statistics, but by the time I got to blogging about this article, those sorts of problems had started to wash over me because they occur so often in the work from this lab.
RW: The latest notice states that, after an investigation, the journal found “no empirical support” for the article’s conclusions. What is your reaction to that language?
NB: That statement does seem remarkably strong. I would very much like to see the dataset, because in principle it is also the basis of several other articles drawn from the same survey. For the moment the journal has declined to share it, although apparently Dr. Wansink plans to share it at some point.
RW: You’ve been highly critical of Wansink’s work ever since he posted his initial blog. What’s it been like to watch so many editorial notices appear on his work?
NB: It’s nice to have one’s (hundreds of hours of) work vindicated, but also a little frustrating, because several of the eight notices issued so far stopped short of full retraction when arguably they should not have. For example, in one case the authors stated that their entire description of the method had been incorrect, and the journal just issued a note. The change of method throws the entire theoretical basis of the study into doubt, but the results have still been allowed to stand.
RW: As a research community, what can we do to avoid similar problems in the future?
NB: I don’t know if it’s possible to avoid bad science getting published, without killing the whole scientific enterprise; it’s a variant of the well-known tradeoff between Type I and Type II error, and I don’t think anyone in science is asking for more formal bureaucracy right now. I would rather see the development of an academic publishing system that accepts that unfortunate things of various kinds are going to happen, and is more robust than at present when those situations do arise. Right now we have all the guards on the outside of the castle in the form of peer review; when that fails, and something bad slips past, often nobody has much of a clue what to do. We have to accept that science is a human enterprise, with all that that entails, rather than pretending that everything is wonderful and then having an attack of the vapours any time there’s a problem.