“The correct values are impossible to establish:” Embattled nutrition researcher adds long fix to 2005 paper

A Cornell food researcher who has pledged to re-analyze his papers following heavy criticism of his work has issued a major correction to a 2005 paper.

The correction tweaks two tables, a figure, and the description of the methodology — and notes in two instances the correct findings are unknown, since the original data are unavailable. Andrew Gelman, a statistician at Columbia University and critic of Brian Wansink’s work, has dubbed the notice the “best correction ever.”

The paper, about whether changing the name of food influences its taste, was not among the batch of papers initially flagged by critics last year. Since then, researchers have raised additional questions about Wansink’s work; one of his papers was retracted in April. That same month, an internal review by Cornell University concluded that Wansink made numerous mistakes, but did not commit misconduct.

When we contacted Wansink about the correction, we received this statement from the Cornell Food and Brand Lab, which Wansink directs:

The authors were asked by the journal editor to review this paper and answer some questions related to Tables 1 and 2 in the paper.

After review by both the author team and the editorial team, it was concluded the main conclusions hold.  In addition, similar findings were replicated two weeks ago in JAMA Internal Medicine by Stanford researchers.

Armand V. Cardello, an editor of Food Quality and Preference, told us:

…the Editors of Food Quality and Preference had two Associate Editors in statistics conduct a thorough analysis of the issues raised with the paper and the authors’ reply to these issues. It was our determination that, while there were several errors in the reporting of results and several missing details of experimental design, the errors did not alter the basic conclusions from the study. For that reason, the Editors opted for a lengthy corrigendum, rather than a full retraction of the paper.

Gelman told us:

On its own, the note is innocuous.  In the context of Wansink’s long record of errors, followed by his long record of not admitting problems with his work, it looks pretty bad.  There are lots and lots of serious questions regarding Wansink’s data; it’s not just this paper.  It’s a long and consistent pattern.

Gelman added:

It’s not at all clear why anyone is expected to believe the conclusions of these papers, given that the scientific conclusions do not follow from the data summaries, the data summaries don’t follow from the raw data, and in this case there are no raw data at all!  It’s approaching the platonic ideal of an empty publication, in the sense of a full dissociation between data and conclusions.  See, for example, this comment from Jordan Anaya, and this follow-up from Ben Prytherch.

Here’s the full correction notice:

Several errors and omissions occurred in the reporting of research and data in our paper: “How Descriptive Food Names Bias Sensory Perceptions in Restaurants,” Food Quality and Preference (2005), 16: 393–400, http://www.sciencedirect.com/science/article/pii/S0950329304000941, the data for which were collected around 1999.

- The values reported in Table 1 and in the first three rows of Table 2 are adjusted mean values based on the respective ANOVA models.
- In the results section, the degrees of freedom cited (df = 133) are the maximum degrees of freedom if all diners had answered a particular question. However, some diners skipped certain questions, so the true degrees of freedom may be smaller than 133.
- The p value threshold reported in the footnote of Table 1 (p < 0.01) is incorrect. It should have read p < 0.05.
- The percentages reported for gender and education in Table 2 may be incorrect due to missing values. Unfortunately, the correct values are impossible to establish, since the raw data could not be retrieved.
- The mean values of 4.47 in Table 1 (“After finishing this menu item, I felt comfortably full and satisfied”) and the reported Chi-square value of 1.71 for education in Table 2 are both incorrect. Due to the original data not being available, the true values are unknown.
- With regard to the design of the study, each of the six food items was available on the cafeteria line four times instead of the six times reported in the text (twice with regular names, twice with descriptive names, and twice not on the menu).
- Finally, Figure 1 is based only on positive (favorable) and negative (unfavorable) comments about the food. Comments unrelated to the food and neutral comments that were neither favorable nor unfavorable (“I ate the fish”) were not included in the analyses.

The authors regret the errors and lack of details regarding the study design.

“How descriptive food names bias sensory perceptions in restaurants” has been cited 81 times since it was published in 2005, according to Clarivate Analytics’ Web of Science.

After critics raised allegations about an initial batch of four papers, other researchers have raised questions about additional papers, noting that some appear to contain duplicated material. Wansink has said he contacted the six journals that published that work and was told one paper would be retracted; a retraction appeared in April, citing “major overlap” with one of his previous papers.


8 thoughts on ““The correct values are impossible to establish:” Embattled nutrition researcher adds long fix to 2005 paper”

  1. I fail to understand how “These numbers are wrong and we cannot establish what the right numbers were” is compatible with “The conclusions still hold.” The paper should have been retracted, as it is acknowledged to contain wrong numbers and can’t be fixed.

      1. Hold your horses — Wansink’s not a psychologist, but an economist by training!

  2. Am I alone in becoming increasingly weary of the “conclusions remain the same” or “it’s been replicated so we’re OK” responses?

    While science is indeed a search for the truth, the path to arrive at the truth matters. If you happen to discover something via improper mechanisms, being right in the end or being vindicated by others does not excuse your initial impropriety.

    This is a problem because our academic system is a zero-sum game. If you get 90% of the way there, and fudge the numbers to make it to 100%, you may be right in the end, but along the way you got the points (the publication, the promotion, the grant) and someone else lost out. Simply restating that you were right all along doesn’t exactly help the person(s) who lost out. Let’s call it what it is – dumb luck.

    1. Exactly. If your successful grant application had ANY fudged data, even if you yourself didn’t fudge the data, then the money should be revoked. Because the success of your tainted application means that someone else’s application, which had NO fudged data, was not successful. Why doesn’t the NIH get that?

    2. Well, you’re obviously assuming misconduct as opposed to even considering the possibility of honest error (of course this is RW….)

      Many researchers would struggle to recover data from an article published 12 years ago (and the experiment was likely conducted over 13 years ago). This point was actually brought up on Andrew Gelman’s blog. However, little consideration is given to these facts here.

      All this brings up a larger issue: Should commenters be bound by ethical misconduct considerations? Should we consider the possibility of whistleblower misconduct or solely focus on scientist misconduct? (Maybe this is not the correct website for such open-ended discussions.)
