Retraction Watch

Tracking retractions as a window into the scientific process

Cornell finds mistakes — not misconduct — in papers by high-profile nutrition researcher

with 7 comments

Brian Wansink

An internal review by Cornell University has concluded that a high-profile researcher whose work has been under fire made numerous mistakes in his work, but did not commit misconduct.

In response, the researcher — Brian Wansink — announced that he has submitted four errata to the journals that published the work in question. Since the initial allegations about the four papers, other researchers have raised numerous questions about additional papers that appear to contain duplicated material. Wansink noted that he has contacted the six journals that published that work, and was told one paper is being retracted.

Here’s the statement from Cornell about its initial probe:

Shortly after we were made aware of questions being raised about research conducted by Professor Brian Wansink by fellow researchers at other institutions, Cornell conducted an internal review to determine the extent to which a formal investigation of research integrity was appropriate. That review indicated that, while numerous instances of inappropriate data handling and statistical analysis in four published papers were alleged, such errors did not constitute scientific misconduct (https://grants.nih.gov/grants/research_integrity/research_misconduct.htm). However, given the number of errors cited and their repeated nature, we established a process in which Professor Wansink would engage external statistical experts to validate his review and reanalysis of the papers and attendant published errata. A report detailing the findings is now available.

Regarding the additional allegations about duplication, Cornell noted:

Since the original critique of Professor Wansink’s articles, additional instances of self-duplication have come to light. Professor Wansink has acknowledged the repeated use of identical language and in some cases dual publication of materials. Cornell will evaluate these cases to determine whether or not additional actions are warranted.

We reached out to Wansink, who has previously spoken to us at length about the criticisms of his work. We heard back from a Cornell spokesperson, who told us:

Professor Brian Wansink and the Food and Brand Lab are working with the leadership of the Cornell SC Johnson College of Business and the university to examine questions that have been raised about prior research, and additional errata and other information may be shared as that review continues. Professor Wansink is declining additional comment until the completion of that effort.

Earlier this year, Wansink announced that he had asked a non-author to review the data in the four papers initially questioned by critics. He explained his rationale to us in February. However, critics raised alarms when they learned Wansink was using a researcher in his own lab to reanalyze the data. Wansink apparently then shifted gears, and engaged an outside firm to analyze his data, as he notes in his statement:

…I have submitted detailed errata and comments to the four Journals that published the papers (links will be available upon publication by journals), and have made available both an overview and detailed table of responses to each of the points raised. My team has also worked to make the full anonymized data and scripts for each study available for review (download below). All of this data analysis was independently reviewed and verified under contract by the outside firm Mathematica Policy Research, and none of the findings altered the core conclusions of any of the studies in question.

Given that the initial criticisms of Wansink’s work weren’t raised until a few months ago, we were surprised by the speed with which Cornell conducted its probe; such reviews usually last much longer. We asked the spokesperson why the university worked so quickly in this case; he told us:

…we take matters of academic integrity seriously. Also, as noted in the statement, the information posted yesterday focuses on claims made regarding the initial four papers. The university continues to evaluate other questions raised, and will determine if additional actions are warranted.

Not surprisingly, not all of Wansink’s critics are reassured by the latest development. As Columbia statistics professor Andrew Gelman writes on his blog:

Here’s the problem. It’s not just those 4 papers, and it’s not just those 4 papers plus the repeated use of identical language and in some cases dual publication of materials.

There’s more. A lot more. And it looks to me like serious research misconduct: either outright fraud by people in the lab, or such monumental sloppiness that data are entirely disconnected from context, with zero attempts to fix things when problems have been pointed out.

If Wansink did all this on his own and never published anything and never got any government grants, I guess I wouldn’t call it research misconduct; I’d just call it a monumental waste of time. But to repeatedly publish papers where the numbers don’t add up, where the data are not as described: sure, that seems to me like research misconduct.

Nick Brown, one of the co-authors of “Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab,” about Wansink’s work, tells Retraction Watch:

What seems to have been a thorough investigation into the four “pizza papers” looks like a good first step.  My co-authors and I look forward to the results of Cornell’s forthcoming (and, we trust, equally-detailed) investigations into the inconsistent descriptive and test statistics that we have identified in numerous other articles and book chapters from the Food and Brand Lab, as well as the other problems that we have reported, such as apparent instances of self-plagiarism, republication of data, implausible sample characteristics, remarkably high and consistent response rates to surveys, and strange numerical patterns in data.

We also believe that this process should be conducted with the highest possible degree of transparency, which we believe ought to involve making the relevant datasets available to investigators other than those appointed by Cornell.  We are particularly keen to obtain a copy of the dataset for the University of Illinois Veterans Survey, which does not appear to be documented on any public-facing web site at the University of Illinois.

The backlash against Wansink’s work was sparked by a blog post he wrote in November, which he intended as a lesson in student productivity: A PhD student who took on every research opportunity submitted five papers within six months of arriving at his lab, while a postdoc who passed up multiple chances to analyze a data set left after one year with far fewer publications. Four of the grad student’s papers resulted from one dataset, and after readers read the blog — and the papers — they raised many questions about whether the papers had fallen victim to p-hacking and other statistical mistakes that can mislead researchers.

In his latest statement, Wansink addresses the subsequent allegations that arose about additional papers:

Since that initial critique was published in January, other researchers and interested writers have identified other areas from my large body of work for additional scrutiny, including instances of possible duplicate use of data or republication of portions of text from my earlier works. Again, I welcome this open conversation and, as I did with the initial four papers, plan to work with the Food and Brand Lab team and my colleagues here at Cornell University to respond in detail to all genuine academic criticisms. In the early stages of that work, I uncovered three instances that occurred before I came to Cornell in which papers I authored were later reworked and submitted to other journals, resulting in the republication of a significant portion of my previously published work. Whatever the circumstances in each case, the responsibility for both academic integrity and respect of copyright are mine, and I have already reached out to the six journals involved to alert the editors to the situation. I have since been informed that one of those papers is being retracted.

Wansink added in the statement that he has also adopted new operating procedures for his lab:

These strict procedures are designed not only to prevent the type of oversights and errors noted here from occurring in the future, but also to create a convenient system for anonymizing and cataloguing data so that this background information can be easily and routinely shared with fellow researchers anywhere in the world.

Update 4/6/17 9:31 p.m. eastern: We’ve heard from another critic of Wansink’s work, Tim van der Zee — aka the Skeptical Scientist. He told us:

Cornell University has finally responded on the issue; this is good, because this is their first public comment on the ongoing discussions regarding Wansink’s work. While I am happy to see that they are working on installing a much better research methodology, I am saddened that this was not yet the case and that these apparent inconsistencies are found in decades of research from Wansink.

van der Zee added:

Similarly, while I am glad that Professor Wansink has acknowledged the repeated use of identical language (including copying an entire bookchapter), I am left with questions about his lack of response regarding the full range of inconsistencies found in his work.

For more details on van der Zee’s criticisms, click here. He concluded:

On a more general note, I hope that we – the scientific community – will take these issues to [heart] and learn from them. For example, requiring data sharing upon publication (or even better, upon requesting peer review) will prevent at least some of these issues. Further vigilance is required to spot and correct inconsistencies with the data and reported statistics. The closed nature of the current publication system allows these kinds of practices to continue, as many issues remain shrouded in darkness. We need to open up science, as good science flourishes with transparency while low-quality science will wither.


Written by Alison McCook

April 6th, 2017 at 3:23 pm

Comments
  • John H Noble Jr April 6, 2017 at 5:42 pm

    “Self-plagiarism” is new to me. Can anyone define it and indicate the basis for censure?!! I mean a reasoned and authoritative rationale for it . . . not idiosyncratic opinion.

    If one has carefully researched and agonized over precise wording of a concept and published it, please tell me why its use in another related publication is “self-plagiarism.”

    Please tell me how one should properly cite its exact usage in another related publication to avoid the accusation.

    • Ivan Oransky April 6, 2017 at 6:16 pm
    • Marc Joanisse April 6, 2017 at 7:15 pm

      Here are some good points to consider:
      https://publicationethics.org/text-recycling-guidelines

      A few sentences used to describe methods are not seen as egregious, especially if there are not many ways to explain a method/process, etc. The issue comes up when it extends to significant portions of introductory text or discussion, or even worse, reusing data without attributing the original source. So for instance, if I write a review article, I shouldn’t be allowed to get credit for it a second time by recycling major sections of it into a book chapter.

    • awk April 6, 2017 at 7:46 pm

      Yet the questions you pose here seem to imply that you’re somewhat familiar with the self/auto-plagiarism debate, which tends to boil down to precisely the stalemate you describe.

      How reasoned and authoritative would you prefer to have it? Is the definition ‘repeating text without reference’ reasonable enough? Which authority would you like to see presiding over this question? BiomedCentral perhaps (https://publicationethics.org/text-recycling-guidelines)? My former university provided hand-outs to students explaining that plagiarism can be quoting others without reference, can be quoting oneself without reference, or even (and least well known, I daresay) copying an argument structure (but not literal text) without reference.

      Self-plagiarism is rarely on its own a reason for retraction or disciplinary action, although it certainly happens; see for instance the Nijkamp case. In that case, as well as the current case, the term seems to be used broadly and as a sort of friendly euphemism for a red flag indicating possible re-use or manipulation of data, rather than the ‘precise wording of a concept that one agonised over’ that you mention. To quote from Gelman’s blog (linked in the text above): “OK, sure, self-plagiarism, no need for us to be Freybabies about that. But, what about that other thing?” where “the other thing” refers to nearly identical tables in two papers that supposedly report on two different samples (one sized n = 153, the other n = 643).

      Self-plagiarism isn’t at the core of what’s being questioned here.

    • Johan April 7, 2017 at 2:12 pm

      It is impossible to understand the concept of self-plagiarism outside the context of academic publishing and scholarship, and outside matters of opinion. It is by nature very dependent on opinion. In academic publishing, productivity of an individual scholar (on which the ability to secure funds is largely based) is not perceived as the amount of words one can write about various subjects and phenomena, but rather as the amount of novel ideas and facts that are considered ‘worth publishing’ as a new paper for other researchers and scholars to read. An academic is only ‘allowed’ to publish a research paper or a review paper or even an opinion paper to help increase their productivity (and thus their income), after a jury of experts (peer reviewers) decide that a submitted text amounts to a ‘new whole of ideas and facts worth publishing’. Re-using previously published content without explicitly labeling it as such is considered (at least) as bad form by most scholars, and indeed by many as a form of plagiarism, because it often amounts to an attempt to heighten the ‘perceived interest’ a submitted manuscript may have, at least in the eyes of a peer reviewer unaware of this previous work (more often the case than not). Opinions vary from ‘this person could have been more careful in drafting his/her new manuscript’ to ‘this person is clearly trying to deceive reviewers into accepting his/her new paper and should be considered a fraud’.

      An analogy that works quite well for fraud in academic publishing is insurance fraud. Claimants will always try to maximize the amount of compensation they receive from insurers. Claiming compensation for non-existent damages amounts to data fabrication and submitting papers with completely made-up data. Claiming compensation for real damages that were actually suffered by someone else, but can deceivingly be presented to insurance experts as one’s own, amounts to plagiarism in academic publishing. Claiming double or triple (or more) compensation for the same loss suffered by yourself amounts to self-plagiarism in scholarly publishing. Legally, it is possible to receive more compensation for a single loss, if the separate insurance policies allow for it or if separate insurance experts don’t find out about the matter during your claim. Nevertheless, most insured persons who see their premiums raised by this kind of behavior will consider it as defrauding the insurance system and as illegitimately using it for one’s own benefit, at the cost of others.

      Like in insurance fraud, a ‘minor’ double claim can be a simple oversight on the part of the insured, or can be an intentional scheme to defraud. Unconscious biases spurred by strong and obvious incentives further blur the line between the two situations.

    • Alan R. Price April 7, 2017 at 10:42 pm

      As I noted at my ORI / AAAS Conference on Plagiarism in 1993, “self-plagiarism” is a self-contradictory misnomer (available on ORI’s http://ori.hhs.gov website, under http://ori.hhs.gov/sites/default/files/aaas.pdf & 3 files). “Plagiarism” is the unauthorized use of another person’s words, ideas or creations without giving that person appropriate recognition and citation. One cannot “plagiarize” oneself. “Self-plagiarism” is not a “valid” word.

      “Duplicate Publication” or “Text Recycling” and “Copyright Violation” for publications, when the reuse is not acknowledged nor authorized, are good terms to use instead.

  • Eli Rabett April 13, 2017 at 12:02 pm

    The key to this may be finding the post-doc who left and why she or he did so — something that Cornell needs to follow up on.
