Ever since critics began raising concerns about high-profile food scientist Brian Wansink’s work, he has had to issue a series of retractions; he now has his seventh (including one paper that was retracted twice, after the journal removed a revised version), along with 14 corrections. The latest notice, first reported by BuzzFeed, is for a paper that the journal Preventive Medicine had originally corrected earlier this month; the correction notice (1,636 words) was longer than the original, highly cited paper (1,401 words). Following criticism of the study by James Heathers back in March 2017, the authors issued a series of changes, including explaining that the children studied were preschoolers (3-5 years old), not preteens (8-11), as originally claimed. (They made that mistake once before, in another retracted paper.) But critics remained concerned. Yesterday, the journal retracted the paper. We spoke with editor Eduardo Franco of McGill University, who provided us with an advance copy of an accompanying editorial, soon to be published, about the editorial processes behind the two notices, including the moment the journal knew the correction wouldn’t suffice.
Retraction Watch: You write that the initial decision to correct the paper came with “considerable intellectual agony.” Can you say why?
Eduardo Franco: The agony came from the entire set of circumstances that led to the corrections, some of which originated from the Editorial Office’s original 2012 request that the authors condense the original submission. Our editorial request forced them to be economical with the methodological details of the two studies included in the paper. We asked them to combine or eliminate tables, to round off numbers, and to simplify statistics. The submission was already unorthodox to start with: two studies were described in a single manuscript. If you are the author of a full report of ~3,500 words and you are asked to condense it to 1,200 words, a lot of content can be compromised. The accusations originally made by a member of the public, disclosed in the editorial, had been satisfactorily addressed by the authors, but then came their surprising request to correct the record regarding the children’s ages. In all, it was not a straightforward decision. We spent dozens of hours examining documentation and discussing the most appropriate way to protect the scientific record.
RW: You note that one editor received emails expressing concern about your initial decision to correct the paper. How did the community react to the correction?
EF: As Editor-in-Chief, I was on the receiving end of these. Obviously, I could only hear the reactions of the members of the community who wrote to me, and those were negative. Most stated their disagreement forcefully but politely; one individual was less gentle and took aggressive action, writing to my institution because of my refusal to engage with this person.
RW: The journal changed its mind within weeks after the correction was issued, after a funder asked for an “amendment to the disclosure of grant attribution.” What were they asking for? And why would this change, which you say would have required a single sentence, prompt the journal to opt to retract the paper entirely?
EF: It was just an ‘i’-dotting, ‘t’-crossing request from one of the funders. However, issuing another Corrigendum would have made the story much too complicated; the original paper would have been drowned in too many layers amending the record.
RW: Why devote an entire editorial to explaining the decision to retract a paper?
EF: It was a great opportunity to show the complexity behind an editorial decision. As we wrote, we wanted readers to appreciate the difficult process of curating the scientific record. In light of the publicity surrounding these authors, a simple retraction notice would not have captured the entire set of circumstances; we needed to explain the lessons we learned and to be transparent about how we dealt with the process. Most of the time, journals issue terse retraction notices that are not very helpful. I thought that our colleagues at Retraction Watch would have liked how we did it.
RW: As you note, Wansink’s work has been subject to heavy scrutiny of late. In the editorial, you say you tried to block out all of the “noise” of blogs, news stories, and social media criticizing his work, and base your decisions solely on the paper and the authors’ attempts to correct it. Why do you think that was necessary?
EF: It would have been wrong to pass judgment on the basis of material outside our purview, which was limited to the original submission, the challenges made by a member of the public, the authors’ responses to them, the data analysis output and scripts they submitted, the age corrections they requested, and, finally, the minor request from the funder. We could not allow ourselves to be influenced by what was happening outside; that would have been wrong.
RW: You’re giving the authors the chance to resubmit the paper after addressing all the issues. Can you explain the rationale for this?
EF: As we communicated in the editorial, we believe that they should be given a chance to describe their studies with the full space that the research methodology and findings deserve. Their research fits the scope of our journal and thus we would like to see it again “de novo,” without any of the obscuring limitations of the original. The public funded the original studies; they deserve to know what the authors found. Needless to say, should the authors choose to submit a brand new manuscript, we will send it out for review by a few of our experts on the topic of the paper.
Regardless of whether it made a difference to the Wansink paper, this request to shorten the Methods section of a paper is an example of a very regrettable trend in scientific publishing. Science IS methods. People can argue all day about how to analyze results, how to interpret them, or how some experiment fits into the background literature, but what makes a paper unique are the methods used. If the methods are clear, anyone can replicate the experiment, meaning the Methods section is, at the end of the day, the only irreplaceable section in a scientific paper.
Journals have often sacrificed this section first. Some place the Methods at the end, as if it were a footnote. Some relegate many methods to an online supplement. Some print them in a smaller font, or shoehorn them into figure captions. Some encourage them to appear in the Results section. As this becomes the norm, newer investigators learn from what they read, and the art of a clear and well-documented Methods section is being lost. Retractions and failed replications are sure to follow.
I wholeheartedly agree!
In my field of chemistry, it’s upsetting when the description of a reaction in a paper is just a scheme and a yield, not the detailed procedure. In some cases, it’s annoying when the authors don’t even provide the NMR spectra.
I agree to some extent.
I am not sure the real problem is whether the font size is the same or whether the section appears right after the introduction (as opposed to at the very end).
To me, the biggest problem is that sometimes the methods are just not there at all and it is not possible for me to understand what exactly has been done (even when reading the supplement).
https://www.buzzfeed.com/stephaniemlee/brian-wansink-cornell-p-hacking