To Brian Wansink of Cornell University, a blog post he wrote in November, 2016, was meant as a lesson in productivity: A graduate student who was willing to embrace every research opportunity submitted five papers within six months of arriving at his lab, while a postdoc who declined two chances to analyze a data set left after one year with a small fraction of the grad student’s publications.
But two months and nearly 50 comments on the post later, Wansink — known for so much high-profile nutrition research he’s been dubbed the “Sherlock Holmes of food” — has announced he’s now reanalyzing the data in the papers, and will correct any issues that arise. In the meantime, he has declined requests to share his raw data, citing its proprietary nature.
As Wansink writes in the second addendum to the November blog post, “The Grad Student Who Never Said ‘No’”:
There’s been some good discussion about this post and some useful points of clarification and correction that will be made with these papers. All of the editors were contacted when we learned of some of the inconsistencies, and a non-coauthor Stats Pro is redoing the analyses. We’ll publish any changes as erratum (and we’ll have an analysis script).
Here is a description of the original research, from Wansink’s November post:
When [the graduate student] arrived, I gave her a data set of a self-funded, failed study which had null results (it was a one month study in an all-you-can-eat Italian restaurant buffet where we had charged some people ½ as much as others). I said, “This cost us a lot of time and our own money to collect. There’s got to be something here we can salvage because it’s a cool (rich & unique) data set.” I had three ideas for potential Plan B, C, & D directions (since Plan A had failed). I told her what the analyses should be and what the tables should look like. I then asked her if she wanted to do them.
Ultimately, the student ended up with five papers. This concerned many readers, who posted comments such as:
This is a great piece that perfectly sums up the perverse incentives that create bad science. I’d eat my hat if any of those findings could be reproduced in preregistered replication studies. The quality of the literature takes another hit, but at least your lab got 5 papers out.
What you describe Brian does sound like p-hacking and HARKing. The problem is that you probably would not have done all these sub-group analyses and deep data dives if your original hypothesis had p < .05…I have always been a big fan of your research and reading this blog post was like a major punch in the gut.
Wansink told us he was “hugely shocked” by the community’s reaction to his post, which he said was intended to illustrate
how you can take advantage of an opportunity, versus not taking advantage of an opportunity.
The “null result” (or “Plan A” for the dataset), Wansink explained, was his initial hypothesis that people eat less at a relatively cheap all-you-can-eat buffet than at one that costs more. Instead, he found people ate roughly the same amount, regardless of cost.
In response to the backlash to his blog, Wansink posted an addendum:
P-hacking shouldn’t be confused with deep data dives – with figuring out why our results don’t look as perfect as we want.
With field studies, hypotheses usually don’t “come out” on the first data run. But instead of dropping the study, a person contributes more to science by figuring out when the hypo worked and when it didn’t. This is Plan B. Perhaps your hypo worked during lunches but not dinners, or with small groups but not large groups. You don’t change your hypothesis, but you figure out where it worked and where it didn’t.
Readers also linked to two blogs critiquing Wansink’s article — one by Andrew Gelman, who noted that some of the papers reported different “n” values, and that one of the papers says the graduate student collected the data when, by Wansink’s own account, she did not. Another blog post, by Ana Todorovic at the University of Oxford, notes:
It is a post that aims to accentuate hard work, efficiency, capitalizing on opportunities, a collaborative spirit, and dedication. It ends up highlighting questionable research practices, misrepresenting exploratory research as confirmatory, and a lack of understanding why null results are important.
Wansink told us the “n” for some papers is different because not everyone who participated met all the criteria necessary for each study — for instance, in a study looking at the effect of dining companions on eating behavior, the researchers had to exclude everyone who was eating alone.
Eventually, Wansink posted a second addendum, letting readers know he alerted editors to the “inconsistencies” and would issue any corrections as necessary.
He told us, however, that he doesn’t expect the re-analysis to overturn any of his findings:
I doubt the significance levels will be different at all.
When these contribution statements are asked, our general default is the graduate student or post-doc has usually collected and analyzed the data. When this was submitted, the person doing the formatting and submitting would have legitimately assumed that the grad student had again been the one to collect the data. In reality this had been done about 5 years earlier by a different grad student.
We have contacted the journal asking to rerun the analyses along with publishing an erratum. At that time we will also make this change.
That paper, “Low prices and high regret: how pricing influences regret at all-you-can-eat buffets,” was published in 2015 by BMC Nutrition.
Wansink noted that he’s going to wait to issue that erratum until he gets the results of the reanalysis, so he can make all of the changes at once. He noted that any errata he issues will also add references to the papers published from the same dataset. Since all of the papers were published close to each other, the authors forgot to add those references, Wansink said.
Here are the other papers:
- Eating Heavily: Men Eat More in the Company of Women, Evolutionary Psychological Science, March 2016.
- Peak-end Pizza: Prices Delay Evaluations of Quality, Journal of Product & Brand Management, 2015. Not yet cited, according to Clarivate Analytics’ Web of Science, formerly part of Thomson Reuters.
- Lower Buffet Prices Lead to Less Taste Satisfaction, Journal of Sensory Studies, 2014. Cited three times.
Researchers have already published an analysis of the above papers in PeerJ, entitled “Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab,” which notes approximately 150 inconsistencies among the four papers. (See Gelman’s response here.) They write:
We contacted the authors of the four articles, but they have thus far not agreed to share their data.
Wansink acknowledged to us that he hasn’t shared the data, because it’s “tremendously proprietary.” He added in the second addendum to his blog:
Sharing data can be very useful – like with lab studies and large secondary data sets – and in some instances being willing to do so (or a good reason why not) is a precondition to publishing in some journals. When we collected the data for this study, our agreement to the small business and to the [institutional review board] would be that it would be labeled as proprietary and would not be shared because it contained data sensitive to the small town company (sales data and traffic data) and data sensitive to the small town customers (names, identifying characteristics, how many drinks they had, the names of the people they were sitting with, and so on). This is data that cannot be depersonalized since sales, gender, and companions were central to some analyses. (We had explained this when someone requested the data.) At the time we published these papers, none of the journals had the policy of mandatory data sharing, or we would have published these papers elsewhere.
Since 2005, Wansink has co-authored more than 200 papers, collectively cited more than 2800 times. If he issues any errata to these papers, they would be his first.