Journal hasn’t retracted ‘Super Size Me’ paper six months after authors’ request

Six months after the authors of a 2012 paper requested its retraction, a marketing journal is still investigating the concerns, Retraction Watch has learned. Other researchers had failed to replicate the findings – that consumers choose portion sizes based on their desire to signal higher social status – and discovered anomalies in the data.

The paper, “Super Size Me: Product Size as a Signal of Status,” appeared in the Journal of Consumer Research and attracted media attention from The New York Times and NPR, among other outlets. The lay media interpreted the findings as helping to explain the rise in obesity in the United States. The article has been cited 180 times, according to Clarivate’s Web of Science. 

The same authors requested the retraction of another paper, “Dynamics of Communicator and Audience Power: The Persuasiveness of Competence versus Warmth,” published in 2016 and cited 61 times, which the journal is also still investigating.

Problems with the portion size paper date back to 2020, when a team of researchers posted a preprint, later published in Meta-Psychology, about their unsuccessful attempt to directly replicate one of its experiments. 

“I was slightly disappointed but not too surprised,” said Ignazio Ziano, now a professor at the University of Geneva in Switzerland, who oversaw the replication by a master’s student of his when he was a professor at the Grenoble Ecole de Management in France. “I thought this was just another unreplicable finding like so many others that are published in consumer behavior journals.” 

The group not only failed to replicate the results, but they also found inconsistencies in the statistical calculations. Ziano told us: 

We realized that there had to be a statistical mistake in the data reporting (some standardized effect sizes, i.e., the differences between product size conditions, did not line up with the p-value and the test statistics within the same comparisons between product size). 
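The mismatch Ziano describes is mechanically checkable: for an independent-samples t test, the standardized effect size is pinned down by the test statistic and the group sizes, so a reported Cohen's d can be compared against the d the reported t implies. A minimal sketch in Python, using hypothetical numbers rather than figures from the paper:

```python
import math

def d_from_t(t: float, n1: int, n2: int) -> float:
    """Cohen's d implied by an independent-samples t statistic:
    d = t * sqrt(1/n1 + 1/n2)."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Hypothetical numbers: a reported t = 2.10 with 50 participants per
# group implies d ≈ 0.42; a reported d far from that value flags an
# internal inconsistency worth querying.
implied_d = d_from_t(2.10, 50, 50)
print(round(implied_d, 2))  # prints 0.42
```

Tools such as statcheck automate the analogous check between reported test statistics, degrees of freedom, and p-values.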

We’ve made Ziano’s full comments about his experience with the replication study available here. He told us that when he and his coauthors emailed David Dubois, one of the authors of the original study who is now a professor at the Fontainebleau, France campus of INSEAD, to ask for the original data so they could recalculate the effect sizes, Dubois replied “that he was in the middle of moving and that he did not have the data handy at the moment.”

They did not follow up, Ziano said, because “my previous experiences in replication papers suggested that original authors (at least those who have already not shared data and materials in the paper) are not super keen to share data and materials, and even less keen to know that their study does not replicate.” (That’s not always the case, however.) 

After Ziano and his team published their findings, another marketing researcher, Aaron Charlton, analyzed the paper and found more numerical inconsistencies in the statistical calculations. Charlton, who maintains OpenMKT.org, a site aimed at improving evidence quality in academic publications in marketing and consumer behavior, wrote up his findings in a report he shared with Retraction Watch. 

Erroneous P-values are “a fairly universal problem throughout the paper,” Charlton wrote in his report, which also identified “impossible means” using the GRIM test developed by data sleuths Nick Brown and James Heathers.
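The GRIM test itself is simple enough to sketch: with n responses on an integer scale, the true mean must be some integer sum divided by n, so many reported decimal values are arithmetically impossible. A minimal single-item version in Python (the published test also covers multi-item scales; the numbers below are hypothetical, not drawn from the paper):

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a reported mean is possible for n integer responses.

    With n integer-valued responses, the mean must equal some integer
    total divided by n; we test whether any nearby total rounds to the
    reported mean at the given precision.
    """
    target = round(reported_mean, decimals)
    # Only integer totals near n * reported_mean can possibly match.
    candidate = round(reported_mean * n)
    for total in (candidate - 1, candidate, candidate + 1):
        if round(total / n, decimals) == target:
            return True
    return False

# A mean of 3.48 is impossible with 17 integer responses
# (58/17 ≈ 3.41, 59/17 ≈ 3.47, 60/17 ≈ 3.53):
print(grim_consistent(3.48, 17))  # prints False
```

A value like 3.47 with the same n would pass, since 59/17 rounds to 3.47.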

Dubois was responsible for the data collection and analysis, the authors told Charlton, and shared with him a spreadsheet of P-values in which Dubois noted that none of the errors substantively changed the conclusions.

Charlton informed the authors of his findings about a year ago. While they were looking into the issues, another analyst, who remains anonymous, found “huge problems” in the 2016 paper, Charlton said.

Last October, the two senior coauthors of the papers, Derek Rucker of the Kellogg School of Management at Northwestern University in Chicago, and Adam Galinsky of Columbia Business School in New York City, requested that the Journal of Consumer Research retract both papers, Charlton told us. Neither has been pulled. Dubois is an associate editor of the journal. 

“The JCR policy board has assigned this matter to a special committee of experts who are thoroughly investigating it,” Carolyn Yoon, the chair of the board and a professor of management and marketing at the University of Michigan, Ann Arbor’s Stephen M. Ross School of Business, told Retraction Watch. “We are still waiting on the report, as it took longer than expected. We hope to have a decision by the end of this month.”

Yoon also told us: 

It is critical that we get this right. Hence the commentaries, speculations, and foregone conclusions that are being shared are unfortunate and unhelpful given these parties do not have full information or knowledge about the case.

As for Dubois being listed as an AE, it has not yet been determined that he did anything wrong.

Rucker, Galinsky, and Dubois declined to comment while the journal’s investigation is ongoing. 

“I disagree that they are ‘thoroughly investigating’ it,” Charlton told us when we shared Yoon’s comments with him. He said: 

In my opinion a thorough investigation would require interviewing the whistleblower who discovered and reported the issues. That’s me. I’ve not heard from them. Would a police investigation be thorough if they didn’t take the time to interview the accuser/key witness? This distorts the meaning of the word “thorough.” 

Yoon’s comments about “commentaries, speculations, and foregone conclusions” seemed “aimed at discrediting whistleblowers,” Charlton said. 

He added: 

My understanding is that all three coauthors jointly requested retraction of the 2012 paper back in mid-October of 2022. Why would the journal say “no you’re wrong, your paper is fine.” I first made the authors aware of the issues in early February of 2022. That was 14 months ago. Seems like it’s time for JCR to act. Every day they delay, people are citing the paper, grad students are trying to replicate it as part of their thesis, etc.


10 thoughts on “Journal hasn’t retracted ‘Super Size Me’ paper six months after authors’ request”

  1. I’m writing anonymously as there is already too much speculation about motivations, connections, etc.

    Let me confine my comments to Charlton’s. It’s remarkable that he contends that a journal interview a “whistleblower”, since this is exactly what they should *not* do. What the journal is apparently doing is empaneling disinterested experts to comb through the record and weigh evidence as a jury might. It’s patently obvious that whistleblowers, no matter how earnest, fail the most fundamental test of being “disinterested”. They have a horse in the race, which is exactly what JCR is supposed to rule out. Conducting an investigation, which can have serious implications for someone’s career, appropriately takes a while. In the meantime, this thread on Retraction Watch is a suitable interim measure to ensure that researchers take the findings in the Dubois papers with a grain of salt.

    Charlton’s (pretty terrible) analogy, “Would a police investigation be thorough if they didn’t take the time to interview the accuser/key witness?” paints him as an aggrieved party or a differentially implicated victim here. He is neither.

    1. The analogy is accurate. Your interpretation is inaccurate. A police investigation reaches out to accusers and witnesses because they have information. It does not reach out to them because they are the aggrieved party or differentially implicated victims. The law does not care at the time of the investigation who was injured or for what reason, other than to establish what happened. The journal needs to reach out to the whistleblower for the same reason: to get from them the complete set of information they have on the issue.

  2. Hi Anonymous,
    Charlton is not a victim nor is he painting himself as such. He is an accuser and a whistleblower whom the panel assembled by JCR still has not contacted, hence his frustration. While he might not be disinterested, he brought up the issue.

    1. If you read my comment again, I tried to be pretty clear about this being an analogy, and said this literally: “Charlton’s (pretty terrible) analogy, “Would a police investigation be thorough if they didn’t take the time to interview the accuser/key witness?” paints him as an aggrieved party or a differentially implicated victim here.” Charlton never called himself these things directly, but that was his role in his own analogy.

      Anyone can blow a whistle. People calling into question the veracity of a scientific claim are not routinely involved in ascertaining whether that claim is accurate. In fact, the authors themselves needn’t, and probably shouldn’t, be. Rather, a *disinterested* panel of experts needs to carefully weigh the evidence, and from what I’ve heard, this is what JCR is doing. Including the authors or others who have a vested interest in the outcome is bad practice and horrible “optics”.

      I agree that the panel is taking a long while. I do not agree that they are not doing their job well because Charlton, who (shall we say) has a history of calling out research practices for reasons unknown, believes he isn’t suitably involved.

      1. “Anyone can blow a whistle” you say.

        … But no one does. Marketing journals routinely publish p-hacked/fabricated papers, and no one calls them out for it. Charlton’s not the most tactful person around, but at least he’s doing something.

        And yes, they should contact the whistleblowers (Aaron and the other analyst), if only to clarify the analysis they submitted and make sure they understood everything. Hard to see how a thorough investigation wouldn’t go through this basic step.

  3. Incidentally, it would be reasonable for the panel to contact you, specifically, since you worked directly on the replication of the Supersize Me paper. At the same time, one could plausibly suggest you might have a vested interest in upholding the findings in your own work; but the panel would be reasonable in consulting your work or you directly. I have no idea if that has happened.

    1. It seems like you are reading a lot into my words and Aaron’s, things that we did not say. I suggest being careful with interpretations, lest you be led astray. The panel is taking a long while, and they have not contacted people with knowledge of the paper’s irregularities. This is definitely a source of worry. Further, all authors requested retraction of the paper, which in my view is more than enough for retracting it. Another definite source of worry, considering that the journal hasn’t retracted it yet. In the meantime, the authors are deciding which research is good or bad in consumer research, a third source of worry.

  4. What is most surprising and upsetting is not that the journal has not reached its final determination yet but that the journal has not detailed its process publicly.
    1. What specific issues are they looking into?
    2. How will they look into them? Are they commissioning replication studies? Are they conducting the analysis again on the originally collected data? Have they engaged a third party to oversee the process?
    3. At what point, is the evidence sufficient to support retraction?
