A psychology journal is correcting a paper for reusing data. The editor told us the paper is a “piecemeal publication,” not a duplicate, and is distinct enough from the previous article that it is not “grounds for retraction.”
The authors tracked the health and mood of 65 patients over nine weeks. In one paper, they concluded that measures of physical well-being and psychosocial well-being positively predict one another; in the other (the now-corrected paper), they concluded that health and mood (along with positive emotions) influence each other in a self-sustaining dynamic.
As a press release for the now-corrected paper put it:
People who experience warmer, more upbeat emotions may have better physical health because they make more social connections.
The paper received coverage in the Daily Mail and PsychCentral.
Here’s the correction notice for “How Positive Emotions Build Physical Health: Perceived Positive Social Connections Account for the Upward Spiral Between Positive Emotions and Vagal Tone,” published by Psychological Science:
This article used data that were also the basis of an earlier article (Kok & Fredrickson, 2010). Kok et al. did not indicate this fact in their article.
The paper was published in 2013 and has been cited 52 times, according to Thomson Reuters Web of Science (which has labeled it a “highly cited paper,” based on the expected rate of citations in that particular field).
The article that it shares data with is “Upward spirals of the heart: Autonomic flexibility, as indexed by vagal tone, reciprocally and prospectively predicts positive emotions and social connectedness,” published in Biological Psychology in 2010.
Stephen Lindsay, the editor in chief of Psychological Science, told us:
It is hard to say whether or not the Kok et al. manuscript would have been accepted had it made clear that it reported a new and different analysis of the same observations that formed the basis of the earlier Kok and Fredrickson (2010) article. Maybe, maybe not.
But in my view this is NOT a case of duplicate publication, because the two articles reported qualitatively different sorts of analyses that tell different albeit closely related stories. They might reasonably be criticized for indulging in piecemeal publication, but piecemeal publication is not grounds for retraction. Kok et al. (2013) should, in my opinion, have made the provenance of their data set clear, and the Corrigendum does that.
The papers share a first and last author: Bethany E. Kok, a researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Germany, and Barbara L. Fredrickson, at the University of North Carolina, respectively. We reached out to both for more information on how the two papers are distinct.
The study itself has received criticism: in “The Elusory Upward Spiral: A Reanalysis of Kok et al.,” published in Psychological Science last year, other researchers argue that the article’s conclusion is “unwarranted,” in part because “the validity of using [vagal tone] as an objective proxy for physical health…is questionable.” They write:
It is imperative that extraordinary scientific claims be supported with solid evidence, especially when they carry health-related messages that are likely to be widely reported by the popular media.
Kok and Fredrickson published a defense of their paper in the journal last year, “Evidence for the Upward Spiral Stands Steady.”
Fredrickson has made a name for herself with the “positivity ratio,” the concept that you’ll flourish if you experience three positive emotions for every negative one. However, one of her previous papers, which formed the basis of her book “Positivity,” was partially withdrawn in 2013.
Although duplication is a frequent cause of retraction, it doesn’t always lead to one. In another recent case, after considering whether two publications were redundant, editors published a letter explaining why they were keeping both papers in the scientific record.
A correction seems like the proper action in these types of cases. There is no need to retract if the only lapse was a failure to disclose the reuse of data and the paper contributes new knowledge (in spite of its being based on reused data). But it is crucial that readers be alerted to the data reuse. As I have said many times: the provenance of data should never be in question.
I disagree with the journal and with Roig. Even if “most” (what percentage is that, anyway?) of the remaining data is original, the fact that reused data was not disclosed is a serious violation of the guarantees of originality authors make when they submit a manuscript. Imagine that 10% of the data in a subsequent publication is reused, so only 90% is original. In terms of volume, spread over 10 papers, that’s like a fast-food joint where you order 10 and get 1 free. The literature fills with redundancies, and scientists (in this case, psychologists) get rewarded for unoriginal work. The entire playing field becomes unfair for those who strive for 100% originality.
Dear Anonymous, of course I agree that, with few exceptions, salami or piecemeal publication should be strongly discouraged. Moreover, I am much more comfortable when the decision to split a large project into two or more publications is made by an editor rather than by the authors themselves. In either case, what is most important is that each paper (i.e., each salami slice) that is part of the larger ‘salami’ include full disclosure about the data, patients, samples, etc., that are common to all related publications. FWIW, in the social sciences it is not uncommon to compare newer data to earlier published data (e.g., attitudes toward X or Y across time). And, of course, there are also longitudinal studies in the biomedical, health, and social sciences. In sum, the key issue should always be transparency, transparency, transparency on the part of the authors!
Rather than setting arbitrary rules on how much data you can reuse (in some fields datasets take decades to collect, so you can’t possibly limit yourself to one publication per dataset), stop rewarding scientists based on the volume of their work and look at its novelty and importance instead.
I would guess most people on RW know people in their fields who salami-slice, and also know that reading their papers is useless because they’re just rehashing things they did years ago, over and over again. I personally don’t take such scientists very seriously, and neither should funding bodies or hiring committees.