Retraction Watch

Tracking retractions as a window into the scientific process

Real problems with retracted shame and money paper revealed

with 7 comments

Last month, we reported on a retraction in Judgment and Decision Making that said “problems were discovered with the data.” At the time, corresponding author Wen-Bin Chiou, of National Sun Yat-sen University in Taiwan, told us that a research assistant who had since left the lab hadn’t kept questionnaires used in the research, making replication impossible.

But it turns out that wasn’t the whole story of “Shame for money: Shame enhances the incentive value of economic resources,” to put it charitably. We’ve now heard from people familiar with the case and can provide a fuller account.

The problems with the paper, according to our source,

began with the large size of the effect of the emotion manipulation on the coin sizes, and the high correlation between the coin-size effects and the dependent variables.

The originally published version of the paper was actually a revision of a previously rejected manuscript that answered reviewers’ concerns. But

The current version answered those questions so well as to be implausible…

That, however, didn’t jump out at the time, so the paper was published. Once it was, at least one reader noticed that the data were excessively similar across conditions and that, as in the cases of Lawrence Sanna, Dirk Smeesters, and Diederik Stapel,

data across supposedly independent samples appear too similar to have arisen from random samples.

At this point, Chiou was given the chance to retract the paper, which he did.

As to whether the data had disappeared, as he told us: it turns out Chiou had actually reviewed the data by hand, in response to questions about why the coin-size numbers in the data were all integers even though they were supposed to be averages of four integers. Those concerns weren’t the reason for the retraction. But they suggest Chiou wasn’t being completely forthright with us, so we wanted to set the record straight.

Finally, worth noting: Judgment and Decision Making is one of the few behavioral science journals that requires posting of data. The journal instituted that policy a few years ago, and that requirement clearly led to the discovery of this problem. A good lesson for journals reluctant to take such a step, we think.

Update: See psychology sleuth Uri Simonsohn describe how these problems were uncovered.

Written by Ivan Oransky

September 10th, 2013 at 1:00 pm

Comments
  • omnologos September 10, 2013 at 2:02 pm

    time to boycott dataless journals as non-scientific?

    • JATdS September 10, 2013 at 3:54 pm

      I guess that would account for about 100% of the journals in my field of study. You are not advocating change. You are advocating revolution. Not that I don’t agree. Simply, that it’s not realistic. A journal that had done otherwise until now – even one with a very high IF – might forcibly have to retract every single paper until that time unless reproducible experiments could be performed, which is clearly not realistic. Thus, although the idea is excellent, it’s never going to work, not in 100 years, not at least for the established publishers, most of whose journals do not carry supplementary data files. Perhaps what you are suggesting is that new journals that become established adopt this policy, which might eventually cause a flow of the honest scientists there, leaving those who have something to hide behind in the remaining journals. Once again, I don’t see mainstream following of this plan, even though it is essential. I also don’t see publishers taking on the extra burden of having Supplementary files online for every single paper…

  • QAQ September 10, 2013 at 5:40 pm

    question for 5th grade math…

    “why the coin-size numbers in the data were all integers, even though they were supposed to be averages of four integers”

    you can’t add decimal places to data numbers, can you? technically speaking, isn’t one supposed to round the average of the four integers to the nearest integer or something like that? otherwise, wouldn’t that violate that whole significant figures thing?

    (this is NOT a defense of a bad paper, just one of those… technically speaking, if that was the only real issue, wouldn’t that have been the correct thing to do?)

    • Linsorld September 12, 2013 at 10:40 am

      I guess when you measure *true* integers in an experiment, you have an infinite number of significant figures (e.g. if you count 3 bananas, you can say you counted 3.00000… of them). So giving decimals when you compute the mean of integer variables totally makes sense.

      Another example: when you throw a die, it can only take integer values, but the mean is 3.5. Would you round this value when presenting your data on dice throwing? :)
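      The arithmetic behind the red flag can be checked with a quick simulation (a sketch with made-up numbers — the 20–30 range for coin-size estimates is our assumption, not a value from the paper): the mean of four independently chosen integers only occasionally works out to a whole number, so a data set in which every such average is an integer is suspicious.

      ```python
      import random

      random.seed(0)

      # Hypothetical coin-size estimates: four integer judgments per
      # participant, drawn uniformly from 20-30 (an assumed range).
      # Count how often the mean of the four integers is itself an integer.
      trials = 10_000
      integer_means = 0
      for _ in range(trials):
          estimates = [random.randint(20, 30) for _ in range(4)]
          mean = sum(estimates) / 4
          if mean.is_integer():
              integer_means += 1

      print(f"{integer_means / trials:.1%} of means were whole numbers")
      ```

      Under this toy assumption, only about a quarter of the averages come out whole — far from the 100% reportedly seen in the data.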

  • scott allen September 10, 2013 at 10:35 pm

    you have just got to love the irony of some of these retractions: this person writes a paper for “Judgment and Decision Making” while making poor decisions, papers written in ethics journals that are not “ethical”, accounting papers where the numbers don’t add up, plagiarized papers “on plagiarism” written for journalism magazines — the list goes on. I don’t think the best comedy writer could come up with this stuff.

    • ferniglab September 11, 2013 at 10:08 am

      I enjoy these little ironical vignettes – they are lovely comments on the contradictions of humans!
