Two blog posts are shining additional light on a recent retraction that included some unanswered questions — namely, the identity of the researcher who admitted to manipulating the results.
To recap: Psychological Science recently announced it was retracting a paper about the relationship between the words you use and your mood after a graduate student tampered with the results. But the sole author — William Hart, an assistant professor at the University of Alabama — was not responsible.
The post raised some questions — for instance, who was the graduate student, and if his or her work was so central to the paper, why was he or she not listed as an author? Hart declined to identify the student, but two new blog posts — including one by one of Hart’s collaborators at the University of Alabama — are providing more details.
In our original post, Hart told us he discovered the fraud after he posted the student’s data from another project online, and an outside expert raised concerns. In a recent post, Rolf Zwaan at Erasmus University Rotterdam identifies himself as the outside expert who questioned the data.
In his post, Zwaan refers to the student as “Fox”; he clarified to us that it is a pseudonym:
I didn’t want to use the person’s actual name until it was clear they were the only culprit.
Zwaan writes that he discovered the problems with Fox’s data while trying — unsuccessfully — to replicate one of Hart’s papers, on which Fox was not listed as a co-author. Since Zwaan was publishing his findings, Hart and his co-author submitted a commentary that included new data, and listed Fox as the first author. Once Hart’s team uploaded the new data to the Open Science Framework, Zwaan spotted numerous duplications — more than 70 among a list of 194 subjects. The two sides went back and forth, and Zwaan told us the process became “contentious”:
The editors of the journal [publishing the replication effort] did their very best to be evenhanded in this difficult situation. I admired this but it meant that I had to mobilize all my co-authors of the replication paper to get our point across. I’m not sure what went on on the other side, but there clearly was an unwillingness to believe the data were fake.
Last week, a colleague of Hart’s, Alexa Tullett, posted another essay on Zwaan’s site, saying Hart asked her to verify the data after Fox admitted he had deleted some data for “confidentiality” issues:
I only started to genuinely question Fox’s intentions when I ran the key analysis on the duplicated and deleted cases and tested the interaction. Sure enough, the effect was there in the duplicated cases, and absent in the deleted cases. This may seem like damning evidence, but to be honest I still hadn’t given up on the idea that this might have happened by accident. Concluding that this was fraud felt like buying into a conspiracy theory. I only became convinced when Fox eventually admitted that he had done this knowingly. And had done the same thing with many other datasets that were the foundation of several published papers—including some on which I am an author.
Fox confessed to doing this on his own, without the knowledge of Will, other graduate students, or collaborators. Since then, a full investigation by UA’s IRB has drawn the same conclusion.
She describes what she believes was the student’s motivation:
Hindsight’s a bitch…I wish we had treated our data with the skepticism of someone who was trying to determine whether they were fabricated, but instead we looked at them with the uncritical eye of scientists whose hypotheses were supported.
Fox came to me to apologize after he admitted to the fabrication. He described how and why he started tampering with data. The first time it happened he had analyzed a dataset and the results were just shy of significance. Fox noticed that if he duplicated a couple of cases and deleted a couple of cases, he could shift the p-value to below .05. And so he did. Fox recognized that the system rewarded him, and his collaborators, not for interesting research questions, or sound methodology, but for significant results. When he showed his collaborators the findings they were happy with them—and happy with Fox.
We spoke with Tullett, who told us she is retracting one paper on which Fox is a co-author, and has withdrawn a paper from the European Journal of Social Psychology that had been accepted but not yet published.
It was accepted, and then we found out about these data problems immediately after it was accepted. So I contacted the editors and asked them not to publish it.
Tullett said she didn’t know how many other retractions were coming from other co-authors.
She told us:
This experience made me think that…fraud is more prevalent than it seemed.
What’s more, she said:
It also reinforced for me the idea that the incentive structures in the field are problematic…this is an extreme case of what the consequences of that can be.
Hart and a spokesperson for the university declined to comment further.