Mitch Brown was preparing last August to launch a follow-up study to a 2021 paper on coalitions when he found something in his computer code that sent his stomach to his shoes.
As Brown, an experimental psychologist at the University of Arkansas, recalled for us:
When I looked at the Qualtrics file for how things were coded, I noticed there was a discrepancy from what I thought the results were. I had collected these data about a year prior to getting them published and believe that I forgot to double-check what each number meant previously. As I corrected the issue, it became clear that one of the primary findings was not what the corrected data supported.
Brown said he was devastated and ashamed by the discovery – a familiar brew of emotions for other scientists in the same position:
As you could imagine, this was a terrifying experience. I lost sleep and felt terrible for making this mistake. My biggest concern was disappointing my colleagues who have invested considerable time in me. I remember telling several colleagues in my department about the embarrassment and shame that I felt in a way that resembled wanting penance. It wasn’t difficult, though, to do the right thing despite the fear it evoked. Science should be self-correcting, and I need to do my part to ensure the greatest accuracy of my findings.
Brown said his colleagues:
were certainly disappointed that we would need to go through the process of replacement because of the extra work it would entail, but they had never once wavered in their support for my decision. We all agreed this was appropriate. I was warned about the potential backlash this could have but the consensus was that I did the right thing. Other colleagues told me stories about it happening to them and it made me remember that science is conducted by humans who make mistakes. The editor who handled the paper actually told me that the editorial board wished more people would be like me. I don’t think I was really surprised because I expect honesty to be rewarded in my field, but the fear of being conflated with Diederik Stapel was where my mind went initially. Completely ridiculous thoughts in retrospect.
Here’s the notice for the paper, “Socially dominant women strategically build coalitions of strong men in resource-rich environments,” which appeared last year in Personality and Individual Differences:
The authors have requested this paper be retracted following the first author’s discovery of a coding error when intending to use the same experimental stimuli and programming for a follow-up study. The first author discovered the numeric values to code for two targets were reversed (i.e., strong target coded as weak and weak target coded as strong), thus necessitating a recalculation of values affected by this error. Recalculation and reanalysis led to the results changing substantively from what was originally reported. The authors, in consultation with the PAID editors, determined that retraction is the most appropriate step. The authors apologise for this mistake and intend to resubmit a corrected version of this manuscript for consideraton [sic] for publication in the future.
Brown and his colleagues are now working on a new version of the article, which was cited once, according to Clarivate Analytics’ Web of Science:
Peer reviewers were highly receptive to my desire to correct this problem and the editor who handled this paper originally was very kind in allowing me to move forward. The replaced paper should be online any day now where I note what happened for a fully correct record.
Brown offered a bit of advice for other researchers based on his trying experience:
I would say some lessons for researchers in a similar position would be to double-check coding schemes before analyses, especially if there is a delay between collection and writing. I also want to communicate that this process has been relatively rewarding and instills a sense of hope for the future of science. If one were to make an honest mistake and work to correct the issue, it feels as though the system has rewarded honesty. Of course, I did not want to make this mistake, but this has illuminated how supportive the scientific community can be to correcting honest mistakes.
Hat tip: Oliver Schultheis
Like Retraction Watch? You can make a one-time tax-deductible contribution by PayPal or by Square, or a monthly tax-deductible donation by Paypal to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
I admire Dr. Brown’s attitude and courage. I wish I could have this courage when finding big problems in my own manuscripts. Everybody makes mistakes, and I really believe this kind of mistake, with nothing related to faking data, manipulating data, reusing figures, etc., should not be punished or become a career killer.
I hope the scientific community can find the right way to deal with this kind of mistake, rather than just trying to destroy this researcher’s career. Otherwise, nobody would want to correct the mistakes they have made in their papers.
It’s very nice to read that researchers are doing the right thing and publicly talking about it. Certainly helps in lowering the stigma around retractions and corrections! Kudos!
Like others have noted, it’s a pleasure to read the perspectives of someone who did the right thing. I hadn’t read Mitch’s research before, but reading his comments above gives me deep respect for him. I hope his honorable way of handling a mistake helps him in his career. He is definitely a role model for others.
Chapeau, chapeau to Dr Brown! I have immense respect for him & his team for their commitment to accuracy & integrity in science. I wish there were more scientists like him who were so forthcoming about correcting the scientific record.