On Sunday, May 5 of this year, Justin Pickett received an email from a “John Smith” with the subject line “Data irregularities and request for data.”
“There seem to be irregularities in the data and findings in five articles that you published together with two surveys,” the anonymous correspondent wrote. “This document outlines those irregularities.”
Pickett was a co-author on only one of the papers, “Ethnic threat and social control: Examining public support for judicial use of ethnicity in punishment,” which appeared in the journal Criminology in 2011, the year he earned his PhD from Florida State University (FSU). The other four papers were published from 2015 to 2019 in Criminology, Law & Society Review, and Social Problems. The only author common to all five was Eric A. Stewart, a professor at FSU.
In dispassionate language, “Smith” listed seven issues with the papers, from “Anomalies in standard errors, coefficients, and p-values” to “Unlikely survey design and data structure.” Of a survey allegedly used in three of the papers, the anonymous correspondent wrote, “None of the articles using the 2013 survey list a funding agency or grant number, which is surprising, because a nationally representative, dual-frame, telephone survey of 2,736 Americans would cost well over $100,000.”
One of the papers, “A Legacy of Lynchings: Perceived Black Criminal Threat Among Whites,” first published earlier this year, had already been corrected. “After acceptance and online publication, but before print publication, Mears et al. (2019, p. 487) changed all of the tables in their paper because of a ‘coding error,’” “Smith” wrote.
Trouble obtaining data
Pickett, now at the University at Albany, State University of New York, asked his coauthors for the full data set for the 2011 Criminology paper, but he “encountered difficulties getting them,” as he related in a 27-page article he posted to the SocArXiv preprint server on the Open Science Framework (OSF) earlier this month. So he looked at his own files from graduate school and “discovered 500 unique respondents and 500 duplicates.” That was a problem, because the paper reported nearly 1,200 survey respondents.
There were other problems. The sample size changed inexplicably from 868 in a manuscript draft to 1,184 in the published version, yet the change somehow did not affect the “means, standard deviations, or regression coefficients.”
On June 6, Pickett sent his coauthors an email outlining these issues. One collaborator, Marc Gertz, “contacted the former director of the Research Network, who confirmed that the survey he ran for us included only 500 respondents.” After that, Stewart sent Pickett a copy of the data, which indeed had a sample size of 500.
Pickett walks through what happened next when he re-analyzed the data, writing at one point:
Dr. Stewart now says there were two surveys conducted for our study, one with 500 respondents and one with 425, and that the results for the combined sample (N = 925) are similar to those in the published article. However, I am uncomfortable with the new results for four reasons. First, I have not seen them. Dr. Stewart has not sent me the data for the second sample, and although he has sent Stata output for the combined sample to the lead author, Dr. Johnson, he has asked him not to share it. Second, the published article reports 1,184 respondents, not 925. Third, our published article lists only one survey company—the Research Network—and one survey. Fourth, Dr. Stewart has refused to tell me who conducted the second survey, and Dr. Johnson has said he does not know who conducted it. This lack of transparency and accountability is why I have decided not to wait for my coauthors to finish their reanalysis before asking for a retraction.
Pickett told Retraction Watch that the editors said they were unlikely to retract the article:
But they will give me an opportunity to publish a comment responding to my coauthors’ erratum. They also said I could preprint my comment. Given that my concerns are unlikely to change, I went ahead and did that. The OSF paper I posted is what I expected to publish alongside my coauthors’ erratum, whenever they are finished with it.
The journal — where one of the paper’s co-authors, Brian Johnson, is an editor — told Pickett that it only retracts when there are legal issues with a paper. According to our database, the journal has never retracted a paper. Johnson confirmed for Retraction Watch that the authors are working on a correction.
The paper has been cited 33 times, according to Clarivate Analytics’ Web of Science.
‘It’s just too much’
Stewart has not responded to requests for comment from Retraction Watch. In a May 25 response to “Smith,” he wrote of the 2019 correction:
There were several counties incorrectly specified as southern in the coding scheme we employed. In effect, these counties should have been specified as non-southern locations. Once we identified the coding errors, we re-estimated all models.
“Smith” did not find that explanation convincing, writing back that four of the changes to the 2019 article “seem mathematically and logically improbable.”
We asked Pickett — who has conducted a survey of U.S. residents on attitudes toward scientific misconduct — whether he thought the problems in the papers were likely to be the result of honest error. He told us:
The discrepancies were absolutely overwhelming. The emails were very long, very detailed, and included web links to tables and evidence. I hold out hope that it is honest error, but it is very hard for me to believe there is a benign explanation for it all. It’s just too much.