Rejection overruled, retraction ensues when annoyed reviewer does deep dive into data

As a prominent criminologist, Kim Rossmo often gets asked to review manuscripts. So it was that he found himself refereeing a submission by a pair of Dutch researchers — Wim Bernasco and Remco van Dijke, of the Netherlands Institute for the Study of Crime and Law Enforcement, in Amsterdam — on the buffer zone hypothesis, the idea that criminals avoid committing offenses near their own homes.

The paper, a systematic review submitted to Crime Science, analyzed 33 studies; according to the authors, 11 confirmed the hypothesis and 22 rejected it.

Rossmo, who holds the University Chair in Criminology and directs the Center for Geospatial Intelligence and Investigation in the School of Criminal Justice and Criminology at Texas State University in San Marcos, told us:

I recommended rejection because of what I thought was a fundamental methodological error (known as the ecological fallacy – using grouped data to make individual inferences).  
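
To make the fallacy concrete, here is a minimal simulation sketch; the setup and all numbers are invented for illustration and are not taken from the retracted review. Every simulated offender below chooses offense distances uniformly over a personal range, so no individual shows distance decay, yet the pooled histogram declines steadily with distance:

```python
# Hypothetical illustration of the ecological fallacy; all numbers are
# invented and do not come from the retracted review.
import numpy as np

rng = np.random.default_rng(1)
n_offenders, n_offenses = 3000, 30

# Each simulated offender has a personal maximum journey-to-crime range (km).
max_range = rng.uniform(1.0, 20.0, size=n_offenders)

# Individual level: distances are uniform on [0, R_i] for offender i --
# flat, with no distance decay and no buffer zone for any individual.
distances = rng.uniform(0.0, max_range[:, None], size=(n_offenders, n_offenses))

# Grouped level: pooling every trip produces a histogram that declines
# steadily with distance -- a "decay" pattern that holds for no individual.
counts, edges = np.histogram(distances.ravel(), bins=10, range=(0.0, 20.0))
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    bar = "#" * round(40 * count / counts.max())
    print(f"{lo:4.1f}-{hi:4.1f} km | {bar}")
```

The same logic runs in reverse: the presence or absence of a dip near zero in an aggregated distance distribution says little about whether any individual offender maintains a buffer around home, which is the inference Rossmo objected to.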

But the editors spurned Rossmo’s advice and opted to publish the review, which appeared in May 2020 as “Do offenders avoid offending near home? A systematic review of the buffer zone hypothesis.”

Rossmo then asked if the journal would accept a “critical response piece” in which he could lay out his concerns about the article: 

They said they would, so last summer I began working on that paper.  I began by reading the 33 articles that comprised the basis of the systematic review.  When I did so, however, I discovered different results from what was reported in the systematic review (e.g., simulated data, mixed findings, opposite results, violations of the posted selection criteria, etc.).  My analysis was difficult because the review only provided aggregated data unlinked to any specific study.

I brought this issue to the attention of the editors and the lead author last July, and asked the latter for more information on his data and coding, and the specific location in each article (i.e., table, graph, or page number) that was the source for their conclusion.  He told me they never recorded that detail and was unable to provide that information.

Rossmo said that at this point he was: 

getting annoyed and decided to do a deep dive.  I read each article in detail, recorded the results, noting the exact page source.  I wrote up a full replication report, concluded the article should be retracted, and sent it to the editor and the first author.  This time the author said he would take another look at their work.  A couple of months later, he responded that he found even greater discrepancies and agreed the paper should be retracted.  Apparently, all the coding for the systematic review was done by the second author.  The journal editors also agreed to the retraction.

Rossmo’s annoyance extended to Springer Nature, which publishes Crime Science:

Springer Nature then took three months to deal with this, and then another month to actually publish the retraction.  Springer is a member of COPE [the Committee on Publication Ethics], but they certainly failed to follow a timely retraction policy.

(To be fair to the publisher, while four months isn’t exactly speedy, it’s far from the slowest response we’ve seen.)

Rossmo said he reframed his critique as a more general article about statistical methods and submitted it to the journal, which requested revisions that he has since made: 

All in all, this was a very “interesting” experience.  Retractions are challenging in that it is virtually impossible to put the genie back into the bottle.  With some work, most of the citations can be tracked down, but there is no way to reach out to all the readers of the problematic article.

Here’s the retraction notice: 

The authors have retracted this article (Bernasco & van Dijke, 2020). After publication, Professor Kim Rossmo reported that his own analysis of the data failed to replicate the published findings. Professor Rossmo claimed that 16 of the 33 publications analyzed did not meet the authors’ own inclusion criteria.

The authors attempted to replicate their own findings by re-assessing the 33 publications. Based on the results, they concluded that Professor Rossmo’s concern was fully justified. Some publications were based on simulation rather than empirical analysis. Some publications did not provide information on the complete distribution of the home-crime distance. Some publications did not measure or did not report distances with sufficient precision. In sum, the findings of the article are not reliable.

The authors apologize to the readers and the Editors of Crime Science for any problems caused by drawing conclusions not sufficiently supported by evidence, and they thank Professor Rossmo for bringing the issue to their attention.

During all stages—submission, review procedure, and communications after publication—the authors and Professor Rossmo have provided complete access to data and to the methods used for selecting and assessing them. Both authors agree to the retraction.

Bernasco told us that the experience was “painful,” but instructive: 

Some of the lessons are obvious: never underestimate the complexity of seemingly simple research questions and methodologies (e.g. systematic reviews) and never skip controls (e.g., reliability assessments).

Finding out and having to admit that we as authors had made errors was unpleasant, but the way it was handled left us room to make up our minds, and time to get back to the data and attempt to replicate our own work. The ‘whistleblower’ did not make his concerns public but contacted us and the editors of Crime Science about his inability to replicate our findings. This was painful, but becoming the target of a public debate would have been much worse.

This could also be read as advice to those who identify errors (or worse) in the work of others: Do not immediately put the authors on the stand in public, but contact them and the journal editors first. The self-correcting power of science should not become a witch-hunt.

Bernasco also offered the following advice to other researchers whose work has come under similar scrutiny: 

try to be as open as possible about everything that they have done, or failed to do. This may initially be painful and sometimes against their own short-term interests, but they will be doing science a favor and also themselves, because it allows them to ‘close the chapter’ more quickly and get back to work.

From the very start my co-author Remco van Dijke and I have been completely open about our research methods, our errors, and the retraction process. Not only to the ‘whistleblower’ and the editors of Crime Science, but also to our colleagues and supervisors. We immediately informed our supervisors and gave a brief lecture for our colleagues at my institute (NSCR) about the article and its retraction, so that they could also learn from it. I asked my institute to publish a note on their website (with a link to the retraction notice), which they have done: https://nscr.nl/en/research-is-human-work-article-retraction/.


One thought on “Rejection overruled, retraction ensues when annoyed reviewer does deep dive into data”

  1. I really appreciate Wim Bernasco’s suggestion “Do not immediately put the authors on the stand in public, but contact them and the journal editors first. The self-correcting power of science should not become a witch-hunt”, for I believe that some cases of ‘trial-by-Twitter’ are like witch-hunts. But what circumstances qualify as ‘putting authors on the public stand’? Should questions such as those that were initially raised in this case not be raised in fora such as PubPeer or RW, as opposed to sites like Twitter? How long should a critic have to wait when authors and editors promise to look into the matter but end up taking a disproportionate amount of time to provide an adequate response? Who determines whether their response is even ‘adequate’?

    Is there any meaningful guidance on any of these questions?
