Ex-cops tangle with journals over strip clubs and sex crimes

Brandon del Pozo

A study by two economists who found that opening strip clubs or escort services caused sex crimes in the neighborhood to drop contains “fatal errors” and should be retracted, argues a group of past and current law enforcement officers, including three academics.

“None of us are prudes or even anti-strip club,” Peter Moskos, a professor at John Jay College of Criminal Justice in New York City and a former Baltimore police officer, wrote in a thread on X (formerly Twitter). “But if you claim strip clubs reduce sex crimes – and by 13 percent! – you’re delving into serious policy issues.”

He added: “This is very typical of academics getting out of their field. They have second-hand data. They crunch the numbers … They don’t know what the data mean.”

The study, titled “The Effect of Adult Entertainment Establishments on Sex Crime: Evidence from New York City,” was published in July 2021 in The Economic Journal.

“We find that these businesses decrease sex crime by 13% per police precinct one week after the opening, and have no effect on other types of crime,” wrote Riccardo Ciacci, of Universidad Pontificia Comillas, in Spain, and Maria Micaela Sviatschi, of Princeton University, in their abstract.

What the data suggest, the duo speculated in an opinion piece in The Washington Post, is that “Men otherwise inclined to commit assaults might instead spend more time in strip clubs or hiring escorts.”

Predictably, the findings drew a bit of media attention. But in the view of Moskos and his colleagues – two former police officers turned academics and the commander of the crime-strategies unit in the New York Metropolitan Transportation Authority Police Department – they should never have passed peer review.

As the four told the journal in an August 2021 email seen by Retraction Watch, after corresponding with the study authors, “ultimately we could not reconcile the study’s conclusions with the distinct limitations of its data.”

In the email, they pointed to three major concerns:

  • The study relied on the date a business was registered with the state as a proxy for when it opened. But for strip clubs, licensing and inspections can lead to delays of months between registration and opening. 
  • The study used NYPD Stop, Question, and Frisk data as a proxy for sex crimes. These data have been heavily criticized; Moskos and his colleagues claim that more than 94% “are records of people who were legally innocent of a crime at the time and place of the encounter.” 
  • The study’s baseline data on strip clubs missed a number of establishments Moskos and his colleagues knew to exist. (They name-check several, including Wild Wild West, Corrado’s and Sweet Cherry.)
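To make the first two concerns concrete, here is a minimal sketch, in Python with pandas, of the kind of check the critics describe: comparing registration dates against verified opening dates, and tallying the share of stop records that ended without an arrest or summons. The file and column names are hypothetical, not the actual layouts of the SQF data or state registration records.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only; the real NYPD
# SQF files and state business-registration records use different layouts.
clubs = pd.read_csv("club_registrations.csv",
                    parse_dates=["registration_date", "verified_opening_date"])
stops = pd.read_csv("sqf_stops.csv")

# Concern 1: how long after state registration did each club actually open?
# A typical lag of months would undercut a one-week-after-"opening" window.
lag_days = (clubs["verified_opening_date"] - clubs["registration_date"]).dt.days
print(lag_days.describe())

# Concern 2: what share of stops recorded under a suspected sex offense ended
# with no arrest or summons, i.e. involved someone who was legally innocent?
sex_stops = stops[stops["suspected_offense"] == "sex_crime"]
no_enforcement = ~(sex_stops["arrest_made"] | sex_stops["summons_issued"])
print(f"Share of stops with no arrest or summons: {no_enforcement.mean():.1%}")
```

If the typical lag runs to months and the no-enforcement share exceeds 90 percent, neither proxy tracks what the paper needs it to track – which is the crux of the critics’ complaint.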

In a preprint posted to SSRN this month and submitted for review to Police Practice and Research, they boiled it all down to this: 

What the study has done is measure changes in police encounters with innocent people in the week after an entity has filed the paperwork necessary to start the processes that will eventually allow it to open a strip club.

But The Economic Journal apparently failed to get the point. In an email seen by Retraction Watch, Editor-in-Chief Francesco Lippi replied to the group:

The journal is open to considering comments on published papers. In the case of empirical papers, comments are expected to be based on systematic analyses of the data, highlighting differences between the results obtained in the original paper and in the new analysis. 

“We said the data shouldn’t have been used in the first place, and their response was to suggest we re-analyze the data and point out differences,” Brandon del Pozo, the corresponding author on the preprint and an assistant professor at Brown University, told Retraction Watch. (Del Pozo is also a former police officer.)

Reached for comment, Lippi did not directly address the issue of the data being unfit to address the study’s hypothesis, but said:

This criticism was based on a data analysis that used a small subsample of observations for which the authors were able to gather more precise measurements. 

He added that he had not heard back after sending the email welcoming a re-analysis of the data. “As far as I am concerned, a serious (scientifically sound) confutation of the original thesis has not been given yet,” Lippi told Retraction Watch.

Del Pozo said that after failing to move forward with Lippi, he and his colleagues sent their concerns to the Journal of Comments and Replications in Economics. But the journal: 

declined to publish them based on a response from the authors that we frankly found a little bewildering. They used an irrelevant case of a person buying an existing business from someone else to say we didn’t prove strip clubs don’t open on the day they are registered with the state. Then they said their robustness check, which used data that principally derives from the main dataset, showed similar results at the precinct-month level, so their results were good.

Ciacci told Retraction Watch in a brief email that the preprint contained “plenty of imprecise information about” his and Sviatschi’s paper:

This is why at least two journals, to the best of my knowledge, decided not to publish that article. Indeed, we were contacted by JCRE (Journal of Comments and Replications in Economics) to reply to their paper. After reading our reply the Editors of JCRE opted to reject their article. Likewise, the Economic Journal decided to reject their article.

Ciacci, who said he was on vacation, did not reply to a follow-up email asking for specific comments on the arguments in the preprint. Sviatschi declined to comment, noting that she was “on maternity leave with my baby and no help.” [See update at end of post.]

“And so it is,” Moskos wrote in his punchy prose on X.

Update, 8/25/23, 1345 UTC: del Pozo tells us that “Police Practice and Research has peer reviewed, using reviewers with expertise in the data, our article and decided to publish it.”

Update, 8/31/23, 1600 UTC: Ciacci provided del Pozo et al.’s critique submitted to the Journal of Comments and Replications in Economics, and his and Sviatschi’s response.


13 thoughts on “Ex-cops tangle with journals over strip clubs and sex crimes”

  1. I love the idea that “you didn’t *prove* that businesses don’t always open the day they’re incorporated, so we can assume they always do”.

  2. I tend to agree with the journal. “You measured the wrong thing” is a reason to debate and possibly discount the findings, not to retract the article. It’s an opinion on the strength of the argument, not a proof something was done wrong.

    1. What we are saying is that they cannot possibly use the data they’ve used to measure the thing they need to measure to draw the conclusions they did.

      That is why people who work with SQF data in fields with better knowledge of its provenance never use it to measure crime, and why no one uses the registration date of a business to measure when it opens. The public officials we talked to said as much. So we are saying the study doesn’t have construct validity. Using data that records innocence of crime as data about crime is “proof something was done wrong.”

      Studies that clearly do not have construct validity–due to problems which could have been easily detected at the time of review–should be retracted.

  3. The criticism seems to me to be “We don’t agree with your conclusions based on the data that you’ve used.” Is that the basis for a retraction? Perhaps a more appropriate approach is to perform and publish a study that uses better data sets and supports the null hypothesis. Complaining on social media is uncalled for.

    1. We would agree with their conclusion _based on the data they’ve used_. The problem isn’t their modeling, it’s the data. Our point is that the data used are not valid. And this is with regard to both the independent variable (the opening date of the sex club) and the dependent variable (non-self-reported sex crimes). We’d probably try to replicate the study if it were simply a matter of using a better data set. That data set does not exist.
      (There’s another issue that the precinct-level unit of measurement is inappropriate. But it would be pointless to spend time on that given that the data leading up to it are not valid.)

    2. If someone says you simply cannot use the data to draw the conclusions of the paper because they do not have construct validity, and they provide compelling evidence that this is so, then it is unreasonable to ask that professor to derail their own research agenda to also spend time doing a whole other separate analysis using data that should never have been used in the first place because it is not valid data for the construct.

      This was a basic mistake that should have been caught in the peer review; this is what peer reviews are for. The fact that the data should never have been used is painfully obvious to the people who have actually worked with the data in the past.

      A journal with integrity would reopen the peer review process, send the paper to experts with the proper training to assess the construct validity of the data, and do a reassessment. If they find a problem with construct validity that would have gotten the paper rejected in the first place because it had no business using the data it used, then they should retract the paper.

      You cannot put this burden on other professors who have done the careful work of pointing out a problem in the review process. We don’t have the time to engage in a whole separate line of research to correct somebody else’s poor research and somebody else’s poor peer review.

      In any case, a replication would be about four steps long. The first step would be to say that you must exclude 100% of the data for the independent variable. The second step would be that you must exclude about 94% of the data for the dependent variable. The third step would say that you cannot proceed, and the fourth step would say that you cannot reject the null hypothesis.

      Voila! A replication.

    3. Exactly. You can publish a claim that the dataset used was not valid or appropriate, along with your argument for that position, but simply stating so based on your expert knowledge and expecting retraction is not how science works. “Knowing the data” is usually a proxy for “you didn’t reach my preconceived conclusion, so you are wrong.” Take your biases to the literature to defend, not to social media.

  4. This is why it’s so important for researchers to “touch grass” every once in a while. I’m confident that most people would say the conclusions of this paper are in stark contrast to their anecdotal experience in red light districts. When this sort of conflict between experience and data occurs, it’s important for researchers to seek to understand where it came from. It seems to me that the scientific community praises researchers for using data analysis to “prove” public belief wrong, without pursuing the equally important question of why there is a discrepancy between the two. Ironically, this is because the way science is taught places great emphasis on anecdotes where human belief limited its progress (Galileo, Creationism, Climate Change, etc.), to the point that human belief is seen as an enemy. But the human brain is an incredibly powerful pattern-recognition machine, even more so when many share experiences, and to discount its ability to occasionally come to more “real” conclusions than data analysis is counterproductive.

  5. My training is in a field (math) in which the literature has its own reliability issues (searching the Retraction Watch archives for “mathematics” brings up some classics) but where construct validity isn’t a relevant concept. So as a non-expert, I have a couple of questions, either for Moskos and del Pozo, who have commented on this post, or for others familiar with journal retractions in fields where construct validity issues are common.
    del Pozo writes, in a comment above, “Studies that clearly do not have construct validity–due to problems which could have been easily detected at the time of review–should be retracted.”
    My main questions: Are there prominent examples of retractions actually happening for this reason, i.e. post-publication identification of a lack of construct validity due to problems that could/should have been identified in pre-publication review? How egregious does a construct validity issue generally have to be for such a retraction to happen?
    I would imagine that there are cases in which reasonable people disagree heartily on whether there is a problem, but most of the calls for retractions that I see on Retraction Watch are related to misconduct, especially plagiarism or fabrication, rather than badly designed work. In math, most of our retractions do come from faulty proofs, but in most such cases, the community, including the author, very quickly comes to a consensus that the proof is flawed.

  6. Thanks for asking, Sophie.
    The COPE guidelines covering the journal in question state that “Editors should consider retracting a publication if they have clear evidence that the findings are unreliable, either as a result of major error (eg, miscalculation or experimental error), or as a result of fabrication (eg, of data) or falsification (eg, image manipulation).”
    We would argue the findings here are unreliable to a threshold that satisfies these retraction guidelines. Peer review and publication are meant to inspire a justified belief that the knowledge conveyed by the paper is reliable and has a place in our pool of citable findings. If you are using measures of innocence as measures of crime, and measures of registration as measures of opening when they are unmoored from each other in ways that make the paper’s principal findings impossible to detect, then you have made a major error in your experiment that precludes reliability and that diligent reviewers should have easily detected.
    So, to answer your question: if a defender of the original research were apprised of the problems with construct validity, and could not articulate a way to nonetheless be reasonably confident of the findings, then in our view the concerns call for a retraction. In addition to the fraud, misconduct, and other issues that call for retraction, serious mistakes that eliminate reliability seem to call for it too. It is not as if these variables could ever produce these findings, no matter what you do with them or how hard you try. The authors haven’t even disagreed about this; they’ve just avoided having to publicly explain themselves.

    1. Thanks, Brandon. It seems that the “e.g.” in the “major error” criterion (“e.g., miscalculation or experimental error”) pulls a lot of weight and perhaps ought to be expanded upon through a revision of the guideline, to make study design/data selection issues like this easier to identify. The closest issue with which I’m familiar is that of severely underpowered study designs that have no chance of reaching statistical significance, assuming a remotely reasonable upper bound on the true effect size. Andrew Gelman has written about the challenges of dealing with this in the literature.
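      As an aside on that last point, here is a minimal sketch, using statsmodels with purely illustrative numbers, of how a severely underpowered design can have almost no chance of reaching significance even when a modest true effect exists:

```python
# Illustrative only: power of a two-sample t-test with a small true effect
# (Cohen's d = 0.1) and 20 observations per group, tested at alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.1, nobs1=20, alpha=0.05, ratio=1.0)
print(f"Power: {power:.2f}")  # about 0.06 -- barely above the false-positive rate
```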
