A study finding no evidence of racial bias in police shootings earns a correction that critics call an “opaque half measure”

via Tony Webster/Flickr

A group of researchers who published a controversial study that found no evidence of racial bias in deadly police shootings have corrected their paper but are standing by their findings — to the displeasure of some scholars who say the article is too flawed to stand.

The 2019 study, “Officer characteristics and racial disparities in fatal officer-involved shootings,” was written by David Johnson, of the University of Maryland, and several co-authors from Michigan State University. According to the abstract:  

We report three main findings: 1) As the proportion of Black or Hispanic officers in a FOIS [fatal officer-involved shooting] increases, a person shot is more likely to be Black or Hispanic than White, a disparity explained by county demographics; 2) race-specific county-level violent crime strongly predicts the race of the civilian shot; and 3) although we find no overall evidence of anti-Black or anti-Hispanic disparities in fatal shootings, when focusing on different subtypes of shootings (e.g., unarmed shootings or “suicide by cop”), data are too uncertain to draw firm conclusions. We highlight the need to enforce federal policies that record both officer and civilian information in FOIS.

The article prompted a back-and-forth between the authors and other scholars, including Jonathan Mummolo and Dean Knox, of Princeton. Mummolo and Knox wrote a letter to the journal criticizing the Johnson paper as being

mathematically incapable of supporting its central claims 

Another letter, from Ulrich Schimmack and Rickard Carlsson, of the University of Toronto and Linnaeus University in Sweden, respectively, called the conclusions

misleading because the reported results apply only to a subset of victims and do not control for the fact that we would expect a higher number of White victims simply because the majority of US citizens are White.

In a January 21 reply to the letters, Johnson’s group dismissed those concerns:

In sum, we did control for racial differences in police encounters. Our results are consistent with past research estimating Pr(shot∣race) when using population or crime rates, with the added benefit of being able to control for both simultaneously. This is a valuable contribution given debate over what proxy should be used to measure police exposure (11, 12). Rather than choose one, our approach allows researchers to control for multiple relevant proxies for police exposure.
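To picture what the authors mean by estimating Pr(race∣shot, X) while "controlling for multiple relevant proxies for police exposure," the setup is, schematically, a regression that predicts the race of the person fatally shot from officer and county-level covariates. The sketch below is not the authors' actual specification or data; it uses synthetic numbers and hypothetical variable names purely to illustrate the general form of such a model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic, purely illustrative data -- NOT the FOIS dataset used in the paper.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # hypothetical covariates standing in for "X"
    "prop_white_officers": rng.uniform(0, 1, n),         # officer composition in the shooting
    "county_black_pop_share": rng.uniform(0, 0.5, n),    # population-based exposure proxy
    "county_black_crime_share": rng.uniform(0, 0.6, n),  # crime-based exposure proxy
    # outcome: race of the person fatally shot (0 = White, 1 = Black, 2 = Hispanic)
    "civilian_race": rng.integers(0, 3, n),
})

# A multinomial model of Pr(race | shot, X): every row is a fatal shooting,
# so the data can only speak to the race of the person shot, given that a
# shooting occurred and given the covariates.
X = sm.add_constant(df[["prop_white_officers",
                        "county_black_pop_share",
                        "county_black_crime_share"]])
fit = sm.MNLogit(df["civilian_race"], X).fit(disp=False)
print(fit.summary())
```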

However, last month, PNAS issued the following correction:  

The authors wish to note the following: “Recently, we published a report showing that, among civilians fatally shot, officer race did not predict civilian race and there was no evidence of anti-Black or anti-Hispanic disparities (1). Specifically, we estimated the probability that a civilian was Black, Hispanic, or White given that a person was fatally shot and some covariates. The dataset contains only information about individuals fatally shot by police, and the race of the individual is predicted by a set of variables. Thus, we compute Pr(race|shot, X) where X is a set of variables including officer race.

“Although we were clear about the quantity we estimated and provide justification for calculating Pr(race|shot, X) in our report (see also 2, 3), we want to correct a sentence in our significance statement that has been quoted by others stating ‘White officers are not more likely to shoot minority civilians than non-White officers.’ This sentence refers to estimating Pr(shot|race, X). As we estimated Pr(race|shot, X), this sentence should read: ‘As the proportion of White officers in a fatal officer-involved shooting increased, a person fatally shot was not more likely to be of a racial minority.’ This is consistent with our framing of the results in the abstract and main text.

“We appreciate the feedback that led us to clarify this sentence (4). To be clear, this issue does not invalidate the findings with regards to Pr(race|shot, X) discussed in the report.”
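In plain terms, the two quantities the correction distinguishes are related by Bayes' rule (conditioning on the covariates X throughout), but they are not interchangeable:

Pr(shot | race, X) = Pr(race | shot, X) × Pr(shot | X) / Pr(race | X)

A dataset that contains only people who were fatally shot can identify the first term on the right-hand side, but it carries no information about how often people of each race were encountered or shot at in the first place, so the left-hand side cannot be recovered from it alone. That gap is the substance of the critics' objection.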

But Mummolo and Knox weren’t satisfied, calling the correction “inadequate and misleading.” In a statement they sent Retraction Watch, the researchers wrote: 

While the authors now acknowledge they described a statistical quantity in their original study which they never estimated, they still maintain the result of the test they performed “is consistent with our framing of the results in the abstract and main text. … To be clear, this issue does not invalidate the findings with regards to Pr(race|shot, X) discussed in the report.”

These statements are false. The original article claimed that, “Racial disparities” in the quantity they estimated “are a necessary but not sufficient, requirement for the existence of racial biases… .” In their significance statement, they also claimed that their results suggest “that increasing diversity among officers by itself is unlikely to reduce racial disparity in police shootings.” In other words, the article claims to shed light on whether white and nonwhite officers exhibit differential levels of racial bias in the use of lethal force, with the aim of informing the promise of personnel reforms in police agencies. Since the study’s publication, the authors have also amplified these strong claims in interviews with several major media outlets.

But when properly understood, the test that was conducted in the original article sheds no light on racial bias or the efficacy of diversity initiatives in policing, and a meaningful correction would acknowledge this. Because every observation in the study’s data involved the use of lethal force, the study cannot possibly reveal whether white and nonwhite officers are differentially likely to shoot minority civilians. And as we show formally in our published comment, what the study can show—the number of racial minorities killed by white and nonwhite officers—is simply not sufficient to support claims about differential officer behavior without knowing how many times officers encountered racial minorities to begin with.

Contrary to the text of this correction, the analysis in Johnson et al. (2019) is not consistent with the study’s broader claims, which involve sweeping conclusions and policy recommendations that are wholly disconnected from its analysis.

With this correction, PNAS had an opportunity to clarify the severe limitations of this paper. Given the life and death nature of this research topic, and the potential to mislead both the public and policymakers, it is very disappointing after these many months to instead see an opaque half measure that is likely to perpetuate confusion used to address a blatant and fundamental scientific error.
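To make the critics' central objection concrete, here is a minimal, hypothetical sketch (the numbers are invented purely for illustration and have nothing to do with the actual FOIS data). Two scenarios can produce exactly the same racial breakdown among people who were shot, which is the quantity the study estimates, while implying very different shooting rates per encounter, which is the quantity needed to support claims about officer behavior.

```python
# Hypothetical numbers, invented purely for illustration -- not from the study.

def share_of_shootings(shots):
    """Pr(race | shot): racial breakdown among people fatally shot."""
    total = sum(shots.values())
    return {race: round(n / total, 3) for race, n in shots.items()}

def rate_per_encounter(shots, encounters):
    """Pr(shot | race): shooting rate per police encounter, by race."""
    return {race: round(shots[race] / encounters[race], 3) for race in shots}

shots = {"Black": 20, "White": 30}  # same shooting counts in both scenarios

# Scenario A: officers encounter both groups equally often.
encounters_a = {"Black": 1000, "White": 1000}
# Scenario B: officers encounter White civilians three times as often.
encounters_b = {"Black": 1000, "White": 3000}

print(share_of_shootings(shots))                # {'Black': 0.4, 'White': 0.6} in both scenarios
print(rate_per_encounter(shots, encounters_a))  # {'Black': 0.02, 'White': 0.03}
print(rate_per_encounter(shots, encounters_b))  # {'Black': 0.02, 'White': 0.01}
```

Without the encounter denominators, which are absent from data that contain only shootings, the two scenarios are indistinguishable; that is the formal point Knox and Mummolo make in their published comment.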


8 thoughts on “A study finding no evidence of racial bias in police shootings earns a correction that critics call an “opaque half measure””

  1. This study is flawed in another, even worse way.

    Many other studies have shown that American Indians face an even higher per capita rate of being killed by police than African Americans, but this one doesn’t even mention American Indians.

    I call it junk right there.

    1. The only flaw is that it goes counter to the current SJW Bullsht narrative.

      Why does the Left support science only when it supports their agenda? The last time I saw that, the Soviets were doing it.

  2. I would like to know whether the critics of this paper would apply the same standards to other papers that support their own view. Right now, many controversial papers are being criticized for their alleged poor quality. However, in my opinion, many critics don’t apply this standard equally. Shouldn’t there be one standard of quality that applies to all papers?

    1. Kazushi, this is both one of the frustrating things and one of the strengths of science, and why science is about the community of knowledge, not a single step. If your work stands up to my arguments, good for you. If my arguments make you clarify your points and set limits on applicability, also excellent. That also goes the other way. Of course, I will find my arguments crucial and yours nitpicky, but this is why science is done by people, not some impersonal process. It’s ugly, frustrating, and the most reliable tool we have.

    2. Bingo. The study in question provides some good insights, and fatal shootings by police seem like a good proxy for aggressive police behaviour. How are you going to account for every interaction? To say a study has no validity because you don’t have information on every single police interaction is silly. By the same token, the opposition would have to agree that there is no evidence of any racial bias when it comes to police violence against civilians. Maybe police shoot proportionally more at white people, but they just happen to miss much of the time. Black people, for some unknown reason, get killed more often. Not enough data… There is political motivation on the part of these political scientists who oppose the study to pit groups against one another. They want to perpetuate the divide to control both sides. It’s sickening.
