How easy is it to change people’s minds? In 2014, a Science study suggested that a short conversation could have a lasting impact on people’s opinions about gay marriage – but left readers disappointed when it was retracted only months later, after the first author admitted to falsifying some of the details of the study, including data collection. We found out about the problems with the paper thanks to Joshua Kalla at the University of California, Berkeley, and David Broockman at Stanford University, who tried to repeat the remarkable findings. Last week, Kalla and Broockman published a Science paper suggesting that what the 2014 paper showed was, in fact, correct – they found that 10-minute conversations about the struggles facing transgender people reduced prejudices against them for months afterwards. We spoke with Kalla and Broockman about the remarkable results from their paper, and the shadow of the earlier retraction.
Retraction Watch: Let’s start with your latest paper. You found that when hundreds of people had a short (average of 10 minutes) face-to-face conversation with a canvasser (some of whom were transgender), they showed more acceptance of transgender people three months later than people with the same level of “transphobia” who’d talked to the canvasser about recycling. Were you surprised by this result, given that a similar finding on same-sex marriage, from Michael LaCour and Donald Green, had been retracted last year?
Joshua Kalla and David Broockman: When Science retracted that study, it did not disprove the original hypothesis that high-quality, two-way canvass conversations about same-sex marriage could change attitudes. Retracting the study simply meant the hypothesis was unproven. With that said, it’s also important to note that that study (and our study) are not the only studies of canvassing — there’s a lot of work that’s been done on this general topic for over a decade that finds high-quality conversations can have large effects (see more below). So we had an open mind about the potential for these very in-depth conversations to have meaningful impacts on prejudice outcomes. At the same time, we also know many great ideas don’t end up working, so we weren’t sure exactly what to expect (see below). That’s why we have data!
RW: Did you suspect that your paper was reviewed more heavily than usual, given the skepticism surrounding this type of paper and result?
JK and DB: We always think it’s important to follow best practices in open data and transparency, and we did our best to do so here. We registered our pre-analysis plan at egap.org, where we specified all of our hypotheses and drafted the code we would use to analyze the data. In our Supplementary Materials, we further provided all the code we used in analyzing our data and presented a number of descriptive statistics and robustness checks. The reviewers and editors at Science had access to all of this, which led to a very fair and thorough review process.
Upon publication, we took additional steps to help overcome any remaining skepticism. For example, all of our data and code for the SM are now publicly available in R Markdown so people can follow where all the point estimates come from line-by-line. Using R Markdown allowed us to embed R code chunks and their output directly into our Supporting Materials, with the goal of improving transparency and reducing the likelihood of errors. Also, as noted in this news article in Science, one of our professors at Berkeley, Gabriel Lenz, independently verified that the data were truly collected.
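To illustrate the workflow the authors describe (this sketch is not taken from their actual Supporting Materials, and the data and chunk names here are hypothetical), an R Markdown file interleaves prose with executable R chunks, so every reported number in the rendered document is produced by visible code rather than pasted in by hand:

````markdown
---
title: "Supporting Materials (illustrative sketch)"
output: pdf_document
---

The estimate below is computed by the chunk that follows, so readers
can trace it line by line when the document is rendered with knitr.

```{r difference-in-means}
# Hypothetical toy data, standing in for a real survey dataset
survey <- data.frame(
  treated  = c(1, 1, 0, 0),
  attitude = c(0.8, 0.6, 0.3, 0.4)
)

# Difference in mean attitudes between treatment and control groups
mean(survey$attitude[survey$treated == 1]) -
  mean(survey$attitude[survey$treated == 0])
```
````

When the file is knit, the chunk’s output is embedded directly beneath the code, which is what lets readers “follow where all the point estimates come from line-by-line.”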
RW: Were you already pursuing the project at the time the same-sex marriage study fell apart? If so, did you ever think about abandoning the project, given that the failure of the previous study suggested it might not “work”?
JK and DB: Yes, it was in the process of launching this study on anti-transgender prejudice that we discovered the statistical irregularities that led to the retraction of the same-sex marriage study.
But even after that study was retracted, we all remained committed to understanding the potential power of these conversations to reduce prejudice. The retraction didn’t reduce our interest in conducting this research and all of our research partners felt the same way.
RW: How heavily does the shadow of that previous retraction weigh on this field? Do you suspect your findings got more or less attention because of it? (We should note: The paper was mentioned last week by the New York Times.)
JK and DB: That retraction reflected just one paper, but there are a number of other examples showing that high-quality, personal interactions are the best way to mobilize and persuade voters (for a partial review, see http://www.vox.com/2014/11/13/7214339/campaign-ground-game). Our results build on this tradition, combined with established psychological theories of perspective-taking and active processing, to try to reduce prejudice.
Separate from the backstory, our results are particularly timely for the current political environment. Transgender people are being targeted with discriminatory laws across a number of states and these results suggest an effective way to reduce the widespread prejudice against transgender people that contributes to these laws.
RW: One of your first red flags about the LaCour paper was the relatively low response rate when you tried to replicate their results – what was the response rate in your 2016 paper, and how does it compare with the response rate for LaCour and Green, and similar research?
JK and DB: As we report in Figure S1 of our Supplementary Materials, the final response rate to our initial survey was around 2.7%. This was much lower than that reported in LaCour and Green, and seeing the gap caught us off guard. With the retraction, we realized that our planned survey recruitment technique and experimental design were not, in fact, a proven approach that we could apply to our work – and that set us scrambling to invent something workable.
It has taken some time, but we’ve now developed a new experimental methodology that allows us to rigorously measure the effectiveness of this kind of canvassing – and to get higher response rates.
Overall, this new experimental design allows other researchers to conduct studies such as ours for a fraction of the cost compared to what the LaCour and Green study supposedly cost. We hope that these advances in experimental design will facilitate further replications and extensions of our work on prejudice reduction.
RW: Broockman told us the LaCour paper had “huge implications for people who were trying to advance the cause of equality and have changed how advocates do their work.” Do you feel the same about your 2016 paper?
JK and DB: We hope so. Transgender non-discrimination laws are one of the most important political issues of 2016, with a number of states passing laws to limit the rights of transgender people and severely intrude on their lives. Advocates are searching for ways to successfully reduce prejudice and change views towards transgender people. Our results provide an experimentally proven method. For folks looking to adopt this canvassing approach, we posted a report prepared by the Los Angeles LGBT Center’s Leadership LAB in our replication materials (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/WKR39N, see LAB Report – Trans-Formation.pdf). That document, written by the practitioners from the Leadership LAB who developed this canvassing approach, explains how advocates can use this research.
We also think there is much that the 2016 campaigns can apply to their electoral work. This paper adds to a growing body of research showing that high-quality conversations can be some of the most effective ways to persuade or mobilize voters. For example, a recent paper by Harvard Business School Professor Vincent Pons describes a massive experiment from the 2012 French presidential election in which activists from the Socialist Party knocked on five million doors and accounted for a substantial amount of President Hollande’s victory margin. Previous presidential campaigns, such as Barack Obama’s, successfully used the power of activists going door-to-door and we hope that other 2016 candidates will continue to apply high-quality conversations to their work.
RW: Anything else we haven’t asked you that you’d like to add?
JK and DB: It is important to note that our study is only one study. Science advances incrementally and this one study doesn’t provide all the answers. What about the 10-minute conversations made them so effective? Would they work in different settings, with different subjects? Are there generalizable principles from these conversations that could be applied to different issue areas? There is still much to be learned. We hope that use of our new experimental methodology helps reduce the costs of field experiments, encouraging more replications and extensions to answer these questions.