Psychology journal to retract study claiming that people fear contagion less in the dark

As we’re fond of repeating, sunlight is the best disinfectant. Which doesn’t jibe with an eye-catching 2018 paper that found people were less fearful of catching a contagious illness if they were in a dark room or wearing sunglasses.

Fortunately for us, although not for the researchers, we no longer have to live with the cognitive dissonance. The paper, the journal tells us, will be retracted for flaws in the data — which, thanks to the open sharing of data, quickly came to light.

The study, which appeared in May 2018 in Psychological Science, reported that:

darkness triggers an abstract construal level and increases perceived social distance from others, rendering threats from others to seem less relevant to the self.

Okay, that’s not particularly helpful to us mere mortals. What the authors — Ping Dong, of Northwestern University, and Chen-Bo Zhong, of the University of Toronto — did was to see whether people felt safer in the dark, at least as far as contagious illness was concerned. Per the abstract:

We found that participants staying in a dimly lit room (Studies 1 and 3–5) or wearing sunglasses (Study 2) tended to estimate a lower risk of catching contagious diseases from others than did those staying in a brightly lit room or wearing clear glasses. The effect persisted in both laboratory (Studies 1–4) and real-life settings (Study 5). The effect arises because visual darkness elevates perceived social distance from the contagion (Study 3) and is attenuated among abstract (vs. concrete) thinkers (Study 4). These findings delineate a systematic, unconscious influence of visual darkness—a subtle yet pervasive situational factor—on perceived risk of contagion.

But the retraction notice, signed by D. Stephen Lindsay, the editor of the journal, tells us what went wrong:  

A reader alerted me that the time stamps on the posted data for Studies 1 and 3 indicated that testing of subjects was blocked by condition, creating a confound between condition and date. This confound is particularly problematic because (a) the blocks were widely separated in time and (b) the primary dependent variable of interest in those studies was self-reported risk of flu, which might well vary substantially with date. After reviewing the data with the authors, we came to the consensus that the confound undermined interpretation of the results and warranted retraction of the article. As Editor of Psychological Science, I have decided to retract this article.
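
For readers curious how such a confound surfaces in posted data, here is a minimal sketch in Python. The file and column names are hypothetical (the actual files the authors shared may be organized differently), but the idea is general: under random assignment, session dates should interleave across conditions, so non-overlapping date ranges per condition are a red flag.

```python
import pandas as pd

# Hypothetical file and column names; the actual posted data may differ.
df = pd.read_csv("study1_data.csv", parse_dates=["timestamp"])

# Under random assignment, session dates should interleave across conditions.
# Blocked testing shows up as one condition's date range ending before the
# other's begins.
summary = df.groupby("condition")["timestamp"].agg(["min", "max", "count"])
print(summary)

# Flag the confound if the conditions' date ranges do not overlap at all.
ranges = sorted((row["min"], row["max"]) for _, row in summary.iterrows())
for (start_a, end_a), (start_b, end_b) in zip(ranges, ranges[1:]):
    if end_a < start_b:
        print("Non-overlapping date blocks: condition is confounded with date.")
```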

Lindsay added in an email to Retraction Watch:

Retracting an article is never fun, but in my best judgment it was the best thing to do in this case.

The paper received some attention — not surprising, given the flashy topic — in the science press. As Scientific American wrote about the study:

Participants first viewed and evaluated a brief, neutral documentary film. Half watched the film with the lights on, and half with the lights off. Each participant sat adjacent to a trained confederate, who intentionally coughed and sniffed during the session. Critically, the confederate made precisely the same number of coughs/sniffles each session, regardless of the lighting. Participants then completed a judgment task in which they estimated the likelihood of contracting six different diseases—one contagious (seasonal flu) and several non-contagious (e.g., skin cancer, diabetes, asthma). Estimates of contracting the seasonal flu were significantly reduced when the lights were dimmed, and this pattern was robust in both a lab manipulation and a real-world classroom setting. It was also observed when the light exposure was reduced by having participants wear sunglasses.

But it also took some hits on social media, including this post from Neuroskeptic, which took aim at the findings, and critical tweets from, among others, the Dutch psychologist Daniël Lakens and Brian Nosek, director of the Center for Open Science.

Lakens objected to the effect sizes the researchers reported, and also noted that Dong and Zhong conducted their third study at the behest of the editor. As Neuroskeptic pointed out, that request implied that the journal wasn’t sold on the findings from the other analyses, and he objected to the implications of such an arrangement for preregistration of studies:

To be fair, maybe Psychological Science told the authors ‘you must do Study #3, but we commit to publish your paper whatever the results are.’ That wouldn’t be as bad, but the question would then arise: would this journal have considered the paper at all if the original four studies were not uniformly positive…?

The whole beauty of preregistration is that it would have allowed these authors to get their studies reviewed and accepted for publication before any of the results were in. There would then have been no pressure for Study #3 or any of the other results to be positive – no pressure from the journal, anyway. The hunger for positive results that underlies so much publication bias and p-hacking would never arise.

We reached Dong by email. She told us:

In this project we adhered to open science practices and posted our data and materials online after the publication of the article. Analyzing the openly shared data, a reader alerted the editor of Psychological Science to an error in Studies 1 and 3 that we had overlooked. The error is due to the first author [Dong] mistakenly blocking participants by condition (instead of random assignment), creating a confound between condition and date. After reviewing the evidence, the authors and the editor came to the consensus that the confound undermined interpretation of the results and the paper should be retracted. This is a tough lesson for us. Open science does not mean people don’t make mistakes; it simply means mistakes are more easily caught. We are very grateful to the reader who caught the mistake and the editor who coached us through how to handle a situation like this. We vow to be more thorough and careful in our future research. At the same time, this experience strengthens our commitment to open science, as it shows that open science is working as intended.

We take research practice and full disclosure seriously, and we are still confident about the robustness of the phenomenon even though the confound has compromised our ability to draw inferences from Studies 1 and 3. Because of this, we plan to conduct another pre-registered, well-powered study that is free of the confound to replicate our previous Studies 1 and 3. We will make the findings available through [the Open Science Framework] OSF regardless of whether our hypotheses are supported or not.

Nosek, whose Center for Open Science runs the OSF, told us:

This retraction looks to be a positive consequence of open data. It was not possible to detect the data collection confound in the article itself, but open data enabled a reader to identify the problem. It is encouraging that the Editor and authors decided to retract and pursue new evidence.
