Divorce study felled by a coding error gets a second chance

A journal has published a corrected version of a widely reported study linking severe illness and divorce rates after it was retracted in July due to a small coding error.

The original, headline-spawning conclusion was that the risk of divorce in a heterosexual marriage increases when the wife falls ill, but not the husband. The revised results, published again in the Journal of Health and Social Behavior along with lengthy explanations from the authors and editors, are more nuanced: gender is significantly associated with divorce risk only in the case of heart problems.

The authors’ note, from Amelia Karraker of Iowa State and Kenzie Latham of Indiana University-Purdue University Indianapolis, explains that the coding error led them to overestimate how many marriages ended in divorce:

This was the basis for the results reported in the previous paper’s (incorrect) estimates that 32% of the marriages in the sample ended in divorce, fewer than 1% were lost to attrition, and 44% remained continuously married.

In reality, a much smaller fraction of couples chose to break up. The fate of many more is unknown, because they left the study:

Based on the corrected coding, we estimate 6% of marriages ended via divorce, 24% of marriages ended via widowhood, 34% of marriages remain continuous through the 2010 wave, and 35% of marriages were lost to attrition (due to nonresponse from at least one spouse in one wave). As would be expected for this age range, marriages were more likely to end in widowhood than divorce, and divorce was a rare event. In addition, more marriages were lost to follow-up than ended in divorce and widowhood combined.
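To make the arithmetic concrete, here is a toy sketch in Python. The numbers and the mechanism are illustrative assumptions rather than the authors’ code; it simply shows how a recoding step that counts couples lost to follow-up as divorced, one plausible reading of the shift between the two sets of figures, would inflate the estimated divorce share.

```python
# Toy illustration with hypothetical numbers: how folding couples lost to
# follow-up into the "divorced" category inflates the estimated divorce share.

# 1,000 hypothetical couples, roughly mirroring the corrected breakdown
# quoted above (about 6% divorced, 24% widowed, 34% continuously married,
# 35% lost to attrition).
true_status = (
    ["divorced"] * 60
    + ["widowed"] * 240
    + ["continuously married"] * 345
    + ["lost to follow-up"] * 355
)

# A miscoding that counts couples lost to follow-up as divorced.
miscoded = ["divorced" if s == "lost to follow-up" else s for s in true_status]

for label, statuses in [("corrected", true_status), ("miscoded", miscoded)]:
    divorce_share = statuses.count("divorced") / len(statuses)
    # A toy only; it does not reproduce the paper's exact incorrect figures.
    print(f"{label:>9}: divorce share = {divorce_share:.0%}")
```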

The correction changes the paper’s central conclusion, the authors note:

Based on the corrected analysis, we conclude that there are not gender differences in the relationship between gender, pooled illness onset, and divorce.

The results for specific diseases are a little more interesting:

In the corrected analysis, we find that in the case of heart problems and stroke, wife’s onset is a statistically significant predictor of divorce, while husband’s is not. Further, in the case of heart problems, we reject the null hypothesis of equality of coefficients for husband’s and wife’s onset (p < .05) in the corrected analysis, providing evidence of a gendered relationship between heart problems and divorce risk.
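For readers curious what a test of “equality of coefficients” looks like in practice, below is a minimal sketch in Python using statsmodels. It runs on simulated data with a plain logistic regression rather than the authors’ actual event-history models on the HRS, so the variable names (husband_onset, wife_onset) and effect sizes are illustrative assumptions only. Rejecting the null that the two coefficients are equal is what the authors mean by evidence of a gendered relationship.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated couples: hypothetical indicators for whether the husband or
# wife experienced illness onset during the observation window.
df = pd.DataFrame({
    "husband_onset": rng.integers(0, 2, n),
    "wife_onset": rng.integers(0, 2, n),
})

# Toy outcome: wife's onset is given a larger effect on the odds of divorce.
logit_p = -3.0 + 0.1 * df["husband_onset"] + 0.6 * df["wife_onset"]
df["divorced"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Simple logistic regression of divorce on each spouse's illness onset.
model = smf.logit("divorced ~ husband_onset + wife_onset", data=df).fit(disp=0)
print(model.params)

# Wald test of the null hypothesis that the two onset coefficients are
# equal, i.e., the "equality of coefficients" test described in the quote.
print(model.wald_test("husband_onset = wife_onset"))
```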

The editor’s note, by editor-in-chief Gilbert C. Gee, suggests that many coding errors likely go undetected:

Cases of scientific errors and misconduct surface regularly (Wagner and Williams 2011). The research environment is fast-paced given the ethos to “publish or perish,” or in the case of some institutions, “get grants or get out.” The bar keeps rising as noted by the increasing number of publications among new assistant professors in sociology (Bauldry 2013) and elsewhere.

Further, research is becoming increasingly complex, with greater calls for transdisciplinary collaborations, “big data,” and more sophisticated research questions and methods. Anyone who has worked with large data sets, such as the Health and Retirement Study (HRS) used by Karraker and Latham, knows how complicated they can be. These data sets often have multiple files that require merging, change the wording of questions over time, provide incomplete codebooks, and have unclear and sometimes duplicative variables. Such complexities are commonplace among many data systems (e.g., National Longitudinal Surveys of Youth, National Health and Nutrition Survey, National Co-Morbidity Replication Survey).

Given these issues, I would not be surprised if coding errors were fairly common, and that the ones discovered constitute only the “tip of the iceberg.” If so, such errors may contribute to some of the conflicting findings found in many areas of research. We would hope that most of these errors are inconsequential, although that is not always the case, as shown in the Karraker and Latham study.

Indeed, we’ve reported on many retractions that are the result of coding errors.

Gee concludes that errors are a natural part of doing science:

As noted by physicist and Nobel laureate Frank Wilczek, “If you don’t make mistakes, you’re not working on hard enough problems.” Let us be unafraid to work on problems that can yield mistakes, and let us work collectively to fix errors.

Even the media have taken steps in the right direction: The Huffington Post added an update to its story on the study, noting it had been retracted, and The Washington Post wrote a follow-up to its coverage of the study.

Hat tip: Rolf Degen 


One thought on “Divorce study felled by a coding error gets a second chance”

  1. This is a ridiculous conclusion. Statistically, if you put up 20 diseases and ask which of them has a p<0.05 (=1/20) correlation with driving a red car, there is probably going to be one. Erroneous results are also likely when the sample sizes aren't large enough.

    Making a coding error is an honest mistake, but I think this group makes itself look very bad by publishing a correction with such an utterly nonsensical claim in it.
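The commenter’s back-of-the-envelope point can be checked with a quick calculation. This is a sketch assuming 20 independent tests at the 0.05 level; the number 20 is the commenter’s hypothetical, not the study’s actual count of conditions.

```python
# Chance of at least one "significant" result among k independent tests
# when the null hypothesis is true for all of them.
alpha, k = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** k
expected_false_positives = alpha * k
print(f"P(at least one false positive): {p_at_least_one:.2f}")  # about 0.64
print(f"Expected number of false positives: {expected_false_positives:.1f}")  # 1.0
```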
