“This is how science works:” Error leads to withdrawal of paper linking Jon Stewart to election results

Jon Stewart in 2010

Jon Stewart is a powerful figure in American media. How powerful is he? So powerful that his departure in 2015 as host of The Daily Show on Comedy Central may have tipped the 2016 presidential election to Donald Trump.

At least, that’s the hypothesis behind a paper published in late April in the journal Electoral Studies. According to Ethan Porter, of George Washington University, and Thomas Wood, of Ohio State:

By combining granular geographic ratings data with election results, we are able to isolate the shows’ effects on the election. For The Daily Show, we find a strong positive effect of Jon Stewart’s departure on Trump’s vote share. By our estimate, the transition at The Daily Show spurred a 1.1% increase in Trump’s county-level vote share. Further analysis suggests that the effect may be owed more to Stewart’s effects on mobilization than to his effects on attitudes. We also find weaker evidence indicating that the end of The Colbert Report was associated with a decline in 2016 voter turnout. Our results make clear that late-night political comedy can have meaningful effects on presidential elections.

But as Porter and Wood quickly realized, they were mistaken.

As Wood revealed on Twitter over the weekend, he and Porter erred in their analysis, a fact they discovered after several readers pointed out an apparent flaw in the article.

They notified the journal, which is withdrawing the paper, Wood told us by email:

As I tweeted, the error was drawn to our attention by readers who noted a discrepancy between our depiction of ratings changes and the regression results. We made the figure from scratch quite late in the process of writing the paper; the syntactical error which biased the data we used in the regressions was not repeated when we made the figure.

When a number of readers suggested the ratings shifts in the figure were too modest to affect vote choice in the way we described in the paper, we looked to the data underlying our regression estimates, and found a discrepancy between those ratings data and the data in the figure. We then checked why these estimates would be different, and found that the ratings used for the regression were artificially inflated by using the wrong operator when performing aggregations. We fixed the error in the code, re-estimated the models, and the relationship was now far more modest.

We made that discovery late Wednesday night, and immediately wrote to the editors on Thursday morning to request the withdrawal. They confirmed the withdrawal Friday morning, which we publicly announced on Twitter.
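Neither the tweets nor the withdrawal notice reproduces the offending code, but the failure mode Wood describes, an aggregation performed with the wrong operator, is easy to illustrate. Below is a minimal, hypothetical sketch in Python with pandas; the column names and numbers are invented, not taken from the paper. Summing weekly ratings instead of averaging them multiplies each county's figure by the number of weeks observed, and the inflated column then feeds the regression.

```python
import pandas as pd

# Hypothetical weekly ratings for two counties; values are
# illustrative only and are not drawn from the withdrawn paper.
ratings = pd.DataFrame({
    "county": ["A", "A", "A", "B", "B", "B"],
    "week":   [1, 2, 3, 1, 2, 3],
    "rating": [2.0, 2.2, 1.8, 0.9, 1.1, 1.0],
})

# Intended aggregation: each county's average rating.
correct = ratings.groupby("county")["rating"].mean()

# The kind of one-operator slip Wood describes: summing instead
# of averaging scales every county's figure by the number of
# weeks, artificially inflating the ratings fed to the regression.
buggy = ratings.groupby("county")["rating"].sum()

print(correct)  # A: 2.0, B: 1.0
print(buggy)    # A: 6.0, B: 3.0 -- three times too large
```

A slip like this survives casual inspection because the regression still runs and produces plausible-looking estimates; it surfaced here only because the figure was rebuilt from scratch, letting readers compare the plotted ratings against what the regression results implied.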

As the Twitter thread reflects, Porter and Wood received praise for their handling of the mistake, including from one follower, Michael Spagat.
