Group whose findings support video game-violence link loses another paper

Last July, Joseph Hilgard, a postdoctoral fellow at the Annenberg Public Policy Center at the University of Pennsylvania, saw an article in Gifted Child Quarterly that made him do a double take. Hilgard, who is studying the effects of violent media on aggressive behavior, said the results of the 2016 paper “caused me some alarm.”

The research—led by corresponding author Brad J. Bushman, a professor of communication and psychology at The Ohio State University (OSU)—showed that gifted and non-gifted children’s verbal skills dropped substantially after watching 12 minutes of a violent cartoon. The violent program had a greater impact on the gifted children, temporarily eliminating the pre-video verbal edge they displayed over their non-gifted peers.

To Hilgard, the results suggested that violent media can actually impair learning and performance. But the effect size was huge — so big, Hilgard thought it had to be a mistake. This, plus other questions, prompted Hilgard to contact the authors and the journal. Unfortunately, once he got a look at the data — collected by a co-author in Turkey who became unreachable after the recent coup attempt — the questions didn’t go away. So the journal decided to retract the paper.

Bushman’s body of work has consistently supported the idea that violent media increase aggressive behavior, including the controversial 2012 study “Boom, Headshot!,” which was retracted earlier this year.

What first struck Hilgard as odd about the 2016 paper was how large the effect of the violent cartoon was:

The effect size was really huge, which I thought was odd—perhaps a math error or typo—and needed a double check.

Hilgard contacted Bushman, who supplied Hilgard with the data. When Hilgard took a closer look, several things stood out:

First, I found that the huge effect reported was not an error or typo. That struck me as pretty unusual, considering the effect size that’s typical in this type of psychology research. Second, such data – especially in children – tends to be quite noisy. But when I plotted the data, it became visually clear that everyone in the treatment group decreased consistently by similar amounts. It was very unusual for every single data point to behave in such a similar way.
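Both of the checks Hilgard describes are easy to illustrate. Below is a minimal sketch in Python using simulated numbers (the scores, group size, and the size of the drop are all made up for illustration; the study’s real data are no longer available): it computes a standardized effect size (Cohen’s d) for the pre-to-post change and draws one line per participant, the kind of plot where near-parallel lines reveal an implausibly uniform change.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical pre/post verbal-task scores for one treatment group.
# Illustrative numbers only -- the study's real data are unavailable.
rng = np.random.default_rng(0)
pre = rng.normal(100, 15, size=20)
post = pre - rng.normal(12, 1.0, size=20)  # a suspiciously uniform drop

# Standardized effect size (Cohen's d) for the pre-to-post change.
change = post - pre
d = change.mean() / change.std(ddof=1)
print(f"Cohen's d = {d:.2f}")  # magnitudes far above ~0.5 are rare in this literature

# One line per child: near-parallel lines mean every participant
# changed by about the same amount -- the pattern Hilgard flagged.
for y0, y1 in zip(pre, post):
    plt.plot([0, 1], [y0, y1], color="gray", alpha=0.6)
plt.xticks([0, 1], ["pretest", "posttest"])
plt.ylabel("verbal task score")
plt.title("Per-participant change (simulated data)")
plt.show()
```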

A few commenters on PubPeer have also raised questions about the paper. (An author posted a comment with a link to the original dataset, but the link no longer works.) Hilgard explained:

The data also seemed to indicate a strong and implausible improvement in task performance in the control group [which watched an episode of a family-friendly cartoon], and as pointed out on PubPeer, a highly significant difference in pretest scores such that the violent cartoon group had higher pretest scores than the nonviolent control group.

The growing list of questions left Hilgard uncertain about the study results:

On their own, each of these elements might get a pass, but taken together I think they were cause for concern and necessitated a second look.

Hilgard noted that the authors and editors took these concerns very seriously and handled them responsibly. But ultimately, Bushman could not answer the questions raised by Hilgard and commenters on PubPeer, because the key data had been collected by his Turkish colleagues, Cengiz Altay and first author Yakup Çetin, who were unreachable after a coup attempt led the government to shut down Fatih University. (We also attempted to contact Çetin and Altay, but both emails bounced back. We tried Bushman as well, but haven’t heard back.)

Hilgard’s questions raised significant enough concerns that the authors and journal opted to withdraw the paper. Here’s the retraction notice, published earlier this year, for “Effects of violent media on verbal task performance in gifted and general cohort children”:

Joseph Hilgard, postdoctoral fellow at the Annenberg Public Policy Center at the University of Pennsylvania, contacted the journal with questions regarding the pattern of results and conducted reanalyses of the data that called into question the credibility of the data. Unfortunately, the data collection procedures could not be verified because the author who collected the data (Cengiz Altay) could not be contacted following the attempted coup in Turkey. Therefore, as the integrity of the data could not be confirmed, the journal has determined, and the co-authors have agreed, to retract the study.

The Conversation also retracted an article based on the research, once it learned of the retraction.

Hilgard stressed that he believed the authors did the right thing by sharing data and trying to explain and rectify the issues:

They were quite helpful, and I thank them for their swift and responsible action.

We asked Jeff Grabmeier, senior director of research communications at OSU, if he could provide any information on the new retraction since we previously reported about the PubPeer thread raising questions about the paper. Grabmeier told us:

The reason for the retraction has not changed. I’d like to emphasize that there have been no allegations of misconduct. Some questions were raised about the data set used in that study. Unfortunately, the researcher who collected the data is from Turkey and has not been reachable since the attempted coup in that country. Since the data is not available, the other authors decided it was best to retract the study. Dr. Bushman has begun the process to replicate the study here in the United States.

The university has conducted misconduct investigations of several prominent researchers, such as cancer biologist Carlo Croce and pharmacologist Terry Elton. In both cases, OSU initially found no evidence of wrongdoing and later had to reopen the investigations. In Croce’s case, it took accusations from another researcher for OSU to consider reopening the investigation; in Elton’s case, the Office of Research Integrity had to ask the university to take a second look at the allegations.

In addition to the two retractions, Bushman has had other papers questioned on PubPeer, along with two corrections. The first correction is to a 2010 paper, “Like a Magnet: Catharsis Beliefs Attract Angry People to Violent Video Games,” published in Psychological Science, cited eight times, and corrected last year due to statistical inconsistencies:

In several cases on page 791 (left column), the test statistic (F or t) and the p value given for it are inconsistent with one another. The statistics in question are as follows:

F(1, 108) = 9.71, p < .002
t(118) = 3.01, p < .003
F(1, 147) = 3.04, p < .05
F(1, 147) = 6.13, p < .01

The raw data are not extant, so in each case it is not known whether it was the test statistic or the p value (or both) that was reported incorrectly. Only the third of the discrepancies listed is consequential: If the reported F value is correct, then the associated p value is a nonsignificant .0833.
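That last claim is straightforward to verify with standard software. Here is a minimal sketch (ours, not the journal’s) recomputing the p value implied by the reported F statistic:

```python
from scipy import stats

# p value implied by the reported statistic F(1, 147) = 3.04
p = stats.f.sf(3.04, 1, 147)  # survival function: P(F >= 3.04)
print(f"p = {p:.4f}")  # ~= 0.0833, i.e., not significant at .05
```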

The second correction, to “The effect of video game violence on physiological desensitization to real-life violence” in the Journal of Experimental Social Psychology (published and corrected in 2007, and cited once), notes that the publisher failed to make the authors’ corrections.


3 thoughts on “Group whose findings support video game-violence link loses another paper”

  1. Please be careful with the assertion “has other papers questioned on PubPeer”. Yes, a lot of papers show up for Bushman and colleagues. But except for the two papers you have flagged, the rest appear to be collateral damage from an effort by a Dutch research-methods group to automatically scan all accessible papers for errors in stats reporting. Their algorithm automatically inserted its verdict in a PubPeer comment for each scanned paper. So even if a paper passed the test, with no corrections whatsoever, it will still show up on PubPeer. And in those cases where the algorithm actually found inaccuracies, they mostly pertain to rounding errors for significance levels (and even then not always in the “wrong” direction). So the fact that papers are listed on PubPeer is no longer a sign of trouble in and of itself.

    1. No. You are wrong. For example, if you check the link in the comment above yours you will find comments from other researchers who seem to have found real problems with other studies from this group.
