Research integrity officials at Georgia State University say a psychology researcher did not commit misconduct in a controversial 2015 paper in JAMA Pediatrics which challenged the notion that most rapists on college campuses are repeat offenders.
GSU launched the inquiry after an outside researcher questioned the validity of data supplied to him by Kevin Swartout.
Brenda Chapman, associate VP for research integrity, told us in an e-mail that the investigation cleared Swartout of wrongdoing:
Georgia State University conducted an investigation into an allegation of research misconduct against Dr. Kevin Swartout. The investigation committee concluded that no research misconduct occurred.
We asked for a copy of the investigative report; GSU declined, citing confidentiality.
Swartout gave us this comment:
As you would guess – I fully concur with the investigation outcome. This has been a tough process for me and my family, so I do not have any additional comment at this time.
This paper has been the focus of much controversy since it was published. We covered some of the debate earlier this year, after researcher Jim Hopper — an outspoken critic of the paper — reported JAMA Pediatrics had rejected his letter to the editor because he’d already posted it on PubPeer. As we reported, after Hopper reviewed some of the initial data, he contacted GSU with his concerns:
When [independent consultant, Allison Tracy] and Hopper discovered flaws in the study, and received what they considered to be insufficient clarifications from the journal and the study’s authors, Hopper informed the research integrity staff at GSU, and was told that they were going to carry out an investigation into the case.
We contacted Hopper about GSU’s decision. He provided a link to their critique of the paper, and told us:
My goal has always been to stop Dr. Swartout and his co-authors from misleading policy makers with invalid science. That requires determining, and then letting others know, that the complex analyses and findings reported in the paper are not valid, so that the authors, or the journal, or someone with the authority — and the integrity — to do so will take the appropriate action to ensure that the paper is sufficiently corrected or retracted.
Because GSU, like the authors and the journal editors, has failed to step up to the plate, I plan to pursue the other options available to achieve my goals for the good of science, public policy and victims of rape on campus, until I have exhausted those options.
Swartout and his colleagues corrected the paper in December 2015 to address errors that, they maintain, do not undermine their conclusions:
In the Original Investigation titled “Trajectory Analysis of the Campus Serial Rapist Assumption,” published online July 13, 2015, in JAMA Pediatrics,1 there were inconsistencies in missing data between the data used for the published analyses and the publicly available derivation data, which affected 2 cases. After correcting for these errors, most of the frequencies and statistics reported in the Results section differ slightly. All interpretations and conclusions remain the same after correcting for these errors.2 A corrected article with corrections to the Abstract, text, Tables, and Figure has been published online.1 In addition, this article was previously corrected on August 13, 2015, to fix a column heading in Table 3.
I encourage readers to read my latest comment in my PubPeer post (https://pubpeer.com/publications/26168230), which provides valuable additional information. Among other things, it shows that the two GSU investigation committee members with expertise in the complex latent class growth analyses employed in the paper expressed AGREEMENT with the meticulously documented critique by Dr. Allison Tracy. As you can see if you read even just the first three pages of Dr. Tracy’s “Executive Summary,” that critique goes FAR BEYOND the very small corrections made to the paper thus far and cited by the authors, GSU and this RW article.
I want to affirm, in the strongest terms, that I have never wished to harm Dr. Swartout. From the start I encouraged, indeed veritably begged him, to do the right thing and report all of the major errors and problems to the journal and insist that they take appropriate action. I did so precisely because I wanted to spare him such ordeals. But I will not put his personal comfort and reputation above the interests of science, public policy, and the public health implications of misleading people into believing that most campus rapists are not repeat offenders with multiple victims.
What are the grounds for retraction? If a method is used poorly does this mean the related paper must be retracted? If a paper should be retracted, does this mean that misconduct occurred? Should an abstruse argument about a statistical method provide a rationale for accusing a scientist of “misleading policy makers with invalid science”? What’s at the bottom of this very slippery slope?
Arguments that are solely about methodology are not grounds for retraction. The scientific record is not meant to be an unblemished history of successes, with every failed method and flawed conclusion retracted. It is, and should be, a record of failure as well as of success; the literature is an ongoing (and permanent) narrative that describes the process of iteratively approaching truth. To think otherwise is to tacitly assume that what we know now is “Truth”. The most callow assumption possible in science is that we’ve already arrived at full knowledge.
This controversy delineates the rabbit hole that Retraction Watch is in danger of disappearing down. Not every accusation of misconduct is accurate or even reasonable. The notion that every scientist must respond to every “outside researcher” or science gadfly in excruciating detail is naïve at best and may ultimately be damaging to science. Dialogue between scientists belongs in the literature, not in a furious back-and-forth at a website and certainly not in court.
I understand your fears, and I never wanted it to come to this, or to file a misconduct investigation.
Have you read the executive summary, and, if you are an expert in latent class growth analysis (LCGA), have you read the technical report?
Papers are retracted all the time for far less egregious problems than this one has (e.g., false, manipulated images) on topics that, in stark contrast to the one addressed by this paper, have no identifiable or even foreseeable potential harms to public health.
The fact that few people understand LCGA does not change this reality: What is reported in the paper with respect to the LCGA analyses, which are the entire point of the paper, is simply not true. As I have shown, not only has Dr. Tracy documented this, but GSU’s own LCGA experts on their investigation committee agreed with her findings and praised her work.
The “rabbit hole” (to use your phrase) that Retraction Watch, PubPeer and others are trying to help science get out of involves utterly invalid papers not being removed from the literature, and in the meantime influencing not just other scientists but also policy makers and in some cases contributing to very serious harm to real people, because no one with the ability to have them removed, and the knowledge that removing them is the right thing to do, has done so.
But yeah, it’s not pretty being in the trenches, on either side, or even partly on the sidelines, of one of these battles.
I sense politics and science. That is a deadly mix. I subscribe to Mr. Steen’s position. Methodology issues are to be debated, but once a paper makes it through peer review, methodology alone is no grounds for retraction. If the detractors don’t agree with the findings, they should not go through unorthodox channels, such as websites and comment boards; they should do original research and publish the results. That is harder but more noble.
“Making it through peer review” can mean very different things for different papers. Read the opening paragraph of Dr. Tracy’s Executive Summary, linked to in the RW post above, to get a sense of how this paper would have looked fine to peer reviewers but in fact was a mess.
And every Retraction Watch reader knows just how much bad and even dangerous science “makes it through peer review,” not to mention through editorial review when the orthodox channel (e.g., letter to the editor) is used, etc.
But I also appreciate your concerns, and your view on “unorthodox channels” vs. publishing one’s own research is admirable and realistic for SOME types of research (e.g., bench science that can actually be done, in a reasonable time-frame, by those who can and want to challenge invalid work that way).
However, the kind of research in question here — large longitudinal data sets on an issue for which apparently real “important new findings” can have large immediate impacts on many people’s lives — does not conform to your view.
It would take many years, and that’s after one secures the funding for a large longitudinal study (among the hardest money to come by for a variety of reasons), to collect, clean, analyze and publish the kinds of data and analyses required to refute the alleged findings of this paper. (Fortunately, one of the data sets for this paper is in the public domain; although it has lots of problems, and I am working on a paper using those data to show what I quickly showed, as a stop-gap measure, in simple frequency analyses available via my PubPeer post on this paper.)
Finally, this is not about “methodology alone.” It’s about the validity of the findings, the claims made for those alleged findings by the authors, and the impacts those findings and claims can very rapidly have, and are already having, on the views of policy makers around the country, and therefore on policies and the lives of those vulnerable to being raped on campus. So long as the prevalence of repeat perpetrators, and the percentage of all assaults that they commit, are not properly and widely understood, there will not be effective prevention and response efforts to stop those repeat offenders from assaulting again and again, and the numbers of victims will remain terribly high.
When I provided to RW the comment quoted above, in which I say, “Because GSU… has failed to step up to the plate,” I had not yet received the investigation report from GSU (which they only provided after my request for it under Georgia public records law and only after the publication of the RW story).
When commenting to RW, I had only a letter on the investigation’s results. That letter, it turned out, included literally all of the results, verbatim from the original report, except this all-important one: the “recommendation” that Dr. Swartout and his co-authors, in the interest of the scientific record, take action to rectify any remaining errors in the JAMA Pediatrics paper.
As Dr. Tracy’s Executive Summary and my PubPeer post make clear, there remain many major errors and problems not yet addressed by correction or retraction of the paper.
However, because this is only a “recommendation” by GSU, it is unclear whether, should Dr. Swartout and the JAMA Pediatrics editor fail to make such rectification in a reasonably timely fashion, GSU is committed to ensuring that rectification occurs, as other universities often do after such investigations. (When misconduct has been found, this is often done without waiting for the lead author to take appropriate action, and with public comment from the institution that compels the journal.)
Having this new information, then, it is NOT my position that GSU has “failed to step up to the plate.” The evidence on that will remain incomplete for some time.
For more information on this issue, see https://pubpeer.com/publications/26168230
It has been 7 months since GSU issued its report and recommendation on July 1, 2016 that Dr. Swartout and his co-authors, in the interest of the scientific record, take action to rectify any remaining errors in the JAMA Pediatrics paper.
They have not done so. This means, unfortunately, that additional efforts must now be taken.
I had a look at the original paper together with the analysis from Dr. Tracy, and the paper seems to be a mess as far as the latent class analysis goes. It would appear that they wanted to find at least 3 classes because that fits what they wanted to show (never a good idea). The problem is that with only 4 outcomes, a traditional latent class model is not identifiable with 3 classes. So they tried a quadratic model, which should be identifiable, but seems not to be.
Given that there are only 4 time points, I would have used a linear model. The statement in the critique that this is a “severe constraint that mismodels men who rape at non-consecutive time points” is wrong: only the probabilities are constrained to be linear, while the events are independent, so the model does accommodate men who, for example, rape in 2 non-consecutive periods of the 4. I would also disagree with the opinion that throwing away the data on the number of rapes per year is wrong. If the research question is whether a rapist keeps up that behaviour over a number of years, then the number of times per year is almost irrelevant. It would also be difficult to model, which would result in something difficult to interpret.
I found their model selection a bit strange. saBIC is an unusual information criterion to use. There was also a negligible change from 2 to 3 classes, which suggests that 2 is adequate. However, the BLRT gives a totally different view and suggests 4 classes, perhaps a consequence of the lack of identifiability and the calculation of degrees of freedom. Extra classes always seem to produce strange results, where one class splits into two reasonably different classes, so that a constant class becomes an increasing class and a decreasing class.
Anyway, what I did was take Traj.dat and extract the data for the first dataset. I then fitted some models using Latent GOLD with the Syntax module (much better than Mplus for most analyses of this form) to define the regressions, using a linear effect over time for each class. I fitted models with 1, 2 and 3 classes with a linear effect, and with 2 and 3 classes with no effect over time, and used BIC as the model selection method. Unsurprisingly, the 2-class model with no time effect was the best. As there is not a lot of data on the rapists, it is always going to be difficult to determine whether there is a change over time.
As a general point, I think that 4 time points is insufficient for longitudinal latent class analysis, even if some parametric form is assumed for the probabilities over time. How many time points are sufficient is a question that should be answered.
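For readers unfamiliar with BIC-based model selection, here is a minimal sketch of the idea in Python. It uses scikit-learn’s GaussianMixture on synthetic data as a generic stand-in for a latent class model — the actual analysis above was done in Latent GOLD on binary longitudinal data, so the data, class counts and helper function here are illustrative assumptions, not a reproduction of that analysis:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_k_by_bic(data, candidate_ks, seed=0):
    """Fit a mixture model for each candidate number of classes and
    return the k with the lowest BIC, plus all BIC values."""
    bics = {}
    for k in candidate_ks:
        model = GaussianMixture(n_components=k, random_state=seed).fit(data)
        bics[k] = model.bic(data)  # -2*logL + (num. parameters)*log(n)
    best_k = min(bics, key=bics.get)
    return best_k, bics

# Synthetic data: two well-separated groups standing in for two latent classes.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 1.0, (200, 1)),
                  rng.normal(8.0, 1.0, (200, 1))])

best_k, bics = select_k_by_bic(data, [1, 2, 3])
print(best_k)  # BIC's complexity penalty should reject a needless 3rd class
```

The point mirrored from the comment above: a model with more classes always fits at least as well in raw likelihood, so a penalized criterion like BIC is what keeps you from "finding" extra classes that the data do not support.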
Thank you so much for doing this work and sharing what you’ve found.
My opinion that throwing out the number of rapes per year is wrong actually DOES make sense. Here’s why:
As you can see, the authors framed this paper, especially in the intro but in the discussion too (more implicitly), as a challenge to Lisak’s prior findings (on older commuter college students) that most rapists were repeat offenders who committed the vast majority of rapes (findings mirrored by McWhorter et al.’s research on young Navy recruits). Then, Swartout and colleagues led journalists and policy makers to believe that they had refuted Lisak’s findings, even though they had only compared their (rotten/invalid) apples to Lisak’s oranges. Swartout even told FiveThirtyEight that they COULDN’T TELL whether the guys had committed multiple rapes in a particular year. Not only is that not true, but comparable descriptive statistics for their publicly available dataset are totally consistent with Lisak’s findings (see PubPeer comment 1 for link to my simple frequency analyses).
Consider these titles of articles by journalists who read/looked at the paper and, in two of those cases, DISCUSSED the paper with Swartout: “What If Most Campus Rapes Aren’t Committed by Serial Rapists” (FiveThirtyEight); “Researchers [including me] Push Back on Criticisms of Well-Known Serial Rapist Study [Lisak & Miller’s study]: Two studies say most rapes are committed by serial offenders; a new one finds the opposite” (The Huffington Post); and “The Hunting Ground Uses a Striking Statistic About Campus Rape [Lisak’s] That’s Almost Certainly False” (New York Magazine).
Where in those titles does it say anything about rape “over a number of years”? Sure, Swartout didn’t write those titles, but two of those three journalists spoke to him as a main source and came away with those misunderstandings.
So yes, it does matter that they threw out multiple rapes per year. It matters because the fact that many guys they labeled as “not serial rapists” actually ARE repeat rapists who unambiguously admitted “more than 5” was NEVER MENTIONED in the paper. That is, they didn’t just throw them out, but they effectively hid that fact. It matters because the paper clearly gives the false impression, to those who aren’t LCGA experts like you, that they’re refuting Lisak’s simple descriptive statistics. It matters because journalists who talked to Swartout completely misunderstood what he had allegedly found, and because those journalists framed and wrote their articles about the paper in ways that totally misled their readers, including policy makers.
Finally, it matters that policy makers HAVE been misled, and are STILL being misled, because those multiple rapes per year were not acknowledged. And that’s not some theory of mine. It’s what policy makers have been saying (i.e., parroting those misleading headlines) and what they’ve been DOING while citing this paper as support for their statements and actions. This is common knowledge out in the field, and it’s exactly why Swartout and colleagues a priori rejected the 2-class solution in favor of the 3-class solution, an issue that was a big focus of GSU’s experts when they looked closely at the paper after I filed my misconduct complaint.