Authors questioning papers at nearly two dozen journals in wake of spider paper retraction

Jonathan Pruitt

Talk about a tangled web.

The retraction earlier this month of a 2016 paper in the American Naturalist by Kate Laskowski and Jonathan Pruitt turns out to be the tip of what is potentially a very large iceberg. 

This week, the researchers have retracted a second paper, this one in the Proceedings of the Royal Society B, for the same reasons — duplicated data without a reasonable explanation. 

Dan Bolnick, the editor of the American Naturalist, tells us:

After learning about the problems in the [2016] data set, I asked an associate editor to look at data sets in other publications in the American Naturalist [on which Pruitt was a co-author] and we have indeed found what appears to be repeated data that don’t seem to have a biological explanation.

He isn’t alone. Bolnick added:

I am aware that there are concerns affecting a large number of papers at multiple other journals, and at this point I’m aware of co-authors of his who have contacted editors at 23 journals as of January 26. 

In a detailed thread about the first retraction, Laskowski, of the University of California, Davis, recounted how she’d learned about problems with her 2016 American Naturalist study of spider behavior, which relied on data she’d received from Pruitt, who has made an impressive career out of supplying results for collaborative studies. 

The new retraction notice states: 

The first author of the paper by Laskowski & Pruitt, ‘Evidence of social niche construction: persistent and repeated social interactions generate stronger personalities in a social spider’, published in Proceedings of the Royal Society B, recently scrutinized the raw data associated with the paper, after being made aware of problems in the raw data of a related follow-up study. The data were collected in the laboratory of the last author. The first author found duplicated values in the raw data that were concentrated in two treatment groups. The presence of these duplications cannot be adequately explained nor corrected, and if removed, the finding that increasing familiarity increases individual behavioural variation disappears. As such, results drawn from these data cannot be considered reliable, and the authors therefore wish to retract the paper in question.

Laskowski has written a lengthy blog post about her experience, laying out the chain of events that led to the two retractions — as well as to what appears to be a likely third retraction, of a paper in Biology Letters titled “Persistent social interactions beget more pronounced personalities in a desert-dwelling social spider.” The post is worth reading. 

In it she states:  

I’d also like to note that since the first retraction (the Am Nat article) has been made public, several of Jonathan’s other co-authors have reached out to discuss potential issues in their own papers that they have collaborated on with him. Given the problems in my data sets, these folks are proactively investigating data that they received from Jonathan and are communicating with the relevant journal editors about any necessary next steps they may have to take. It seems that everyone’s top priority is to ensure the integrity of the scientific record.

Pruitt, who was at the University of California, Santa Barbara, when the research was conducted, has since moved to McMaster University in Hamilton, Ontario. Pruitt said on Twitter earlier this month that he and his colleagues were retracting other papers, but has not responded to any of our requests for comment about the first retraction or the new one. (He is doing field work in Northern Australia and Micronesia at the moment, according to Bolnick.)

Meanwhile, Laskowski says she is looking for silver linings: 

I am now trying to focus on any potential positive benefits of this experience. This has been an absolute crash-course in intensive data forensics for me. When I received the first set of data for these papers, I was a final-year PhD student overjoyed to have initiated my own independent collaboration with a more established scientist. Science is built on trust, and we all trust that our co-authors perform their parts of the collaboration as accurately as possible, to the best of their abilities. So at the time, I walked through my regular data exploration techniques (looking at the spread, looking for outliers, etc) and nothing popped up as unusual. I’d like to think that now as a more established scientist my data exploration and interrogation methods have already improved from where they were 5-6 years ago. But suffice it to say that from here on out, any data sets I receive (or produce) will get a full strip-search: check for duplicates, check for duplicated sequences, look for any too-precise relationships among different behavioral measures. PO Montiglio and I (& others) have already been discussing building a small R package that could look for the more complicated problems like those we found in these data that we hope could be useful to help others avoid horrible situations like this.
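
The checks Laskowski lists translate directly into code. Below is a minimal sketch of that kind of screen, written in Python with pandas (hypothetical — this is not the R package she and Montiglio are planning, and the function, file, and column names are invented for illustration): flag duplicated values, duplicated runs of consecutive values, and too-precise relationships between measures.

```python
import itertools

import pandas as pd


def forensics_report(df: pd.DataFrame, value_cols: list, run_length: int = 5) -> None:
    """Screen a data set for the red flags described above: duplicated
    values, duplicated runs of values, and too-precise relationships."""
    for col in value_cols:
        # 1. Duplicated values. Some repetition is normal for coarse
        # measurements, so this flags values for inspection, not accusation.
        counts = df[col].value_counts()
        print(f"{col}: {(counts > 1).sum()} values occur more than once")

        # 2. Duplicated sequences: the same run of consecutive values
        # appearing in more than one place in the column.
        runs = [tuple(df[col].iloc[i:i + run_length])
                for i in range(len(df) - run_length + 1)]
        repeats = sum(1 for r in set(runs) if runs.count(r) > 1)
        print(f"{col}: {repeats} distinct runs of length {run_length} repeat")

    # 3. Too-precise relationships: near-perfect correlations between
    # behavioral measures that should each carry independent noise.
    for a, b in itertools.combinations(value_cols, 2):
        r = df[a].corr(df[b])
        if abs(r) > 0.99:
            print(f"{a} vs {b}: r = {r:.4f} -- suspiciously exact")


# Hypothetical usage; the file and column names are invented.
# df = pd.read_csv("spider_boldness.csv")
# forensics_report(df, ["boldness_trial1", "boldness_trial2"])
```

None of these flags proves misconduct on its own — as Laskowski notes, routine data exploration can look clean — so the point of such a screen is simply to surface patterns that deserve a closer look.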

Comments

  1. Can someone do a thorough review of the data in this paper: “Site-specific group selection drives locally adapted group compositions” by Pruitt and Goodnight in Nature?

    Figures 1(b) and 1(c) appear odd.

    The data are available from Dryad here: https://datadryad.org/stash/dataset/doi:10.5061/dryad.87g80

    This is a cautionary check, in response to the number of papers co-authored by Jonathan Pruitt that are under retraction.

    1. I’ve already looked at this data. I haven’t read the paper so I don’t know what to expect of these variables (e.g., one of them might be a constructed variable), but try plotting column #5 vs. column #6 and see what it looks like.
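
    For anyone who wants to try that check, here is a minimal Python sketch. It assumes the Dryad archive linked above has been downloaded and unpacked locally; the file name is a placeholder, since the actual name isn’t given in this thread.

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    # Placeholder file name: substitute the actual CSV from the Dryad archive.
    df = pd.read_csv("pruitt_goodnight_nature_data.csv")

    # Columns #5 and #6 as suggested above (zero-based positions 4 and 5).
    x, y = df.iloc[:, 4], df.iloc[:, 5]
    plt.scatter(x, y, s=10)
    plt.xlabel(str(df.columns[4]))
    plt.ylabel(str(df.columns[5]))
    plt.title("Column #5 vs. column #6")
    plt.show()
    ```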

  2. It might be interesting to investigate his move from UCSB to McMaster (I admit I haven’t read much about this case, or seen the Science feature yet). A first search for Pruitt still yields top hits for his [former] UCSB pages. What’s most telling to me is that UCSB seems to have something to say: general news features about him and his work – not just his old faculty pages – have been scrubbed from UCSB domains.

  3. It’s interesting that in Kate Laskowski’s blog, she says “One of my favorite hypotheses in this regard was the “social niche specialization hypothesis” … but somewhat disappointingly (to me, at least) I found that sticklebacks show strong individual differences in behavior, but repeated social interactions within the same group do *not* seem to strengthen these differences … But I was convinced that social niches are likely important …” and “At the conference, we outlined the experimental design and talked about our predictions and then we parted ways, …”

    It is clear that the researcher had a conscious or unconscious bias, and that she communicated that bias (and her predictions, and hopes) to the person who went off and generated a bunch of data. There’s an incentive for the person generating the data to provide data that matched her hopes. It sounds like this wasn’t quite double-blind.

    It’s always dangerous when a researcher has a “hope,” or something she’s “convinced” of, and then tests that theory without STRICT controls in place.

  4. Animal personality!

    What the “F” is personality? How in the “F” can one objectify a subjective aspect of the natural world? At best this is a study of animal behavior accompanied by stipulative and unsubstantiated claims that the behavior reflects some measurable, undefinable abstraction called personality.

    Behavioral sciences are not science (unless, by behavioral science, one intends to focus solely on behavior and forgo any theoretical inferences about the relation between the behavior and the experimental abstraction).

    At best, given the total absence of well-formed quantitative theory, the so-called scientific reach of these “theories” is “effect present/effect absent.” The numerical assignment might as well be “more,” “less,” and “equal.”

    Pathetic.

    1. Animal “personalities” are similar to human personalities: consistent, individual differences in behaviour patterns.
