Do journals walk the walk when it comes to publishing replications? In the first installment in this series of guest posts, Mante Nieuwland, of the Max Planck Institute for Psycholinguistics, described a replication attempt of a study in Nature Neuroscience that he and his colleagues carried out. Today, he shares the story of their first submission to the journal.
Our initial expectation was that Nature Neuroscience would welcome our replication effort for review, because of two Nature editorials. The August 2016 Nature editorial titled “Go forth and replicate” actively solicited submissions such as ours to its journals: “We welcome, and will be glad to help disseminate, results that explore the validity of key publications, including our own.” The January 2017 Nature editorial titled “Replication studies offer much more than technical details” reiterated the journal’s commitment to replication, highlighted its value to science, and characterized the exact kind of replication attempt we offered as “the practice of science at its best.”
Of course, we mentioned these Nature editorials in our cover letter along with a description of our study and results: “Given the unfolding replication crisis in neuroscience and other disciplines (Button et al., Nature Reviews Neuroscience, 2013), and in light of the recent editorials by the Nature Publishing Group about the importance of doing and publishing replications (e.g., Nature News, Aug. 2016, Jan. 2017), we sincerely hope that you will consider our manuscript for review. Our work is highly provocative, fits very well into the spirit of the times, and carries crucial implications for the neuroscience of language.”
A week later, Nature Neuroscience triaged our paper; that is, they rejected it without sending it out for peer review. The editorial team thought that our paper was not suitable for the general readership of Nature Neuroscience and would be better appreciated in a more specialized journal. We were surprised by this disappointing news, as were many of our colleagues who were not involved in the replication study. After all, if the original study was general enough for Nature Neuroscience, then our replication study should be as well.
We decided to appeal, but first wanted to demonstrate that there was a general interest in our study. We posted our study as a “contradictory results” pre-print on bioRxiv. Within three days of posting, our preprint had amassed a great deal of online attention, ranking in the 99th percentile of all research outputs ever scored by Altmetric. We appealed on 3 March 2017, arguing that the position of Nature Neuroscience “is difficult to sustain when considered against the impact of the original study we attempted to replicate,” and demonstrating the general interest in our study with the Altmetric scores. We again cited from the Nature editorials about the importance of replication, and argued that “Our work gives Nature an opportunity to demonstrate its dedication with actions rather than words.”
On 14 March 2017, Nature Neuroscience notified us that they would reconsider their original decision. However, because our study challenged DUK05’s conclusions, our work would be considered as a refutation. This meant we had to reformat our manuscript as a short “Correspondence.” The original authors would have one week after we submitted our Correspondence to provide a commentary, and these documents together would be sent out to review. Nature Neuroscience would try to obtain reviews from the researchers who reviewed DUK05.
We tried to appeal this decision, citing another Nature editorial that came out around that time stating that Nature has no dedicated format for replication research. None of the previously published refutation correspondences in Nature had ever involved a direct replication, let alone a large-scale replication study. Unfortunately, this appeal was unsuccessful. Nature Neuroscience suggested that if we wanted to publish the full data report, we were welcome to do so in another Nature journal called Scientific Data, which has published replication data sets. To my knowledge, however, none of those data sets involve replications of studies published in Nature journals.
In the meantime, a public commentary on our pre-print by the authors of DUK05 appeared, intended to “highlight some features of their project that undermine confidence in the purported non-replication.” The commentary focused solely on the original analysis, and did not even mention the improved analyses or Bayesian analyses. Many of the arguments in the commentary were inaccurate or misconceived, but a discussion of those arguments is beyond the scope of this post. The commentary concluded: “We firmly believe that replication studies are important in science. It is also important that those replications be done properly.” The implication was that the failure to replicate occurred because we had not done a proper job. In the end, however, the commentary was helpful to us, and we updated our re-submission so that we straightforwardly and explicitly addressed each concern.
Importantly, the commentary also acknowledged that their methods description in DUK05 was incomplete: their materials had also included a set of filler sentences (i.e., sentences that do not contain the critical experimental manipulation), which our replication study had not included, and “This discrepancy could have been avoided if we had been apprised of the experimental procedures and aims of direct replication before the experiments were conducted.” The authors seemed to deny that we had requested materials for the purpose of direct replication, essentially faulting us for not consulting them, which baffled us. This issue illustrates a more general point: publications should be accompanied by the data and materials of the study in the first place, so that there is no need to ‘check’ with the original authors (see also here), which in any case may not be sufficient.
In the meantime, another methodological omission in DUK05 surfaced: they had performed a baseline correction procedure, subtracting the average voltage within a 500 ms time window preceding the onset of the articles/nouns from the ERPs elicited by those words, but this was not reported in the original manuscript. Baselining is a standard practice in ERP research, although the specific time window used for the correction can differ from study to study and affects the manifested form of the brain response. In our resubmission, we therefore matched this part of the procedure, although doing so did not change our results and conclusions.
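To make the procedure at issue concrete, here is a minimal sketch of pre-stimulus baseline correction on a single-channel ERP waveform. This is illustrative only: the synthetic data, sampling rate, and function name are my own, not DUK05’s pipeline; only the idea (subtracting the mean voltage in a window before word onset) comes from the description above.

```python
import numpy as np

def baseline_correct(erp, times, window=(-0.5, 0.0)):
    """Subtract the mean voltage in a pre-stimulus window from an ERP.

    erp    : 1-D array of voltages for one channel, time-locked to word onset
    times  : 1-D array of time points in seconds (0 = word onset)
    window : (start, end) of the baseline interval in seconds;
             (-0.5, 0.0) mimics a 500 ms pre-onset window
    """
    mask = (times >= window[0]) & (times < window[1])
    baseline = erp[mask].mean()  # average pre-onset voltage
    return erp - baseline

# Synthetic 1 s epoch sampled at 100 Hz, starting 0.5 s before onset
times = np.arange(-0.5, 0.5, 0.01)
erp = np.sin(2 * np.pi * 2 * times) + 3.0  # waveform with a constant DC offset

corrected = baseline_correct(erp, times)
# After correction, the mean voltage in the pre-onset window is ~0
print(abs(corrected[times < 0].mean()) < 1e-9)
```

The point the post makes about analysis choices can be seen directly here: changing `window` shifts every sample of the corrected waveform by a different constant, which is why the chosen baseline interval can change how an effect looks.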
DUK05 thus omitted a crucial data processing procedure and failed to report the full set of materials, which raised new questions. For example, why was the baseline procedure different from the most common procedure in the lab and different from a highly similar study on prediction from that same lab? Do the results depend on the specific analysis choice? More importantly, these details arguably affect the publication record and methodological completeness of the paper.
We submitted our correspondence on 2 May 2017.
In tomorrow’s installment, find out what happened when the team submitted their second manuscript to Nature Neuroscience — and the lessons they’ve learned from this case.