One team’s struggle to publish a replication attempt, part 3

Mante Nieuwland

Which journals will publish replications? In the first post in this series, Mante Nieuwland, of the Max Planck Institute for Psycholinguistics, described a replication attempt of a study in Nature Neuroscience that he and his colleagues carried out. Yesterday, he shared the story of their first submission to the journal. In the final installment today, he explains why the paper was eventually published in another journal.

We received a confirmation of our submission that restated the refutation procedure: the original authors had one week to provide comments on our submission, after which the journal would send our correspondence and those comments to reviewers, who could include the reviewers of DUK05.

More than a week later, on May 11th, I e-mailed Nature Neuroscience to explain the omission of the baseline correction procedure in DUK05, why we included new analyses to address that issue, and the problem with the filler materials. I also added “Whether this is grounds for requesting an erratum, I cannot judge.”

I also raised the issue of data availability, because two other researchers, Shravan Vasishth and Florian Jaeger, had told us that they requested the original data of DUK05 to perform re-analyses, without any result. My e-mail thus stated that “our group and several other groups have asked for data from the original study, but thus far DeLong et al. have not shared any data, so that the original analysis and results cannot be verified, and an improved analysis of the original result cannot be performed.”

In the meantime, however, the original authors asked us for the new data that went into our reported analyses. We uploaded more of our data to the Open Science Framework and notified Nature Neuroscience that we were doing so. Nature Neuroscience said that they hoped DeLong et al. would make data available, but that the reviewers would just have to make do with whatever DeLong et al. provided.

In other words, Nature Neuroscience ignored what seems to be a blatant violation of the journal’s policies, which state that authors must

make unique materials promptly available to others without undue qualifications. Any restrictions on the availability of materials or information must be disclosed to the editors at the time of submission. Any restrictions must also be disclosed in the submitted manuscript. After publication, readers who encounter refusal by the authors to comply with these policies should contact the chief editor of the journal. In cases where editors are unable to resolve a complaint, the journal may refer the matter to the authors’ funding institution and/or publish a formal statement of correction, attached online to the publication, stating that readers have been unable to obtain necessary materials to replicate the findings.

A caveat here is that I am not aware of the journal’s policies at the time DUK05 was submitted, nor of any current policy on applying data-sharing requirements retroactively. But no restrictions were disclosed in DUK05, and nothing happened: Nature Neuroscience did not explicitly acknowledge or respond to my mention of the methodological omissions in DUK05. To our knowledge, the journal has so far not taken any of the actions described in its policy, and Nature Neuroscience readers remain uninformed of these omissions.

Two months later, on July 4th, I was able to view the response from DeLong et al. in the journal’s submission portal. I cannot reveal the details of that response, but it showed a reanalysis of the DUK05 data with an analysis similar to our improved analysis (not identical, however, because it used a different dependent variable, namely voltage in the 300-500 ms time window after word onset, instead of the 200-500 ms window used in DUK05 and in our replication study). It also included two other, older datasets that were not direct replications but came from related studies with different materials and different filler sentences (one of those datasets belonged to another, already published paper that was not cited in the response). They reported a replication of DUK05: a statistically significant effect at the articles and at the nouns. The response also showed that, with the improved analysis, the effect in the DUK05 data alone was not statistically significant. Unusually, not a single ERP waveform was shown.

In the meantime, we had discovered an error in our calculations of the question accuracy. I wrote to Nature Neuroscience on July 5th to report this mistake, and we resubmitted a version with the correct numbers because “It would be relevant for a reviewer to have the correct data wrt our accuracy scores. We have also uploaded all log files and new accuracy data to our OSF page.” Referring to the response from DeLong et al., I also stated that “we have several major concerns with their data (no ERP waveforms, no available files etc.) and we have noted incorrect information in their descriptions and citations. it is unclear to us what the rest of the procedure is, and whether there is going to be opportunity to correct some of these errors.”

On July 11th, Nature Neuroscience responded that they had given DeLong et al. extra time, and that DeLong et al. would be sent our revised version, which would delay the process further. More than a month later, on August 22nd, Nature Neuroscience informed us that the two papers would finally be sent out for review.

More than two months later, on November 8th, we received the editorial decision that our paper was rejected. As editorial letters go, it did not say much except that our conclusions did not significantly challenge the conclusions of DUK05, and it merely summarized some of the topics mentioned by the three reviewers (R1-R3). R1 was very positive about our paper and supported our conclusions, but R2 and R3 had a range of concerns, which I cannot cite directly; below is the gist:

R2 wanted to see a head-to-head comparison between our results and those of DUK05 when precisely the same methods were used. However, we had made all correlation results available for review (with the original and the new baseline correction), and we used a Bayesian analysis to test whether we replicated both the size and direction of the original effect, which was the case for the nouns but not the articles. Somehow, all these data and analyses seem to have been ignored or missed by this reviewer.

Moreover, this reviewer did not pick up on the discrepancy between the time windows analyzed in our improved analysis and in the DeLong et al. response (which was supposed to copy our analysis). R2 thus faulted us, rather than DeLong et al., for being unclear about the data analysis, even though all our analyses were reproducible and followed the details of DUK05.

R3 was very negative about our efforts and argued against publication on several grounds. First of all, R3 made what I consider an ad hominem argument by suggesting that we intentionally failed at the replication. R3 also suggested that our data were collected by poorly trained technical staff, and that we “misplaced” electrodes. In fact, we did not misplace electrodes: our laboratories had different EEG channel montages, so we had to interpolate some channels for one lab to arrive at a common set of channels. R3 thus clearly demonstrated a lack of basic knowledge of how an EEG lab operates.
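
For readers unfamiliar with this step: “interpolating to a common channel set” simply means reconstructing the few electrodes one lab lacked from the signals of neighbouring electrodes (typically with spherical splines). Below is a minimal, purely illustrative sketch of that general approach using MNE-Python; the toolbox and channel names are examples for illustration, not a description of our actual pipeline:

# Illustrative sketch only (MNE-Python as an example toolbox; channel names hypothetical):
# reconstruct channels missing from one lab's montage by spherical-spline interpolation,
# so that both labs end up with the same set of channels.
import numpy as np
import mne

def add_and_interpolate(raw, missing_chs):
    """Add flat placeholder channels, then interpolate them from neighbouring electrodes."""
    info = mne.create_info(missing_chs, raw.info['sfreq'], ch_types='eeg')
    placeholders = mne.io.RawArray(np.zeros((len(missing_chs), raw.n_times)), info,
                                   first_samp=raw.first_samp)
    raw.add_channels([placeholders], force_update_info=True)
    raw.set_montage('standard_1020', on_missing='ignore')  # attach electrode positions
    raw.info['bads'] = list(missing_chs)                    # mark placeholders as 'bad'
    return raw.interpolate_bads(reset_bads=True)            # spherical-spline interpolation

# e.g., if lab B lacked two channels that lab A recorded:
# raw_b = add_and_interpolate(raw_b, ['FC1', 'FC2'])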

R3 also faulted us for not having all the original materials, said it was inappropriate to suggest that the original authors did not want to share their materials, and demanded to see evidence. I had already informed the editor that we had those emails but that I could not make them available without permission. All I could do was cite my own personal communications in which I asked for the materials with the stated purpose of replication. R3 thus suggested that our failure to replicate was either intentional or due to sloppiness, but did not seem too bothered that DUK05 had omitted crucial methodological details.

R3 also provided another, rather odd argument for rejection, namely that if the studies were published together, readers would only read our study and ignore the commentary. This does not seem like a reasonable argument for rejection, and in fact it is completely incompatible with the journal’s reason for publishing refutations together with commentaries in the first place.

We pointed out some of these issues to Nature Neuroscience in an email and briefly considered appealing yet again. However, we quickly decided against this, given that rejecting our paper based on such comments conveyed (to us at least) an intention to reject it no matter what. We submitted elsewhere after making some further edits to pre-empt the concerns raised in the review. Painfully for us, yet another Nature editorial appeared two weeks later, again showcasing commitment to replication by stating that “Rewarding negative results keeps science on track: Creating a culture of replication takes prizes, grants and magnanimity — as well as publications.”

We submitted our paper to eLife, a nonprofit publishing organisation inspired by research funders and led by scientists. About two months later, we received a long list of comments from three reviewers, most of which are published along with our paper.

Final thoughts

The purpose of this post was to provide a transparent, behind-the-scenes account of our replication study and of what happened when we submitted it to Nature Neuroscience. On the one hand, I can understand why Nature journals might be hesitant to publish replication studies: it might open the floodgates to submissions that challenge conclusions of publications in their journals (although that in itself is not necessarily a bad thing).

On the other hand, a few things in this case study stand out as clearly contradicting Nature’s commitment to replication and transparency. Nature Neuroscience triaged our study for lack of general interest, failed to follow their own submission procedure in terms of timeline, failed to follow their own policy on data and materials sharing, failed to correct important omissions in the academic record of the original study, and failed to provide, in my opinion, a fair review process (i.e., by relying on one reviewer who faulted us for a lack of clarity that stemmed from the original paper, and on one non-expert reviewer who mostly questioned our intentions and disagreed with the publication format).

In the end, the final decision letter demonstrated a lack of engagement: it did not go beyond a two-sentence summary of the reviewers’ negative comments, and it did not even attempt to explain which concerns weighed most strongly in the editorial decision, why they could not have been addressed in a revision, or which conclusions of DUK05 remained unchallenged.

Replication research may become mainstream, and that’s a good thing. Rolf Zwaan and colleagues recently argued that there is no compelling conceptual argument against replication (see also here). Surely they’re right: most researchers appreciate the importance of replication. But what about the practical reality of replication? What happens when you try to do it? The practical difficulties of doing and publishing replication research create substantial obstacles.

In my experience, reviewers and editors may place the burden of proof on replicators to account for different methodology (even if the methodology was never reported in the first place and cannot be verified), replications may be subjected to methodological critiques never raised against the original study (see also here), and reviewers and editors typically do not scrutinize claims of successful replication in the same way as claims of failed replication. In addition, as in the current case, replication studies often appear in lower-ranked journals than the original, feeding the suspicion that replications are not highly valued or are of lesser quality.

Nature prides itself on its commitment to replication research and on its increasing transparency in publishing, and while Nature is indeed developing several initiatives, it seems they have a very long way to go, like many other journals and publishers. Here are some suggestions (see also here):

  • Increase transparency of the review process, by publishing decision letters, reviewer comments and author responses (as eLife does)
  • Treat replication studies as primary research articles and not as mere refutation correspondence. The refutation correspondence format is not intended for full presentation of data, whereas the whole point of replication research is to fully present a new set of primary research data of which the details may matter a good deal. Nature states that “we do consider high-value replications, subjecting them to the same criteria as other submitted studies,” but refutation correspondences and their responses are currently not subjected to the same criteria as other studies, so Nature could start applying the same criteria (e.g., on data availability, on requiring a statistical reporting checklist), and could officially enlist statistical expertise as some other journals have done.
  • Nature could also follow the ‘Pottery Barn rule’ and take responsibility for direct replications of studies it has published, by publishing such replications (after review on technical merit), regardless of the outcome, as brief reports linked to the original and therefore appearing in the same journal, rather than relegating them to other journals such as Scientific Data.
  • Even better, Nature could dedicate a submission format to replication research across all its journals (not just Nature Human Behaviour), for example as registered reports via the Open Science Framework. Many issues that arose during our publication process could have been avoided had Nature Neuroscience had a Registered (Replication) Report format. (In fact, one lesson to take away from this experience is: if you’re going to do a large-scale replication study, do it as a Registered Report, even if that means a different journal than the original.)

Ultimately, I am happy with how things turned out, and I’m very satisfied with the format and content of our publication in the non-profit journal eLife. Regardless of what happened during the publication process, and while some senior colleagues have voiced their distrust of and annoyance with our study, we have received many more, very positive responses from colleagues and from the open science community more broadly. On that note, I want to close by arguing for replication, pre-registration, and increasing transparency (see also here and here), no matter what occurs.


22 thoughts on “One team’s struggle to publish a replication attempt, part 3”

  1. Did RW reach out to DeLong and give him a chance to offer his side of the story before running this series? What about Nature? Additionally, there are several messages referred to in the narrative that apparently cannot be linked to the story because they were not authored by Dr. Nieuwland (?) — how much of that evidence has RW reviewed and confirmed?

    1. It would be very interesting to hear from NPG editors on how they intend to fulfil all of their promises regarding the correction of science and replications. They’ve had a bit of time to think about how they are going to proceed since making all of those promises. Maybe they could do a dedicated interview with RW if they don’t wish to respond to this story directly?

    2. Let’s assume RW did not reach out to DeLong and Nature, and made no attempt to obtain messages that cannot be made public without the author’s permission. What would that imply concerning Dr. Nieuwland’s report?
      Nothing at all, I think.
      What’s your point, Dr. Jondoe?

      1. That it might be better suited for his personal blog or as a social media post, if indeed it hasn’t been subject to the standards and practices typically expected of legitimate journalists whenever they give someone use of their platform. It’s a simple question.

        1. What a great idea, Dr. Jondoe! Maybe Retraction Watch should be given access to all original materials needed to back each claim in the article. Maybe the article should be retracted or corrected if it turns out that the author only shared some of the evidence supporting his claims, gave incomplete or misleading information, or refused to provide information that was needed to independently verify his claims.

          But that would mean the standards for evidence at Retraction Watch would be even higher than those for a Nature publication.

          1. > But that would mean the standards for evidence at Retraction Watch would be even higher than those for a Nature publication.

            Well, you know what they say about glass houses and stones.

  2. Nice write-up. However, I don’t agree with the sentiment that it’s the editors and reviewers specifically who place the burden of proof on replicators. In my experience, the large majority of scientists hold the same view. We need to change the incentive system to improve publication rates of both confirmatory and contradictory replication attempts and, at the same time, educate students that studies identifying significant effects are not necessarily better designed or more meaningful than negative results.

  3. During my PhD, I attempted to replicate a published study, but I couldn’t get the results reported in the original study. Guess what my advisor said? “Oh, they (the original authors) may have done something in task administration that you are not aware of.” To clarify, this advisor was trying to tell me not to doubt the original study despite my failed replication attempt. And to clarify, my advisor had no professional relationship with the original authors.

    This is like a religion: you must continue to believe it despite contradictory evidence. What a joke.

    1. You are correct – in many, many cases, the establishment actively works against any challenge to published work by powerful scientists. IMHO it mostly derives from fear of losing funding by offending those scientists or their friends. This fear of challenging authority is the primary defect leading to the reproducibility issues we now see.

      1. The thing is, that particular study was not authored by a well-known scientist by any measure. The author was a new assistant professor. My advisor was also an assistant professor at that time. They were not friends, professional or otherwise, and never cited each other. This makes the knee-jerk reaction to preserve the status quo (and to discourage challenges to it) even more insidious!

    2. My advisor published a very high-profile paper, then asked me to follow the same methods. I only discovered later that the previous paper’s method was basically wrong (the signal was due to chemical impurities). Guess what: I was advised to change my project and forget the previous paper.

  4. While the full details of this situation cannot be known here, I think there is no question that journals are generally reluctant to publish replication papers, either positive or negative, no matter how well the studies have been conducted. The journal editors seem almost maniacally driven by the impact factor metric, which is probably the reason so many high-impact journals are publishing meta-analyses, regardless of the relative scientific rigor or value.

    A paper that reports a positive new discovery will almost always be cited more than any replications that follow. Journal editors love to publish the “discovery”, and let other journals do the dirty work of publishing subsequent studies that refine or attempt to replicate it. In this way, these journals act like the huge mining corporations that scrape away the valuable minerals (and associated profit) of a region and leave behind a mess that other groups need to repair and restore, long after they’re gone.

    I believe all journals should have an ethical commitment to publish any well-conducted, adequately-powered study that attempts to replicate a new discovery published in their journal. They should “own” what they publish, meaning that they have an obligation to see through to resolution (confirmation or refutation) any paper they publish that reports a meaningful new discovery. The current “print and run” approach makes identifying and disseminating true scientific advances highly inefficient, and harms the public health.

  5. I fully agree that the transparency of the review process is dramatically increased (my adverb) by publishing decision letters, reviewer comments and author responses. But it’s fair to say that this truly innovative strategy was pioneered and made a regular feature a long time ago by the good folks at EMBO Press (EMBO Journal, EMBO Reports, EMBO Molecular Medicine and Molecular Systems Biology, all very reputable and successful journals). It is true that eLife does this too (and kudos to them), but only to a certain extent, as they publish summaries, not the verbatim author/editor/reviewer correspondence that the EMBO journals do. Nature Comms, on the other hand, do something similar, but I believe they do not show editor correspondence, which I would argue is crucial for transparency. Why others don’t do this is unclear (well, probably it is…). It’s certainly not a cost issue, as some would like us to believe: it’s just a matter of sticking texts together and generating PDF files for download.

  6. It’s time to change the whole scientific publication system. Today there is only one winner: the publisher. All the others (scientists, research institutions, science and society) are losers. They pay for publishing and then for reading the published papers, which we no longer trust. The publishers make a very high profit from other scientists’ labor (manuscript preparation and peer review) but show no responsibility when you address serious problems in published papers. Whether the experiments are impossible to replicate or you point out clear data manipulation does not matter to them. They are not interested in scientific truth, only in increasing their already high profit.

    Research institutions should stop paying for journal subscriptions and collaborate on starting non-profit journals where scientists can publish their results and failed replication experiments.

  7. I plan to discuss this case briefly in a paper (a replication attempt of another study on predictive processing, involving eyetracking) that is under review with the Journal of Memory and Language.

    To complete this record of events, I append the response I got from Kutas lab when I asked for the data.

    I’m quite neutral in this debate; and in fact I have myself been in a situation in the distant past where I didn’t have a good enough pipeline to be able to reconstruct old data. In my lab, we are still generally unable, even in 2018, to arrive at a consistent workflow for data analysis and reporting results in papers and online repositories. So the situation that unfolded in the DUK05 case is really not very surprising to me. I think that one lesson to learn from all this is to develop more rigorous open access standards for data *and code*. Many labs have started using osf.io to release data; I myself use github but may switch at some point entirely to osf. This has to become standard operating procedure.

    ### begin
    Apr 8, 2017

    Thank you for your interest in the 2005 a/an study.

    We’ve been inundated and are dividing things up.

    For data wrangling at our end feel free to contact me directly and CC Katherine ([email protected]) and Marta ([email protected]).

    I’m comfortable with R and lmer. Also MATLAB, Python, C/C++ if it matters.

    I’m also aware of what is needed for the 2005 a/an single trial dataframe.

    Naturally, we would like to have that too, and for the same reasons.

    Unfortunately getting “the data” in that form is not so simple.

    Here’s why.

    The 2005 report analyzed the continuous EEG as time-domain average ERPs using the lab’s compiled C data analysis pipeline.

    The pipeline is rock solid and highly performant for calculating and measuring time-domain average ERPs by subjects across items or by items across subjects, i.e., exactly what is needed for the F1 and F2 ANOVAs that were industry standard in language research before mixed-effects modeling came into vogue. It’s a good system for ERP research. Marta forked the code base from Steve Hillyard’s lab, and it lives on, in spirit, as the EEGLAB project Steve Luck has been developing over the past several years.

    As always, the speed and compactness of compiled C comes with a tradeoff: the pipeline components are black boxes, only the input and output are available to users. The averager does all the single trial processing at run time and doesn’t expose the single trials along the way. This isn’t an oversight, it is good design for *average* ERP analyses. Sum-count-and-divide-by-n is a fast and space efficient algorithm. The entire motivation for averaging is that the single trials individually are not sufficiently informative to draw conclusions. Maintaining or writing the single trial epochs simply duplicates chunks of the continuous data which wastes resources and serves no purpose for this kind of analysis.

    Since the 2005 paper was an ERP analysis (within subjects, across items in each of 10 cloze bins) the upshot is that we didn’t then and don’t now have the individual single trial data in a handy format: the single trials all came and went at runtime.

    Obviously there are many ways to analyze EEG data where it *is* essential to retain single trial information, e.g., various forms of regression analysis, including mixed-effects modeling, some kinds of time-frequency analysis, and such.

    We confronted the need for single trial analysis long ago and considered various options.

    1. Trick the compiled C averager into spitting out degenerate “averages” of n=1 trial per “condition”, i.e., individual items. This can be done, but it is inefficient because the features that make the workflow smooth for pooling and averaging dozens or hundreds or thousands of single trials, make it clunky for handling them one by one. Tracking artifact exclusions (n=0) and blink corrections (= unexcluded artifacts) especially is a chore. In the end it is a long road to get from configuring the analysis to extract singleton averages to a tabular data structure like an R dataframe in a way that tracks all the run time processing behind the scenes in the pipeline.

    2. Patch the compiled C ERP averager to dump the single trials. The code base is mature and stable. It is, nevertheless, an accretion of decades of patched spaghetti for reading, processing, and writing proprietary binary format EEG and ERP data files. A patch would solve *this* problem but not the next and would contribute further to the spaghetti.

    3. Port the compiled C data processing functions and libraries to MATLAB, Python, and/or R along the lines of Luck’s ERPLAB project. Some things are trivial, e.g., averaging is a single line of code anywhere. Other things like re-implementing the exact digital filters and the blink correction algorithm from the C are more, and a lot more, work respectively. Given the ready availability of functionally similar alternatives in the different ecosystems, reinventing these wheels on different platforms didn’t seem like a good use of time.

    Given all the above, plus the maturation of open source EEG data analysis programs like EEGLAB/ERPLAB, Fieldtrip, Brainstorm, and MNE Python, which are more flexible, extensible, and, especially, introspectible than compiled C, we decided to use the C pipeline for conventional average ERPs when single trials were not needed and to handle single trials in these other toolboxes according to their strengths.

    This means our lab has evolved two complementary approaches to EEG data analysis depending on the aims.

    * We can have average ERPs computed directly from the continuous EEG data by the compiled C, albeit without ready access to the single trials.

    * We can have analyses of continuous EEG data with ready access to the single trials albeit with a processing pipeline similar but not identical to the compiled C.

    We can’t have ready access to the single trials identical to those contributing to the average ERPs computed with the compiled C.

    And the latter is what people appear to have in mind for the 2005 a/an study.

    So providing “the data” for this study as single trials is not a matter of providing the original data or the original analysis. It involves restructuring a 15-year-old project to accommodate a type of analysis that came along later and only very recently for ERP studies.

    So that’s the hold up on the 2005 single trial data. The impediments to getting exact single trial reanalysis of the 2005 data are not insurmountable, just a great deal of work.

    Given the advances in single trial data analysis since 2005, our view is that conducting new studies specifically designed to take advantage of the new methods is likely to be more instructive than dropping old studies into the machinery and turning the lmer crank. And this is our approach as we move forward.

    But since different groups may view the costs and benefits of retrospective analysis differently, we are open to providing unfettered access to all the original data, the binary executables and source code we used to analyze it, along with all the lab manuals and instructions either onsite here or offsite. With the clear understanding, of course, that unfettered access is a two-way street and we would likewise have the same fully open access to the methods and results of any and all analyses conducted on our data.

    Best,

    — Tom

    P.S. If it’s of interest there are some possible options for access to the 2005 data. If the individual (non-aggregated) EEG data are to remain here, the simplest approach might be a MOU, ssh access to our server plus some UCSD IT admin to get on the VPN, and adding a co-investigator to our IRB protocol. If network bandwidth is too low for remote access I have an experimental virtual box VM snapshot of our system. Our code base compiles and runs, and as far as I can tell, it behaves like a real system. That would be everything in one file that works on anything that can run VirtualBox. The hitch would be that shipping the EEG data offsite for storage means it would have to be added to an IRB protocol there and with data security procedures that complied with our IRB. But that should be a short amendment to your protocol. If you want to explore these options or have something else in mind for access to the data please let me know.

    Fair warning, even with the lab manuals, our cook-book tutorial, and lab wiki there is a steep learning curve and we just don’t have the resources to do online tutorials or be a help desk for everyone who wants to work with our old data. So if you think you’ll want more than just access to the data, programs, and documentation, please email Marta. The best option might be to visit here in San Diego.

    ### end
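
    To make the “sum-count-and-divide-by-n” point above concrete: a running averager never needs to keep the single-trial epochs, which is exactly why the single trials “came and went at runtime” and cannot simply be handed out afterwards. Here is a purely illustrative Python sketch of that idea (this is not the lab’s compiled C code, just the general scheme the email describes):

    # Illustrative only: running ERP averaging that consumes single trials one at a time
    # and never stores them -- which is why such a pipeline cannot later produce a
    # single-trial dataframe. Not the Kutas lab's actual code, just the idea it describes.
    import numpy as np

    def average_erp(epoch_stream, n_channels, n_samples, reject=None):
        """epoch_stream yields (n_channels, n_samples) arrays; reject(epoch) -> True drops a trial."""
        total = np.zeros((n_channels, n_samples))
        n = 0
        for epoch in epoch_stream:                  # each single trial...
            if reject is not None and reject(epoch):
                continue                            # artifact-rejected trials are skipped
            total += epoch                          # ...is added to the running sum
            n += 1                                  # ...and counted
        return total / n if n else total            # divide by n; the trials themselves are gone

    # Example with fake data: 100 trials, 32 channels, 500 samples per epoch
    rng = np.random.default_rng(0)
    fake_trials = (rng.standard_normal((32, 500)) for _ in range(100))
    erp = average_erp(fake_trials, 32, 500, reject=lambda e: np.abs(e).max() > 5)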

    1. I should add (and thanks to Mante for reminding me about this in a personal communication) that when I first got the response from Kutas lab saying that the data were not available, I was very surprised. I thought that data should be easily available.

      When I say now that I am not really surprised by the fact that the data are not available, what I mean is that, upon reflection, I realize that in general we (psycholinguists in general) do not have a good enough workflow to be able to easily make our data available.

      When I think about my own last 15 or so years in psycholinguistics, I realize that I didn’t pay enough attention to the question of data release until quite recently. This is why I now think it’s not surprising that the Kutas lab can’t easily release data—almost nobody has a well-designed workflow such that they can release data (also not my own lab). This is something that needs to change.

      1. There’s a lot to be said for the ‘ready to go VM’ method of providing a means to replicate the analysis conducted.

        Even if the custom source code, the build process, and the actual commands run in the analysis are archived, that doesn’t mean a lot unless the whole thing is entirely self-contained. A correction or bug introduced to a library that it depends on could significantly alter the result, with nothing more than a single comment on a commit, or no disclosure at all in the case of proprietary software.

        Here’s an interesting thought… knowing this, would there be value in lodging the entire workflow with some organisation that could rerun the analysis with later versions of the libraries used, and monitor changes in the results? Would be a great test for the creators of the libraries, if nothing else.
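
        One toy way to picture that kind of monitoring: archive the key numbers together with the exact library versions that produced them, and have every rerun recompute and compare. A hypothetical Python sketch (the file name, tolerance, and the “run_analysis” step are made up for illustration):

        # Hypothetical sketch of "rerun the analysis and monitor changes in the results".
        # The archived record stores the key numbers plus library versions; a later rerun
        # with newer libraries recomputes them and reports any drift. File name, tolerance
        # and the analysis step itself are invented for illustration.
        import json
        import importlib.metadata as md
        import numpy as np

        def snapshot(result, libs, path="archived_result.json"):
            """Save the result and the exact library versions it was produced with."""
            record = {"result": np.asarray(result).tolist(),
                      "versions": {lib: md.version(lib) for lib in libs}}
            with open(path, "w") as f:
                json.dump(record, f)

        def rerun_and_compare(result, libs, path="archived_result.json", tol=1e-9):
            """Recompute with the current libraries and flag numerical drift or version changes."""
            with open(path) as f:
                record = json.load(f)
            drift = float(np.max(np.abs(np.asarray(result) - np.array(record["result"]))))
            changed = {lib: (record["versions"].get(lib), md.version(lib))
                       for lib in libs if record["versions"].get(lib) != md.version(lib)}
            return {"max_drift": drift, "exceeds_tol": drift > tol, "changed_libs": changed}

        # usage sketch: snapshot(run_analysis(), ["numpy"]) at publication time;
        # later, rerun_and_compare(run_analysis(), ["numpy"]) flags any change.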

    2. About 25-30% of people I ask for raw data behind a published paper are either unable or unwilling to release it. It is hard to believe in many of these cases that something isn’t very wrong in the original analyses.

      It’s high time psycholinguists started releasing data and code as a matter of course as soon as their paper is published, or even before. There really is no excuse any more not to.

      1. Just one point relative to the non-availability of the data: DeLong et al. eventually did gain access to their trial-level data from the original study. We know this because they reported a reanalysis of that data (using lmer) in their commentary on our manuscript, which was sent to reviewers. (As Mante notes, for some reason the analysis was on a different time window than the original study.) In any case, now that they have recovered the data, perhaps it would be worth requesting it again for independent verification.

  8. Excellent write up RW.

    Imagine if that had been a replication attempt for a hugely successful preclinical trial, already published using several internationally respected centres and world leaders in their fields, with a patented molecule ready to go into a human clinical trial.

    What if a subsequent preclinical study found it was all hocus pocus, had solid data to prove it, and was blocked from publishing it by the very journal that states it wishes to publish these types of replication studies?

    Those who block the publication of the reproduced experiment are akin to those who did the first experiment.

    As Burke said, “the only thing necessary for the triumph of evil is for good men to do nothing”. We now add women to this, of course.

  9. I remain unconvinced. They had three referees, two of whom were unenthusiastic. He does not share the referees’ comments verbatim (I do not understand why; they are anonymous, so the confidentiality of correspondence does not apply). Furthermore, Nat Neurosci did offer to publish his study as a “refutation”, but he did not want that. At the very least, I’d say that there are two sides to this story.

    1. Of course, there are always two sides to a story. But, for the record, I explicitly asked for permission to publish the reviewer comments (as can be seen in my attached e-mails); Nature Neuroscience refused, and that is why I do not share them. And Nature Neuroscience did not ‘offer to publish this study as refutation’: after the initial rejection, they suggested the refutation format, I appealed, but eventually it was submitted and rejected as a refutation.
