“Truly extraordinary,” “simply not credible,” “suspiciously sharp:” A STAP stem cell peer review report revealed

Retraction Watch readers are of course familiar with the STAP stem cell saga, which was punctuated by tragedy last month when one of the authors of the two now-retracted papers in Nature committed suicide.

In June, Science’s news section reported:

Sources in the scientific community confirm that early versions of the STAP work were rejected by Science, Cell, and Nature.

Parts of those reviews have surfaced, notably in a RIKEN report. Science’s news section reported:

For the Cell submission, there were concerns about methodology and the lack of supporting evidence for the extraordinary claims, says [stem cell scientist Hans] Schöler, who reviewed the paper and, as is standard practice at Cell, saw the comments of other reviewers for the journal. At Science, according to the 8 May RIKEN investigative committee’s report, one reviewer spotted the problem with lanes being improperly spliced into gel images. “This figure has been reconstructed,” the RIKEN report quotes from the feedback provided by a Science reviewer. The committee writes that the “lane 3” mentioned by the Science reviewer is probably the lane 3 shown in Figure 1i in the Nature article. The investigative committee report says [co-author Haruko] Obokata told the committee that she did not carefully consider the comments of the Science reviewer.

The full reports, however, have not been made available. Retraction Watch has obtained the full text of the editor’s cover letter and reviews of the rejected Science paper. The reviews are full of significant questions and doubts about the work, as would be expected in a rejection. We present them here, to fill in some of the gaps and help readers consider how the research eventually made it through peer review:

21 August 2012

Dr. Haruko Obokata

Anesthesiology

[ROOM NUMBER REDACTED]

Brigham and Women’s Hospital/Harvard Medical School

75 Francis Street

Boston MA 02115

USA

Dear Dr. Obokata:

Manuscript number: [REDACTED]

Thank you for submitting your manuscript “Stress altered somatic cells capable of forming an embryo” to Science. We have now received the detailed reviews of your paper. Unfortunately they are not positive enough to support publication of the paper in Science. Although we recognize that you could likely address many of these specific criticisms in a revised manuscript, the overall nature of the reviews is such that the paper would not be able to compete for our limited space. We hope that you find the comments helpful in preparing the manuscript for submission to another journal.

We are grateful that you gave Science the opportunity to consider your work.

Sincerely,

NAME REDACTED, Ph.D.

Senior Editor

REVIEWS

Reviewer 1

This paper claims that cells from any somatic tissue can be reprogrammed to a fully pluripotent state by treatment for a few days with weak acid.

This is such an extraordinary claim that a very high level of proof is required to sustain it and I do not think this level has been reached. I suspect that the results are artifacts derived from the following processes: (1) the tendency of cells with GFP reporters to go green as they are dying. (2) the ease of cross contamination of cell lines kept in the same lab.

I believe that the green transformation is indeed due to stress as reporters are often upregulated in stressed or dying cells. But the cells that go green may not be the ones in the later green colonies. I think these are more likely to be ES cells acquired by cross contamination and selected for growth by the B27-LIF medium. This would explain the results on marker expression, promoter demethylation, differentiation, and chimera formation. In Fig. 2B and the other RT-PCR studies, it is not stated whether the Y-axis is linear or logarithmic. If it is linear, which seems more likely, then I am very surprised that all of the pluripotency genes measured in the ESC control have virtually the same RNA abundance, which exceeds that of GAPDH.

The claim about all the other tissues being similarly reprogrammed by low pH treatment is truly extraordinary. Much more detail needs to be provided about the nature of the cells and the culture conditions. Otherwise this is simply not credible, since the principal cell type of several of these tissues is postmitotic.

The DNA analysis of the chimeric mice is the only piece of data that does not fit with the contamination theory. But the DNA fragments in the chimeras don’t look the same as those in the lymphocytes. This assay is not properly explained. If it is just an agarose gel then the small bands could be anything. Moreover this figure has been reconstructed. It is normal practice to insert thin white lines between lanes taken from different gels (lanes 3 and 6 are spliced in). Also I find the leading edge of the GL band suspiciously sharp in #2-#5.

Minor points:

1. It is by no means clear that newt cells can revert to “stem cells” (presumably this means pluripotent stem cells). Recent work on newt regeneration has indicated conservation of tissue type in most cases. The Wolf (1895) reference is out of date.

2. p.8 heading: “exposure” not “expose”.

3. The sentence on p.12 line 6 up “mixture …. analyzed” is very confusing.

4. In the Fig. 4 legend it should be made clear which experiments are done with 2N and which with 4N hosts.

On the positive side, I do agree with the authors that the many claims of pluripotent stem cells from adult mammals probably arise from partial reprogramming due to stress followed by selection in culture. But I don’t think such cells often match ESC in pluripotent behavior, especially the ability to form chimeras in 4N hosts.

Reviewer 2

Obokata et al. describe a novel method for reprogramming somatic cells to a state that possesses many features of pluripotent stem cells. By subjecting CD45+ hematopoietic cells to low pH, mechanical trituration, and culture in LIF-B27 medium, ESC-like properties can be induced, including ESC-like levels of Oct4, Sox2, and Nanog mRNA and critically, the potential for germline transmission and tetraploid complementation — two of the most stringent functional assays for the re-acquisition of pluripotency. If these data are indeed robust then the observation is highly significant.

Unfortunately, the paper presents only a superficial description of many critical aspects of the work. A detailed description of the methods used to induce and maintain SACs was not provided, and the mechanisms and explanations are either not compelling or poorly defined (Figure 3). Given the novelty of the claims, a thorough characterization of the SACs is warranted, as is some probing of the mechanisms. This would necessitate a more sophisticated genomic analysis of SACs, through microarray or RNA-seq, and genome-wide DNA methylation analysis — analyses that other pluripotent stem cell lines have been routinely subjected to and for which methods for smaller cell numbers have been developed.

Issues to be addressed:

1) From the experiments performed by the authors, it cannot be ruled out that rare multipotent progenitors are being selected for and expanded under stress conditions. While this in itself would be extremely interesting, it suggests a very concept [sic] than what is being claimed about reprogramming of “mature” adult somatic populations. It is unclear whether cells are harvested from any other stages than young (7-day old) mice. Might the cells in these young mice be errant migratory germ cells or some other stem cell-like progenitors? CD45+ cells are harvested from the spleens and these are called lymphocytes, but hematopoietic stem cells (HSCs) express CD45, and the spleen contains HSCs at this young age. Thus stress conditions may be acting on HSCs, rather than fully differentiated somatic cells, which would imply a very different main conclusion of this work. Should the authors wish to maintain their conclusion, they should rule out the potential germ cell or HSC origin of SACs. This could involve perhaps the examination of genomic imprints, or expression of c-Kit.

2) The analysis of TCRb gene rearrangement in fig S6 purporting to show derivation from fully mature T cells with TCRb rearrangement is flawed. If mice are clonally derived, they should have a single gene rearrangement, not a population of polyclonal rearrangements as appears in at least some of these animals. This analysis should be done using Southern Blots to avoid the problems of PCR contamination; see Hochedlinger and Jaenisch, Nature 2002.

3) The evaluations of stress-mediated response pathways and analysis of mitochondria are purely correlative and have no demonstrated mechanistic link to the observed reprogramming.

4) The ability to permanently maintain these cell lines is not well-described. The authors claim that “spherical colonies grew to approximately 70 µm in diameter … and spherical colonies could be maintained for another 7 days in that culture condition.” In Figure 2C, it appears that the expression of mESC-associated pluripotency genes is decreasing significantly over time in bulk SAC populations. If these cells are truly pluripotent, they should not only exhibit the developmental potential of ESCs/iPSCs but also the ability to indefinitely self-renew. For ESCs/iPSCs, past groups provided evidence of telomerase activity to indicate the capture of an immortalized cell line. In the case that the cells cannot self-renew indefinitely, a description of what happens (differentiation, death, etc.), and an explicit statement on unlimited or limited proliferative potential of the SACs should be provided.

5) Reprogramming efficiency after stress exposure should be monitored for each cell population. Presumably many cells die after low pH treatment, and the 40% of surviving cells expressing Oct4-GFP after 7 days represents a selected and subsequently expanded population, but this is not clear. This would help understand the proportion and give clues to the nature of the cells undergoing reprogramming. For example, more refined cell isolation followed by analysis of conversion efficiency could be very informative.

6) The nature of the B27 medium was not described. Is it serum-free or serum-containing? What is the base medium used? These are critical details because “B27 medium” is not a conventional condition for mouse ESC/iPSC propagation, but rather for primitive neural stem cell derivation/propagation. For serum-free culture of mouse ESCs/iPSCs in LIF and B27 supplement, they require N2 supplement and BMP4 or N2 supplement and 3i/2i (Ying et al. Cell. 2003; Ying et al. Cell. 2008). The original work that first describes the use of LIF-B27 medium was not cited (Tropepe et al. Neuron. 2001). In that study, they critically observed that LIF-B27 ESC-derived neural colonies are competent to colonize both neural and non-neural tissues when exposed to an appropriate environment. Given the claim of acquired pluripotency, the authors need to rule out that they have generated primitive neural stem cells by genomic characterization of the SACs they generated, and more precisely capture the nature of the “reprogramming” that is claimed (microarray, DNA methylation analysis, etc.). A systematic genome-wide characterization of SACs will establish the molecular identity of SACs in relation to other mouse pluripotent stem cell lines.

7) From the FACS analysis of Oct4 expression in day 1 CD45+ cells, it is not clear if there is a small population of weakly-expressing Oct4-GFP cells. How can the expansion of a pre-existing Oct4-GFP expressing population be ruled out, rather than de novo expression from mature cells? Because the authors claim a significant increase in efficiency and pace of Oct4 locus reactivation, they should compare their method with the predominantly established method by which hematopoietic cells are reprogrammed: by defined factor induction. A head-to-head comparison of SAC induction and iPSC generation is needed (a la Figure 1B).

8) The authors surveyed changes in the mRNA expression levels of an array of stress-response genes, but did not assess their functional relevance by shRNA knockdown or overexpression during conversion.

9) Considering that mouse strains known to be either recalcitrant (e.g. Bl6) or permissive (e.g. 129) to ESC derivation were used in this study, was there any correlation between SAC derivation and strain?

10) It is stated that there was no differentiation tendency of SACs derived from any tissues when incorporated into chimeras. This does not appear to be true as liver-derived SACs exhibit a low contribution, and are skewed to liver differentiation. Similarly, skin-derived SACs appear to demonstrate a tendency to contribute to the skin.

11) Embryos generated by tetraploid complementation should be taken to term. In figure S5E, the example of the BDF1/GFP embryo presented does not look normal.

12) Regarding the existing molecular data on the identity of the cell lines, the embryonic gene expression qPCRs (Figure 2B, S3C) show unusually high values for expression levels relative to GAPDH levels. Even though the figure has an ESC control, and it may be a primer-specific phenomenon, mRNA levels of genes such as Nanog and Rex1 are more like 0.05 or 0.15 of GAPDH levels, whereas the authors observed levels as high as 12–14 times GAPDH.

13) The authors did not describe “the low pH solution (pH5.5)” treatment in detail (Methods). The authors need to provide a detailed description of what the composition of the pH solution was, how long the treatment was, and how the cells were handled.

14) The authors did not describe the substrate used during conversion. Did they use feeders or gelatin? These are the conventional substrates on which pluripotent stem cells are derived and maintained.

15) Because the authors claimed a combination of pH decrease and mechanical stress caused Oct4 reactivation, the authors should show data indicating how the two procedures additively or synergistically promote Oct4-GFP reactivation.

16) Figure S1C — the authors quantified the number of spherical colonies, but they did not provide a normalization. This figure would additionally be significantly enhanced if the authors showed the morphologies of the spherical colonies they are obtaining in the different culture conditions.

17) A more detailed description on the composition of the different media tested was not provided (Figure S1C). Additionally, ACTH is not conventionally used for pluripotent stem cell culture. The manuscript would be enhanced if the authors provided an explanation for the use of ACTH.

18) Decreased cellular size is a feature of pluripotent stem cells, but the authors did not include data or a discussion on the ESCs/iPSCs cell size in their examination of cell size (Figure 1D).

19) Figure 2B — the nuclear localization of the Nanog signal in SACs should be very clear; however, the staining appears to be non-specific.

20) The authors should refer to the 2007 Cell Stem Cell paper from the Jaenisch lab when discussing reports of the existence of Oct4-expressing pluripotent adult stem cells. Here it was shown that Oct4 gene ablation in adult tissues did not affect regenerative capacity.

21) Figures are often not referred to in order, making the manuscript slightly difficult to follow at times.

Reviewer 3

The finding described in this manuscript is very unusual and unexpected. Under certain circumstances, it appears that a non-physiological non-specific stress can trigger reprogramming of terminally differentiated cells, such that the cells enter a pluripotent stem cell-like state. If these results are repeatable, a paradigm of developmental biology would be changed. Currently, that paradigm is that terminally differentiated states are set and cannot be reset. Although Yamanaka broke this biological rule by overexpressing pluripotency-associated factors, that system is highly artificial. The authors of the present manuscript propose that cells have an intrinsic capacity to reprogram. I found the manuscript to be clearly written and concise, although sometimes mildly unorthodox in terms of literature cited.

However, the methods and cell protocols used must be described in far more detail. For example, the section on Oct4 should state how many cells were sorted and describe the appearance of the cells. Is it possible that rare populations of cells pre-exist or are already apparent on day 1 (thus, what are the “dots” of Fig. 1?). The authors will argue that, indeed, under certain circumstances, they were able to reprogram terminally differentiated cells, and that this was attributable to TCR recombination. I think, ideally, that the cells should be experimentally tagged and traced. This would unequivocally clarify the source of the cells and, further, would exclude the possibility that some cells pre-existed in a pluripotent state.

Critically, it is necessary to determine whether SAC cells can propagate stably in culture and whether such cells can be passaged. CD45.2 cells from the spleen are differentiated and, unless activated by an antigen, are supposed to be in G0. Do these cells re-enter the cell cycle? The cells should be further characterized.

Some negative controls are missing. See Figs. 2A, S3B, S3C, S5A, and S5B. Unstressed cultured cells should be used as negative controls.

In Figs. 3C and D, it would be interesting to see the ATP and ROS levels in both ES and SAC cells.

In Figs. 3C and D, it is apparent that mES cells show rises in ATP and ROS levels, and in mtDNA copy number. These results should be compared to other publications.

In Fig. 2, Nanog is not located in the nucleus. Also, do the authors have data on staining of Oct4 in this experimental context?

Update, 6:30 p.m. Eastern: Here’s Paul Knoepfler’s take on these reviews.

26 thoughts on ““Truly extraordinary,” “simply not credible,” “suspiciously sharp:” A STAP stem cell peer review report revealed”

  1. So peer review worked first time, but then failed when the papers were re-hashed and re-submitted. At least we can take comfort from the fact that post-publication peer review worked. This does raise interesting questions as to the responsibility of Nature in the saga and of Science in not reporting the reconstruction of Fig 3, detected by one reviewer, to RIKEN. It only takes a short e-mail from the editorial office.

    1. You raise what I consider to be one of the biggest issues with pre-publication peer review. Authors can just resubmit the manuscript to a different journal and the record of comments is lost. Eventually, the system will fail. Roll the dice enough times and you will magically compile 2-3 reviewers that fail to catch a significant issue. Perhaps requiring that authors reveal to the editor previous submission history and reviews would help alleviate this? The beauty of post-publication peer-review and websites like “pubpeer” is that the judgement is permanent and freely available for all to discuss/see.

      1. That may be so if you slide down the IF ladder, but at this level, it is very likely that at least one or two of the reviewers will be the same from submission to submission. Of course, when you get to the bottom of the pile, anything can and will get published. But that didn’t happen here.

        1. I don’t think that mandatory sharing of reviews is fair to the authors. My guess is that most scientists have had reviews that were horribly unfair, deeply vindictive, or both. As of now, an unfair review can sometimes sink a submission and cost a researcher several months of their lives. What if such reviews were to follow the paper around? And what if papers are revised to some degree after the first rejection? Should the reviews still follow the now much improved paper, possibly creating a negative impression? I think not.

          However, authors should be encouraged to submit their papers with reviews from previous rejections if they think it helpful. In many cases editors could make decisions without further reviews (both positive and negative). This would speed up the whole process and relieve some of the burden on the scientific community of reviewing the same manuscript two or three times.

          MBoC actually has made this their standard policy and I can say from personal experience that they are true to their word. We had an editorial decision within days on a paper submitted with review history from a different journal where it was ultimately rejected.

    2. Bravo to “the anonymous identity” for releasing these peer reviewer reports, but a truly transparent process would be to reveal the reviewer reports from all of the journals that rejected the paper, as well as the final acceptance journal’s peer reports. Of course, that will never happen, unless leaked, simply because publishers don’t want the public to see, where applicable, bad or incomplete peer review. In this case, in fact, the peer reports were good, detailed and identified the errors accurately, as pointed out by other commentators. Can you imagine the scandals that would emerge if scientists requested Elsevier or Springer, for example, to release all peer reviewers’ sheets that led to rejections, and more importantly, to the acceptance of published papers? Requesting publishers to be transparent about peer reports only for retracted papers is a start, but does not reveal the depth and scope of the rot in traditional peer review (sorry to those who so actively defend it). When will we start to demand that all publishers release all peer reports (with names redacted if necessary, but preferably not, to hold the peers accountable, too) to the public? One of the main reasons why there are increasingly so many retractions is that quality control and peer review failed. That is, the editors and the peers failed. And the publisher helped to cover up the failure by not releasing peer reports. See f1000Research as the best open model available, I believe, listing the editors’ names in charge of handling the process, the peers who were involved, and the peer reports as well as the authors’ responses.

      I do have one concern, however, and I think this is new territory for publishing, perhaps: is it legal or ethical for reviewers to release peer reports after a paper is published, or even of papers that were rejected? As far as I can tell, peers do not sign any legal document, or any document at all, between them and the journal/publisher. In this case, are we to expect a wave of anonymous peer reports of rejected and accepted papers floating around the internet now? Personally, I hope so, because it’s time to crack open this deeply corrupted concept called pre-peer review or traditional peer review.

      1. PeerJ is one of the new “high profile” journals (in the sense that it garnered a lot of media attention at launch) that usually publish the peer reviewer reports. Looking through them, however, you will quickly discover that many reviewers, whose job is to give scientific input and as such to be a consultant to the editor, instead act like copy-editors correcting minor typographic mistakes.

        If you can’t think of at least a couple of things to suggest or to suspect, maybe you should not be a reviewer and let the editor know. Dunning-Kruger effect…

        1. My experience with PeerJ was dreadful. In fact, I was led to PeerJ from RW. The “pre-publication for free” proposal (in which your paper sort of hangs in limbo online, like a homeless piece of intellect) is followed by a charge for publishing as a final step. It reminded me of Nature Precedings, another worrisome publication by Nature Publishing Group that failed after a very short existence. Some of the high-tech solutions floating around, mostly open access, are very worrisome. I suspect the Dunning-Kruger effect is alive and kicking among many plant science journals. The collapse of plant science will, unfortunately, probably emerge from within rather than from the outside.

          1. You are not being fair towards PeerJ. Their FAQs and instructions have always been very clear regarding:
            a) the existence of a formal peer-reviewed journal PeerJ and a Preprint server under the PeerJ umbrella
            b) the nature of their PrePrint server (i.e. similar to arXiv, rather than a formal publication)
            c) the publication fees for their journal PeerJ (which are quite small compared to other OA journals)
            d) submission to PeerJPreprints is emphatically not required for submission to PeerJ

            I therefore fail to understand why you found the experience dreadful. I would have thought that nowadays most people were acquainted with the idea of PrePrints.

        2. I disagree – if a paper describes well executed experiments that are equally well controlled and the data analysed appropriately, then it is good for publication. They may have a different view to mine, I may be frustrated that they did experiments one way rather than another, but if the work is sound, it is sound. Something that some reviewers forget, I think, is that it isn’t the reviewer’s paper, it is the authors’ work.
          If, on the other hand, the data do not fully support the conclusions then the review is of the type “this must be done before acceptance”. I have submitted two papers at PeerJ, one accepted (https://peerj.com/articles/461/), with minimal reviews, because there wasn’t much to say. The alternative would have been to suggest in review something along the lines of “the authors should solve the co-crystal structure of neuropilin-1 and heparin”. I have had crazy reviews like that, the most notable being a request to solve the solution NMR structure of heparin, something that is currently impossible – and was even more so last decade when we submitted that paper. Happily the editor accepted our argument in that instance. The second paper submitted to PeerJ has had very substantial reviews, consisting of a long list of very appropriate technical questions. These we are addressing.
          So the prepublication peer review at PeerJ works as it should in my experience. Some comments on the paper to stimulate our thoughts would be nice though 🙂

  2. How come the reviewers at Nature recommended accepting the manuscript? The reviewers at Science identified so many critical points, many of them more or less lethal.

  3. But peer review is ‘broken’, right? Isn’t that what everyone has been saying? These review comments (especially by reviewers 1 and 2) highlight exactly what is RIGHT about peer review and we should celebrate that. They are fair, on point, critical but helpful, and they even picked up image manipulation. What more do you want?

    I have seen reviewers over-ruled by editors time and time again, and I’m sure that is what happened when the paper was finally accepted. What we need to see is the reviews of the manuscript that was eventually published. I will bet the kitchen sink that they do not look that much different than these.

  4. The criticism in these reviews (from 2 years ago!) made me think of an interview with Obokata for the Nature podcast when the paper was first published:

    David Cyranoski: It’s an extraordinary and surprising finding, was it difficult to convince people that you had actually achieved this?

    Haruko Obokata: Yeah, because I was so naïve I didn’t understand why no one believed me. In addition I didn’t know what kind of data could convince people. Therefore I just tried to collect the data which never can be created by other cells. And it took almost five years from the first experiment.

    http://www.nature.com/nature/podcast/v505/n7485/nature-2014-01-30.html

  5. If I recall the whole fiasco correctly, I believe Nature reviewers pointed out many of the similar problems but the editor decided to go ahead with publication. That’s how Nature as a journal works. Editors have the final say in which paper gets accepted.

    1. “Editors have the final say in which paper gets accepted.”

      So they should. Peer reviewers advise and editors decide – and that is not the problem. (Don’t review for a journal that you consider has incompetent editors.)

      Editors decide what goes into ALL journals. The point is that in this case the handling editors were so gormless that they could not tell that a methods paper requires that the method is laid out in sufficient detail that it can be repeated. They should have known this from basic scientific principles, but they didn’t. If we – and I expect we ultimately will – get to see the referees’ reports in full and they asked for that detail, then the problem does indeed lie with editorial incompetence. Even so, one should not confuse the fact that editors always need to make decisions on manuscript acceptance with Nature’s inability to organise an editorial system that maximises competence in that decision making process.

  6. I have been covering this issue for the German Laborjournal (www.laborjournal.de and http://www.labtimes.com).

    First of all, one has to ask: did Nature know of image manipulations, easily detectable by special software (like the one used by JCB)? If they did, why did they press on with publishing this farcical fraud? If they didn’t, why is this epitome of an elite journal not interested in having all its manuscripts checked automatically prior to any editorial evaluation? Secondly, the way Nature dealt with these retractions, by doing their best to avoid them or at least to absolve themselves of any blame, was plainly shameful.

  7. In the current peer review system, reviewers are advisors to the journals, not ‘judges’ themselves. Journals are entitled to make ‘arbitrary’ editorial decisions, whether we like it or not. In most cases, we as authors give up our rights on the published articles (copyright transfer agreement), so why should the upstream decision-making process anger us in any special way? (I’m being provocative.) Trying to be realistic, I would advocate open disclosure of how the decision was made (reviewer comment-based, or purely editorial). Personally, as a readership member, I would then hold a reviewer-based decision in higher regard than an editorial decision. In the end, let’s not forget it, the readership has the ultimate power to cite or not cite given articles. If those editorially accepted articles were not cited as much as the others, then… this might cause the journals to revise their policies, and probably it would.

    1. Reviewers are not paid. They volunteer, but I am not sure the same is true of editors. We need to learn more about the contracts signed by editors and between editors and publishers. We need some leaked documents to understand what responsibilities they have, academic, legal and financial. So, imagine you are a reviewer who rejects a paper and then sees it accepted? Would you not be furious? The current system works as follows: editors work for free; peers work for free; authors give their papers freely (and assign copyrights) to non-OA publishers; the publisher makes the profit. No wonder the system is cockeyed.

  8. Reblogged this on The Moldamancer and commented:
    Wow. These reviews are extremely incriminating; reviewers obviously spotted some of the key problems with this paper. “This paper claims that cells from any somatic tissue can be reprogrammed to a fully pluripotent state by treatment for a few days with weak acid.

    This is such an extraordinary claim that a very high level of proof is required to sustain it and I do not think this level has been reached.”

  9. It still baffles me that Vacanti has managed to so artfully avoid any real criticism or take much blame, considering he was certainly willing to take most of the credit when it was still a sensational new discovery, for the week or so that it lasted.

    But, as far as the reviews are concerned… So it was rejected from Cell, Science, AND Nature? But then later published by Nature? So just assuming it was rejected the first time around for similar reasons to the reviews above when it was first submitted to Nature, how did it manage to get published when I can’t really see much that they went back and addressed after these comments. I mean it’s hard to believe that you can get rejected three times from three different journals, do essentially nothing, then resubmit the paper to one of those journals, and everything’s fine? It just makes me wonder if this paper ultimately got published because they truly thought the science was sound, or because of whose names were at the top.

  10. If they submitted the paper 20 times to different journals, there would be a 95% probability that one would accept it.
