What Caught Our Attention: A tree of life paper has been axed — and based on the information in the retraction notice, we’re wondering how it ever passed peer review.
Specifically, the notice states a review of the paper found “concerns regarding the study design, methodology, and interpretation of the data.” Overall, the research “contradict(s) a large body of existing literature and do(es) not provide a sufficient level of evidence to support the claims made in the paper.” Um, so what did it get right?
Not surprisingly, the paper had been flagged by outside critics, both on Twitter and in a blog post by biologist Matthew Herron, who critiqued it shortly after it was published in September 2017. PLOS quickly responded that it was “looking into the concerns.” In January 2018, Herron presented more detailed criticisms, writing that the paper is “flawed, deeply flawed, and it would be irresponsible to pretend otherwise.” A comment on that post, supposedly from the last author, notes that “We are in touch with the PLOS One editorial office to address the concerns raised by you.”
Eventually, the journal did address the concerns — by retracting the paper, and providing a reasonably detailed explanation why.
Title: A tree of life based on ninety-eight expressed genes conserved across diverse eukaryotic species
Journal: PLOS ONE
Authors: Pawan Kumar Jayaswal, Vivek Dogra, Asheesh Shanker, Tilak Raj Sharma, Nagendra Kumar Singh
Affiliations: National Research Centre on Plant Biotechnology, India; Banasthali University, India; Central University of South Bihar, India
Following publication of the article, readers raised a number of concerns about aspects of this work, particularly those relating to the phylogenetic tree and the divergence times based on synonymous substitution rates. The PLOS ONE Editors have consulted with two members of the Editorial Board who have conducted an independent re-evaluation of the paper, which found concerns regarding the study design, methodology, and interpretation of the data, such that the results of the study were determined to be unreliable. Issues include:
- The findings contradict a large body of existing literature and do not provide a sufficient level of evidence to support the claims made in the paper.
- The selection of Chlamydomonas as the outgroup, contrary to the established understanding of evolutionary relationships between green algae, plants, animals, and fungi.
- A conceptual flaw in placing Chlamydomonas as an outgroup, then interpreting the resulting tree as evidence for the basal position of this taxon.
- The molecular clock analysis methodology produced a number of inferred divergence times that contradict the fossil record data and also the phylogeny presented in the paper.
- Incorrect interpretation of distance between taxa based on adjacent position in the graphic presentation of the tree.
In light of the concerns raised, the PLOS ONE Editors retract this article.
PKJ, AS, and NKS agree with the retraction. VD and TRS could not be reached.
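The outgroup issues (the second and third bullets) are worth a word of explanation, because the circularity is easy to demonstrate. Below is a minimal sketch using the ete3 toolkit on a hypothetical four-taxon tree; it is illustrative only, not a reconstruction of the retracted paper’s analysis. Whichever taxon an analyst designates as the outgroup ends up at the base of the rooted tree by construction, so its “basal position” cannot then be cited as a finding.

```python
# Minimal sketch with the ete3 toolkit (hypothetical tree, not the paper's
# data): rooting a tree on a chosen outgroup places that taxon at the base
# by construction, so the rooted tree is not evidence of basal position.
from ete3 import Tree

# A hypothetical unrooted tree (trifurcation at the root = no root chosen).
t = Tree("(Chlamydomonas,Fungi,(Plants,Animals));")

# The analyst's rooting choice: declare Chlamydomonas the outgroup.
t.set_outgroup(t & "Chlamydomonas")
print(t)  # Chlamydomonas now branches off first, but only because we put it there

# Rooting on Fungi instead would make Fungi "basal" just as surely;
# the tree cannot adjudicate between the two by itself.
```

An outgroup choice has to be justified by evidence independent of the tree being built, which is precisely what the notice says was lacking here.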
Date of Article: September 2017
Times Cited, according to Clarivate Analytics’ Web of Science: Zero
Date of Notice: May 14, 2018
Hat Tip: Rolf Degen
If I had 10 cents for each time I have to ask “how did this paper make it through review?”, I would be a millionaire. Exaggeration? Yes, but not by much. I still have a small collection of extraordinarily strange papers that made it into relatively serious journals. Twice I tried to point this out to the editor; in both cases unsuccessfully. In one case, they let a colleague of mine and me publish our objections (which are totally ignored, while the paper is happily cited). In the second case, we were not ignored, but our objections were marginalized. There was a funny bonus to the story: the editor asked a new reviewer to go through our comments and the original paper, and that reviewer found even more serious problems than we had (we were criticizing only the part of the paper within our expertise). The point? The paper is still out there. The newly assigned reviewer concluded that the authors should be reminded not to do it again, and that that should be sufficient. It was really demotivating. Thumbs up for those who managed what we could not.
I share your concerns, and I am frankly not surprised to hear of your story and the difficulty (impossibility?) of correcting the scientific record.
I work in a mathematical area. As we know, there are no “gray areas” in mathematics, statements/results are true (proven) or false (disproven), based on deductive logic from axioms and theorems. Yet, I have likewise published objections (well, in my case formal disproof) in the form of comment papers on several highly cited papers in the literature. My comments passed peer review, confirming the correctness of my disproof(s), and the case was closed (or so I thought).
To my chagrin, when I look on Web of Science these days, I notice that (almost as if out of spite) the mathematically erroneous papers continue to briskly gather 100s of citations a year. On the other hand, my comments disproving them are largely ignored. This suggests the community is not concerned with correctness of the results in papers but instead just works facetiously to inflate the h-indices of certain people.
So, to make a long story short, and get back to your point: if even mathematical disproof can’t stop unscrupulous and/or careless researchers from promulgating questionable/erroneous/”how did this get published?!”-type results…then, what can? I am fresh out of hypotheses.
PLOS ONE doesn’t allow submitters to suggest peer reviewers, but it does allow them to suggest an academic editor, who then decides whether peer review is necessary (and coordinates with reviewers): http://journals.plos.org/plosone/s/editorial-and-peer-review-process. I would be very curious to know what the editorial process was like for this paper!
In many journals, the number of submissions is going up, in part due to rapidly growing research labs in China and India. It is getting harder to find reviewers; we are all inundated with requests, and most researchers have only so many hours to give to volunteer work that does little for their careers. Although I have never done this personally, I know that others routinely hand reviews off to their students. I think peer reviewers might perform better if there were an incentive of some sort. The journals are not sharing their profits in any way, yet they expect more and more people to volunteer. No wonder the quality varies.
If you’re frustrated with your volunteer editing efforts boosting the profit margins of big companies, perhaps you should provide your services to non-profit open access journals or society publishers instead?
There are a number of innovations out there that are increasing the reviewer pool. eLife is trying to use early career researchers more often and is collecting a database of volunteers. PeerJ posts papers on its website and allows people to volunteer for assignments. The Society for Neuroscience started a reviewer mentoring program to build its database of ECRs who can review.
Younger or early career researchers have consistently been shown to provide better-quality reviews. And since no one is asking them, they might actually say yes.
I am a bit taken aback by the ease of making assumptions and offering unsolicited advice, but here goes. I edit for two journals with different models (and used to edit for a third). At all of them, the reviewers are asked to work for “nothing” (not nothing, of course: for the public good). I actually have a much, much harder time getting reviewers for the non-traditional, open-access one. The idea that reviewers would be willing if only the journal were non-profit and open-access is just not playing out.
Although you are correct that it is easier to get younger scholars to review, my most reliable go-tos are actually not young scholars but very senior, near-retirement academics (when they agree, they are consistently on time and provide high-quality reviews). Although younger scholars are often willing, the burden of multiple reviews on them is very high, particularly on those who are not good at saying no. And I can tell you that it is very junior women in particular who are not saying no. Systematically shifting these collective responsibilities to younger female scholars is not a great idea for their productivity. I do want reviews, but I want these folks to do well and stay in academia even more.
+1
I am sorry you were taken aback by my comment. It was not my intention to offend at all. I only saw that you were frustrated with for-profit publishing and experiencing difficulty in getting reviewers. I suppose you could think of these issues as separate, but they are tied together by our incentive structures.
I am also incredibly frustrated by the academic publishing landscape, from the big for-profit publishers who bleed our libraries for subscriptions to the astronomical cost of publishing with reputable open access journals.
I appreciate your insights as an editor of a traditional journal and an open access journal. As an ECR, I am willing to help with the reviewing workload, but I am not often asked. While you say that we get nothing for reviewing, it is now becoming more expected that ECRs have some reviewing experience when applying for fellowships because it shows we have some stature in the field (so I am told). So if you were to ask someone like me, in moderation as you are trying to do, it would be quite welcome.
“The findings contradict a large body of existing literature and do not provide a sufficient level of evidence to support the claims made in the paper.”
Setting aside this particular paper (I don’t know anything about this topic), I found this justification a bit weird. I don’t understand what contradicting a “large body of existing literature” has to do with it. Would poor or misinterpreted data be okay if the claims didn’t contradict conventional wisdom? If so, this creates an advantage for studies that don’t criticize conventional wisdom over those that search for anomalies. Which in turn would create another file drawer problem.
You target only part of the argument, the criticism of the current model, but you have to consider the second part as well: the criticism was made without proper evidence. In (natural) science, where there are no indisputable proofs, there are only a few hints that a claim is very probably not valid, and opposing the accepted model without sufficient evidence is one of them. To oppose the current model is almost a duty of any scientist, but it has to be done properly, with evidence; otherwise it is not worth much. I speak theoretically, of course, as I do not have enough experience to judge the particular case in the article mentioned above.
I think the intent of that statement was not that the burden of proof should ever be low, but sometimes it’s especially high. If a new study is going to contradict the conventional wisdom (even if that wisdom is repetition of an untested assumption), it needs to be nearly bulletproof.
I don’t have any insight on the more general problems with this justification, but in this case one sort of contradiction that (I think) the editors refer to is between divergence time estimates from this paper and the fossil record. For example, the authors infer that zebrafish and sea urchin diverged ~124 million years ago, implying a mid-Cretaceous origin of chordates. This is just one example; some of the other divergence time estimates are equally absurd.
I don’t disagree that this is a sort of logic we need to be careful with… rejecting all science that contradicts previous publications would be a terrible idea. In this case, though, given the facts, I think it’s a perfectly sound application. If you’re estimating that chordates originated long after we know from the fossil record that dinosaurs existed, either your estimates are badly wrong or the entire field of paleontology is. It’s formally possible that the latter is true, but it would take better evidence than what’s in this paper to support it.
I guess what I’m saying is that I agree that we should be wary of applying this kind of reasoning too broadly; I just don’t think it has been in this particular case.
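To make the fossil-record conflict concrete, here is the standard molecular-clock arithmetic in a minimal sketch, with illustrative rate and distance values rather than the paper’s actual numbers: a divergence time is just T = Ks / (2r), so any miscalibration of the per-lineage rate r, or saturation in the Ks estimates, rescales every date in the tree.

```python
# Minimal sketch of standard molecular-clock dating (illustrative numbers,
# not taken from the retracted paper): T = Ks / (2 * r), where Ks is the
# synonymous distance between two taxa and r is the per-lineage rate in
# substitutions per site per year.

def divergence_time_mya(ks: float, rate: float) -> float:
    """Divergence time in millions of years from synonymous distance Ks
    and an assumed per-lineage synonymous substitution rate."""
    return ks / (2.0 * rate) / 1e6

ks = 1.5  # hypothetical synonymous distance between two taxa

# An assumed fast clock dates the split to ~125 Mya, close to the
# absurd zebrafish/sea urchin figure quoted above...
print(divergence_time_mya(ks, 6.0e-9))   # -> 125.0

# ...while a four-fold slower clock dates the same data to ~500 Mya,
# back near the Cambrian, where the fossil record puts early chordates.
print(divergence_time_mya(ks, 1.5e-9))   # -> 500.0
```

The point of the sketch: rate calibration shifts every inferred date in lockstep, so dates landing on the wrong side of well-dated fossils (chordates after dinosaurs) point to a calibration or saturation problem in the method, not to a problem with paleontology.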
PLOS One also has a structural problem with their peer review process. Manuscripts are passed through a technical check and are then assigned to an academic editor who handles the peer review process (including invitation of reviewers). Sometimes that assignment is made by one of the staff editors who is a researcher in the area of the manuscript topic. But sometimes that assignment is made by an editorial assistant (not a researcher) working at one of the private companies that provides editorial services to PLOS One, and those assignments often seem random with respect to the academic editor’s expertise (e.g., invitations to a life scientist to handle the peer review of social science research – that’s what happened to me while I served as an academic editor for PLOS One). If that kind of assignment happens often, it is easy to see how PLOS One manuscripts could receive non-expert peer review and non-expert editorial assessment leading to publication without any meaningful consideration of the merits of the study.
Thank you for your thoughtful response, I do appreciate the insight. I did not know about this form of incentive. I will also give some thought to searching for folks at that stage more.