Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.
Despite concerns about rising “salami slicing” of research papers under the “publish or perish” culture of academic publishing, the study found that the productivity of individual early career researchers has not increased over the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged by Thomson Reuters Web of Science.
The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century:”
When researchers raised concerns about a 2009 Science paper regarding a new way to screen for enzymatic activity, the lead author’s institution launched an investigation. The paper was ultimately retracted in 2010, citing “errors and omissions.”
It would seem from this example that the publishing process worked, and that science’s ability to self-correct cleaned up the record. But not so, say researchers Ferric Fang and Arturo Casadevall.
A team of Harvard and Yale biologists has retracted an Infection and Immunity paper due to data duplication.
After the duplication came to light, the erroneous figures were corrected using original data, but the results affected “some of the manuscript’s conclusions.” An ethics panel subsequently recommended retraction, according to the journal, and the authors agreed.
A new study suggests that much of what we think about misconduct — including the idea that it is linked to the unrelenting pressure on scientists to publish high-profile papers — is incorrect.
In a new paper out today in PLOS ONE [see update at end of post], Daniele Fanelli, Rodrigo Costas, and Vincent Larivière performed a retrospective analysis of retractions and corrections, looking at the influence of supposed risk factors, such as the “publish or perish” paradigm. The findings appeared to debunk the influence of that paradigm, among others:
Scientists have pulled their 2013 Infection and Immunity paper after a reader noticed duplicated data in three figures, and the first author was “unable to provide the original data used to construct the figures,” according to the journal’s editor-in-chief.
According to the retraction note, “the first author has accepted responsibility for these anomalies” — similar to another recent retraction from the same journal, also due to image duplication reported by a reader (apparently the journal has one or more careful readers).
The paper, “Pseudomonas aeruginosa Outer Membrane Vesicles Modulate Host Immune Responses by Targeting the Toll-Like Receptor 4 Signaling Pathway,” concerns the role of outer membrane vesicles excreted by the bacteria to incite an inflammatory response in mice. It was written by authors at the University of North Dakota, Sichuan University in China, and the University of Chicago, and has been cited six times, according to Thomson Scientific’s Web of Knowledge.
How should scientists think about papers that have undergone what appears to be a cursory peer review? Perhaps the papers were reviewed in a day — or less — or simply green-lighted by an editor, without an outside look. That’s a question Dorothy Bishop, an Oxford University autism researcher, asked herself when she noticed some troubling trends in four autism journals.
Recently, Bishop sparked a firestorm when she wrote several blog posts arguing that these four autism journals had a serious problem. For instance, she found that Johnny Matson, then-editor of Research in Developmental Disabilities and Research in Autism Spectrum Disorders, had an unusually high rate of citing his own research – 55% of his citations are to his own papers, according to Bishop. Matson also published a lot in his own journals – 10% of the papers published in Research in Autism Spectrum Disorders since Matson took over in 2007 have been his. Matson’s prodigious self-citation in Research in Autism Spectrum Disorders was initially pointed out by autism researcher Michelle Dawson, as noted in Bishop’s original post.
Short peer reviews of a day or less were also common. Matson no longer edits the journals, both published by Elsevier.
Bishop noted similar findings at Developmental Neurorehabilitation and Journal of Developmental and Physical Disabilities, where the editors (and Matson) frequently published in each other’s journals, and peer reviews were often short: the median time to acceptance for Matson’s papers in Developmental Neurorehabilitation between 2010 and 2014 was one day, and many were accepted the day they were submitted, says Bishop.
Although this behavior may seem suspect, it wasn’t necessarily against the journals’ editorial policies. This is the peer review policy at RIDD:
A paper on apoptosis in mice has been retracted by Infection and Immunity after a reader tipped them off that several figures were “not faithful representations of the original data.”
When the journal, published by the American Society for Microbiology (ASM), contacted the authors at Anhui Medical University in Hefei, China, they claimed they couldn’t provide the experimental data thanks to “damage to a personal computer,” said Ferric Fang, editor of the journal and a member of the board of directors of the Center for Scientific Integrity, Retraction Watch’s parent organization. Seven figures in total were compromised, including several that were duplicated throughout the article.
One of the complaints we often hear about the self-correcting nature of science is that authors and editors seem very reluctant to retract papers with obvious fatal flaws. Indeed, it seems fairly clear that the number of papers retracted is smaller than the number that should be.
To get a sense of how errors are corrected in the literature, Arturo Casadevall, Grant Steen, and Ferric Fang, whose work on retractions will be familiar to our readers, have published a new paper in the FASEB Journal looking at the sources of error in papers retracted for reasons other than misconduct.