Archive for the ‘ferric fang’ Category
The title of this post is the title of a new study in PLOS ONE by three researchers whose names Retraction Watch readers may find familiar: Grant Steen, Arturo Casadevall, and Ferric Fang. Together and separately, they’ve examined retraction trends in a number of papers we’ve covered.
Their new paper tries to answer a question we’re almost always asked as a follow-up to data showing that the number of retractions grew ten-fold over the first decade of the 21st century.
Regular Retraction Watch readers may have noticed that many of the people whose fraud we write about are men. Certainly, the top retraction earners — Yoshitaka Fujii, Joachim Boldt, Diederik Stapel, and Naoki Mori, to name a few — all have a Y chromosome. But that doesn’t necessarily mean our sample is representative.
Now along comes a study of U.S. Office of Research Integrity (ORI) reports suggesting that men are in fact overrepresented among scientists who commit fraud. In a study published online today in mBio, Ferric Fang and Arturo Casadevall — whose names will also be familiar to Retraction Watch readers for their previous work — along with Joan Bennett analyzed 228 ORI reports since 1994, and found that 149 of them — or 65% — involved men. (The vast majority of the 228 cases — 94% — involved fraud such as falsification or fabrication, while the rest presumably involved other forms of misconduct, such as plagiarism.)
And it’s not just that there are more men in the life sciences. At every stage of a life science career, the percentage of males found by the ORI to have committed misconduct was higher than the percentage of male life scientists overall.
October, apparently, is “studies of retractions month.” First there was a groundbreaking study in PNAS, then an NBER working paper, and yesterday PLoS Medicine alerted us to a paper its sister journal, PLoS ONE, published last week, “A Comprehensive Survey of Retracted Articles from the Scholarly Literature.”
The study, by Michael L. Grieneisen and Minghua Zhang, is comprehensive indeed, reaching further back into the literature than others we’ve seen, and also including more disciplines.
Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record
A new study out in the Proceedings of the National Academy of Sciences (PNAS) today finds that two-thirds of retractions are because of some form of misconduct — a figure that’s higher than previously thought, thanks to unhelpful retraction notices that cause us to beat our heads against the wall here at Retraction Watch.
The study of 2,047 retractions in biomedical and life-science research articles in PubMed from 1973 until May 3, 2012 brings together three retraction researchers whose names may be familiar to Retraction Watch readers: Ferric Fang, Grant Steen, and Arturo Casadevall. Fang and Casadevall have published together, including on their Retraction Index, but this is the first paper by the trio.
A group of authors at a Pittsburgh company have proposed a new way to write, review, and read scientific papers that they claim will “radically alter the creation and use of credible knowledge for the benefit of society.”
From the abstract of a paper appearing in the new Mary Ann Liebert journal Disruptive Science and Technology, which, according to a press release, will “publish out-of-the-box concepts that will improve the way we live.”
We often hear — with data to back the statement — that top-tier journals, ranked by impact factor, retract more papers than lower-tier journals. For example, when Murat Cokol and colleagues compared journals’ retraction numbers in EMBO Reports in 2007, Nature noted in its coverage of the study (h/t Richard van Noorden):
Journals with high impact factors retract more papers, and low-impact journals are more likely not to retract them, the study finds. It also suggests that high- and low-impact journals differ little in detecting flawed articles before they are published.
One thing you notice when you look at Cokol et al.’s plots is that although their models seem to take retractions “per capita” — in other words, per paper published — into account, they don’t report those figures.
Enter a paper published this week in Infection and Immunity (IAI) by Ferric Fang and Arturo Casadevall, “Retracted Science and the Retraction Index.”