As Retraction Watch readers know, criminal sanctions for research fraud are extremely rare. There have been just a handful of cases — Dong-Pyou Han, Eric Poehlman, and Scott Reuben, to name a few — that have led to prison sentences.
Background factors such as culture, location, population, and time of day can affect the success rates of replication experiments, a new study suggests.
The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 of 100 experiments lived up to their original claims. The authors conclude that more “contextually sensitive” papers — those whose background factors are more likely to affect their replicability — are slightly less likely to be reproduced successfully.
When a paper is retracted, how many other papers in the same field — which either cite the finding or cite other papers that do — are affected?
That’s the question examined by a study published in BioMed Central’s new journal, Research Integrity and Peer Review. Using the case of a paper retracted from Nature in 2014, the authors found that subsequent research that cites the retracted paper often repeats the problematic finding, thereby spreading it throughout the field. However, papers that indirectly cited the retracted result — by citing the papers that cited the Nature paper, but not the Nature paper itself — typically don’t repeat the retracted result, which limits its spread.
Can we teach good behavior in the lab? That’s the premise behind a number of interventions aimed at improving research integrity, which universities around the world — and even private companies — have invested in. Trouble is, a new review from the Cochrane Library finds little good evidence that these interventions work. We spoke with authors Elizabeth Wager (on the board of directors of our parent organization) and Ana Marusic, at the University of Split School of Medicine in Croatia.
Retraction Watch: Let’s start by talking about what you found – looking at 31 studies (including 15 randomized controlled trials) that included more than 9500 participants, you saw there was some evidence that training in research integrity had some effects on participants’ attitudes, but “minimal (or short-lived) effects on their knowledge.” Can you talk more about that, including why the interventions had little impact on knowledge?
This is our 3,000th post, dear reader, and to celebrate we’re presenting you with a wealth of retraction data from fiscal year 2015, courtesy of the U.S. National Library of Medicine.
The biggest take-home: The number of retracted articles jumped from 500 in Fiscal Year 2014 to 684 in Fiscal Year 2015 — an increase of 37%. But over the same period, the number of citations indexed for MEDLINE — about 806,000 — increased by only 5%.
To illustrate, we’ve presented the increase in a handy graphic:
We always like to get a historical perspective on how scientists have tried to correct the record, such as this attempt in 1756 to retract a published opinion about some of the work of Benjamin Franklin. Although that 18th century note used the word “retract,” it wasn’t a retraction like what we see today, in which an entire piece of writing is pulled from the record. These modern-day retractions are a relatively recent phenomenon, which only took off within the last few decades, according to science historian Alex Csiszar at Harvard University. He spoke to us about the history of retractions – and why an organization like Retraction Watch couldn’t have existed 100 years ago.
Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.
Despite concerns about rising “salami slicing” of research papers, driven by the “publish or perish” culture of academic publishing, the study found that individual early-career researchers’ productivity has not increased in the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged by Thomson Reuters Web of Science.
The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century”: