Weekend reads: Unscientific peer review; impact factor revolt; men love to cite themselves

The week at Retraction Watch featured a puzzle, and the retraction of a controversial study on fracking. Here’s what was happening elsewhere:

Continue reading Weekend reads: Unscientific peer review; impact factor revolt; men love to cite themselves

Context matters when replicating experiments, argues study

Background factors such as culture, location, population, or time of day affect the success rates of replication experiments, a new study suggests.

The study, published today in the Proceedings of the National Academy of Sciences, used data from the psychology replication project, which found that only 39 out of 100 experiments lived up to their original claims. The authors conclude that more “contextually sensitive” papers (those whose background factors are more likely to affect their replicability) are slightly less likely to be reproduced successfully.

They summarize their results in the paper:

Continue reading Context matters when replicating experiments, argues study

How much does a retracted result pollute the field?

Research Integrity and Peer Review

When a paper is retracted, how many other papers in the same field — which either cite the finding or cite other papers that do — are affected?

That’s the question examined by a study published in BioMed Central’s new journal, Research Integrity and Peer Review. Using the case of a paper retracted from Nature in 2014, the authors found that subsequent research that cites the retracted paper often repeats the problematic finding, thereby spreading it throughout the field. However, papers that indirectly cited the retracted result — by citing the papers that cited the Nature paper, but not the Nature paper itself — typically don’t repeat the retracted result, which limits its spread.

Here’s how the authors describe their findings in the paper:

Continue reading How much does a retracted result pollute the field?

Do interventions to reduce misconduct actually work? Maybe not, says new report

Elizabeth Wager and Ana Marusic

Can we teach good behavior in the lab? That’s the premise behind a number of interventions aimed at improving research integrity, which universities around the world, and even private companies, have invested in. Trouble is, a new review from the Cochrane Library finds little good evidence that these interventions work. We spoke with authors Elizabeth Wager (on the board of directors of our parent organization) and Ana Marusic, of the University of Split School of Medicine in Croatia.

Retraction Watch: Let’s start by talking about what you found – looking at 31 studies (including 15 randomized controlled trials) that included more than 9500 participants, you saw there was some evidence that training in research integrity had some effects on participants’ attitudes, but “minimal (or short-lived) effects on their knowledge.” Can you talk more about that, including why the interventions had little impact on knowledge?

Continue reading Do interventions to reduce misconduct actually work? Maybe not, says new report

Retractions rise to nearly 700 in fiscal year 2015 (and psst, this is our 3,000th post)

This is our 3,000th post, dear reader, and to celebrate we’re presenting you with a wealth of retraction data from fiscal year 2015, courtesy of the U.S. National Library of Medicine.

The biggest take-home: the number of retracted articles jumped from 500 in fiscal year 2014 to 684 in fiscal year 2015, an increase of 37%. Over the same period, the number of citations indexed for MEDLINE (about 806,000) grew by only 5%.
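That year-over-year jump can be verified with a few lines of arithmetic (a minimal sketch; the counts are the NLM figures quoted above):

```python
# NLM retraction counts quoted in the post
retractions_fy2014 = 500
retractions_fy2015 = 684

# Percentage increase from FY2014 to FY2015
pct_increase = (retractions_fy2015 - retractions_fy2014) / retractions_fy2014 * 100
print(f"{pct_increase:.0f}%")  # prints "37%"
```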

To illustrate, we’ve presented the increase in a handy graphic:

Continue reading Retractions rise to nearly 700 in fiscal year 2015 (and psst, this is our 3,000th post)

What did retractions look like in the 17th century?

Alex Csiszar

We always like to get a historical perspective on how scientists have tried to correct the record, such as this attempt in 1756 to retract a published opinion about some of the work of Benjamin Franklin. Although that 18th-century note used the word “retract,” it wasn’t a retraction as we know it today, in which an entire piece of writing is pulled from the record. Such modern-day retractions are a relatively recent phenomenon that only took off within the last few decades, according to science historian Alex Csiszar of Harvard University. He spoke to us about the history of retractions – and why an organization like Retraction Watch couldn’t have existed 100 years ago.

Retraction Watch: First of all, let’s start with something you found that appears to break our previous record for the earliest retraction – a “retractation” by William Molyneux of some assertions about the properties of a stone, published in 1684. Could this be the earliest English-language retraction?

Continue reading What did retractions look like in the 17th century?

Researchers’ productivity hasn’t increased in a century, study suggests

Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.

Despite concerns that “salami slicing” of research papers is on the rise, in line with the “publish or perish” culture of academic publishing, the study found that individual early career researchers’ productivity has not increased over the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged in Thomson Reuters Web of Science.

The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century”:

Continue reading Researchers’ productivity hasn’t increased in a century, study suggests

Ready to geek out on retraction data? Read this new preprint

There’s a new paper about retractions, and it’s chock-full of the kind of data that we love to geek out on. Enjoy.

The new paper, “A Multi-dimensional Investigation of the Effects of Publication Retraction on Scholarly Impact,” appears on the preprint server arXiv — meaning it has yet to be peer-reviewed — and is co-authored by Xin Shuai and five other employees of Thomson Reuters. Highlights from their dataset:

Continue reading Ready to geek out on retraction data? Read this new preprint

Weekend reads: Replication debate heats up again; NEJM fooled?; how to boost your alt-metrics

The week at Retraction Watch was dominated by the retraction of “the Creator” paper, but we also reported on a scientist under investigation losing a grant, and a case brewing at a New Jersey university. Here’s what was happening elsewhere:

Continue reading Weekend reads: Replication debate heats up again; NEJM fooled?; how to boost your alt-metrics

Do radiology journals retract fewer papers? New study suggests yes

There’s good news and bad news in radiology research, according to a new study: The number of retractions is increasing in radiology journals, but the rate of retraction remains lower than that seen in biomedical journals outside the field of radiology.

According to the study in the American Journal of Roentgenology, radiology journals retracted at most one paper per year between 1986 and 2001, but at least two papers per year from 2002 to 2013. Overall, roughly 11 out of every 100,000 articles published in radiology journals are retracted, compared with 15 out of 100,000 for biomedical journals outside radiology.

Still, writes author Andrew Rosenkrantz in “Retracted Publications Within Radiology Journals”:

Continue reading Do radiology journals retract fewer papers? New study suggests yes