How common are calculation errors in the scientific literature? And can they be caught by an algorithm? James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study of which we just became aware offers another approach.
Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine abstracts on MEDLINE for statistical ratios (e.g., hazard or odds ratios), as well as their associated confidence intervals and p-values. They analyzed whether these calculations were compatible with each other. (Wren’s PhD advisor, Skip Garner, is also known for creating such algorithms, to spot duplications.)
After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. Among the p-values, 1.44% of the total were off in a way that would have altered the study’s conclusion (i.e., changed statistical significance) had the calculation been done correctly.
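The paper’s actual pipeline text-mines MEDLINE abstracts at scale, but the core compatibility check is simple arithmetic. Here is a minimal sketch, assuming (as is standard) that hazard and odds ratios are approximately normal on the log scale and that the reported interval is a 95% Wald CI; the function names are ours, for illustration, not the paper’s:

```python
import math

def implied_p_value(ratio, ci_lower, ci_upper, z_crit=1.959964):
    """Two-sided p-value implied by a reported ratio and its 95% CI.

    Ratios (hazard/odds/risk) are roughly normal on the log scale, so
    the CI half-width in log units recovers the standard error.
    """
    se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z_crit)
    z = abs(math.log(ratio)) / se
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

def changes_conclusion(reported_p, ratio, ci_lower, ci_upper, alpha=0.05):
    """Flag the worst case: reported and implied p straddle alpha."""
    implied = implied_p_value(ratio, ci_lower, ci_upper)
    return (reported_p < alpha) != (implied < alpha)

# An odds ratio of 1.5 with 95% CI (1.1, 2.0) implies p ~ 0.008,
# so a reported "p = 0.2" alongside it would be flagged.
print(implied_p_value(1.5, 1.1, 2.0))
print(changes_conclusion(0.2, 1.5, 1.1, 2.0))
```

A real checker would also need to parse the figures out of free text and tolerate rounding in the reported values, which is where most of the engineering in such a tool lives.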
We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.
After years of back and forth, a highly cited paper has been retracted that appeared to show gay people living in areas with strong anti-gay prejudice had a significantly shorter life expectancy.
A paper in Contraception that purported to show serious flaws in an earlier study of abortion laws and maternal health has been retracted, after the authors of the original study found what were apparently significant flaws in the study doing the debunking.
That’s the short version of this story. The longer version involves years of back-and-forth, accusations of conflict of interest and poor research practice, and lawyers for at least two parties. Be warned: There’s an unusual amount of material to quote here, but it’s worth following.
As the editor of Contraception, Carolyn Westhoff, put it:
Sometimes, corrections are so extensive, they can only be called one thing: Mega-corrections.
Recently, the Proceedings of the National Academy of Sciences (PNAS) issued a four-page correction notice to a paper about a compound that appeared to reduce the chances a cancer will recur. The notice describes figure duplication, problems with error bars and figure legends — as well as the loss of statistical significance for some data.
According to the authors’ statement in the notice:
The authors of a 2018 paper on how noisy distractions disrupt memory are retracting the article after finding a flaw in their study.
The paper, “Unexpected events disrupt visuomotor working memory and increase guessing,” appeared in Psychonomic Bulletin & Review, a publication of the Psychonomic Society. (For those keeping score at home, psychonomics is the study of the laws of the mind.)
The article purported to show that an unexpected “auditory event,” like the sudden blare of a car horn, reduced the ability of people to remember visuomotor cues. Per the abstract:
In March, a journal published a paper about blood sugar levels in newborns that caused an immediate outcry from outside experts, who were concerned it contained a sentence that could be harmful if misinterpreted by doctors.
Recently, the journal explained — in impressive detail — why it’s not retracting the paper. That, of course, gives readers the ability to form their own opinions. Once you’ve weighed the pros and cons, let us know if you think the journal made the right call in a poll at the bottom of the story.
Six months ago, the media was ablaze with the findings of a new paper, showing that nearly six percent of cancer cases are caused, at least in part, by obesity and diabetes. But this week, the journal retracted that paper — and replaced it with a revised version.
The new paper doesn’t change the main findings much — the share of all cancers attributable to diabetes and obesity shifted from 5.6% to 5.7%, which wouldn’t change any headlines about the original paper. But soon after the paper was published, a group of researchers noticed the authors’ mistake, which was significant enough to prompt the journal to retract the paper entirely and replace it with a new one.
Recently, a rash of news outlets reported concerns that canned tuna and other products may contain potentially dangerous levels of zinc. They were all wrong.
News outlets such as The Daily Mail and The Sun reported findings from a recent study, which showed that canned foods such as tuna may contain 100 times the daily limit of zinc — raising concerns about how such huge doses of the mineral could be causing digestion problems. The last author of the study told Retraction Watch the paper is going to be retracted, because the authors made a fundamental error calculating the amount of zinc present in canned foods.
Here’s something we don’t see that often — authors retracting one of their articles because it included new data.
But that is the case with a 2017 review exploring the potential genetic and hormonal underpinnings of gender identity. The authors, Rosa Fernández García and Eduardo Pásaro Mendez, told Retraction Watch that they asked the bioethics journal Cuadernos de Bioética to withdraw their review after realizing it “indirectly” mentioned some of their unpublished work. According to García, the authors had hoped to publish the new data in a scientific paper before the review came out, but the review ended up being published first.