Will scientific error checkers become as ubiquitous as spell-checkers?

[Image: Jonathan Wren]

How common are calculation errors in the scientific literature? And can they be caught by an algorithm? James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study that recently came to our attention offers another approach.
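For a sense of how simple some of these checks can be, consider GRIM, which asks whether a reported mean is arithmetically possible given the sample size and integer-valued data (such as Likert responses). The Python sketch below is a minimal illustration of the published idea, not Heathers and Brown's own code:

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM check: can a mean reported to `decimals` places arise from
    `n` integer-valued observations? The mean of n integers is k/n for
    some integer k, so we test the integer totals nearest the implied sum."""
    target = reported_mean * n
    for k in (round(target) - 1, round(target), round(target) + 1):
        if round(k / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# A mean of 5.19 from n = 28 integer responses is impossible:
# 145/28 rounds to 5.18 and 146/28 to 5.21, so 5.19 cannot occur.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.21, 28))  # True
```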

Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine MEDLINE abstracts for statistical ratios (e.g., hazard or odds ratios), along with their associated confidence intervals and p-values, and analyzed whether these figures were compatible with one another. (Wren’s PhD advisor, Skip Garner, is also known for creating similar algorithms to spot duplicated publications.)
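The paper's exact implementation isn't reproduced here, but the arithmetic behind such a compatibility check is standard: on the log scale, a 95% Wald-type confidence interval spans roughly ±1.96 standard errors, so the interval implies a z-statistic, and hence a p-value, that can be compared with the reported one. A minimal sketch, assuming a 95% interval and a two-sided test (the function names and tolerance are ours, not the authors'):

```python
import math

def implied_p_value(ratio: float, ci_low: float, ci_high: float) -> float:
    """Two-sided p-value implied by a ratio (e.g., an odds or hazard
    ratio) and its 95% CI. Ratios are treated on the log scale, where
    a 95% Wald interval spans +/- 1.96 standard errors."""
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = abs(math.log(ratio)) / se
    # Two-sided tail area of the standard normal: p = erfc(z / sqrt(2))
    return math.erfc(z / math.sqrt(2))

def is_discrepant(ratio, ci_low, ci_high, reported_p, tol=0.01) -> bool:
    """Flag a reported p-value that disagrees with the implied one
    beyond a crude tolerance (a real checker must account for rounding
    of the reported values)."""
    return abs(implied_p_value(ratio, ci_low, ci_high) - reported_p) > tol

# An OR of 1.50 with 95% CI (1.10, 2.05) implies p of about 0.011,
# so a reported p = 0.20 would be flagged as a likely calculation error.
print(round(implied_p_value(1.50, 1.10, 2.05), 3))       # ~0.011
print(is_discrepant(1.50, 1.10, 2.05, reported_p=0.20))  # True
```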

After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. When they examined p-values, they found that 1.44% of the total would have altered the study’s conclusion (i.e., changed statistical significance) had the calculations been performed correctly.

We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.

Retraction Watch (RW): What prompted you to perform your study?

Should journals credit eagle-eyed readers by name in retraction notices?

[Image: logo of the European Society of Cardiology, EHJ’s publisher]

One of the most highly cited journals in cardiology has retracted a paper less than a month after publishing it, in response to criticism first posted on Twitter.

The article, “Short-term and long-term effects of a loading dose of atorvastatin before percutaneous coronary intervention on major adverse cardiovascular events in patients with acute coronary syndrome: a meta-analysis of 13 randomized controlled trials,” was published online January 3 in the European Heart Journal (EHJ). Its authors purported to analyze clinical trials of patients who were given a loading dose of atorvastatin, a cholesterol medication, before undergoing cardiac catheterization.

How closely the study authors adhered to their own methods came under question on January 8, when Ricky Turgeon, a cardiology pharmacist, posted a series of tweets claiming that some of the studies included in the analysis either did not test the drug in patients undergoing the procedure, known as PCI, or enrolled patients who had not all been diagnosed with acute coronary syndrome, a spectrum of conditions that includes heart attack. Because many of the included trials did not meet the predefined inclusion criteria, Turgeon argued, the study’s conclusions are unreliable.

Can a “nudge” stop researchers from using the wrong cell lines?

Anita Bandrowski, a neuroscientist at the University of California, San Diego, works on tools to improve the transparency and reproducibility of scientific methods. (Her work on Research Resource Identifiers, or RRIDs, has been previously featured on Retraction Watch.) This week, Bandrowski and colleagues — including Amanda Capes-Davis, who chairs the International Cell Line Authentication Committee — published a paper in eLife that asks whether these tools actually influence the behavior of scientists, in this case by reducing the use of misidentified or contaminated cell lines in published studies.

Such issues may affect thousands of papers. Among more than 300,000 cell line names in more than 150,000 articles, Bandrowski and her colleagues “estimate that 8.6% of these cell lines were on the list of problematic cell lines, whereas only 3.3% of the cell lines in the 634 papers that included RRIDs were on the problematic list,” suggesting “that the use of RRIDs is associated with a lower reported use of problematic cell lines.” 
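The screening step itself is easy to illustrate: normalize each cell line name extracted from a paper and look it up in a register of known problematic lines, such as ICLAC's. The sketch below is our own illustration, not the authors' pipeline; the register shown holds just a few well-known misidentified lines as placeholders:

```python
# Placeholder register; a real check would load the full ICLAC list
# of misidentified cell lines.
PROBLEMATIC_LINES = {"HEP-2", "INT 407", "KB", "CHANG LIVER"}

def normalize(name: str) -> str:
    """Crude normalization: uppercase and drop punctuation/whitespace,
    since the same line appears as 'HEp-2', 'Hep2', 'HEP 2', etc."""
    return "".join(ch for ch in name.upper() if ch.isalnum())

NORMALIZED_REGISTER = {normalize(n) for n in PROBLEMATIC_LINES}

def flag_problematic(cell_lines: list[str]) -> list[str]:
    """Return the input names whose normalized form is in the register."""
    return [n for n in cell_lines if normalize(n) in NORMALIZED_REGISTER]

# Two of these four mentions would be flagged for a closer look.
print(flag_problematic(["HeLa", "HEp-2", "Chang liver", "MCF-7"]))
# ['HEp-2', 'Chang liver']
```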

Retraction Watch spoke with Bandrowski about the role of these tools in the larger movement to improve transparency and reproducibility in science, and whether meta-scientific text-mining approaches will gain traction in the research community.

Retraction Watch (RW): Your study presents RRID as a behavioral “nudge,” beyond its primary goal of standardizing method reporting. What other nudges can you envision to prevent misuse of cell lines in scientific research?