Statisticians clamor for retraction of paper by Harvard researchers they say uses a “nonsense statistic”


“Uh, hypothetical situation: you see a paper published that is based on a premise which is clearly flawed, proven by existing literature.” So began an exasperated Twitter thread by Andrew Althouse, a statistician at the University of Pittsburgh, in which he debated whether a study using what he calls a “nonsense statistic” should be addressed by letters to the editor or swiftly retracted.

The thread was the latest development in an ongoing disagreement over research in surgery. In one corner, a group of Harvard researchers claims to be improving how surgeons interpret underpowered or negative studies. In the other corner, statisticians argue the authors are making things worse by repeatedly misusing a statistical technique called post-hoc power, giving weak surgical studies an unwarranted pass.
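To see the core of the objection, consider a minimal sketch (assuming a two-sided z-test; this is an illustration of the general critique, not the Harvard authors’ calculation): post-hoc “observed” power, obtained by plugging the observed effect back into the power formula, is a one-to-one function of the p-value, so a non-significant result is guaranteed to look “underpowered.”

```python
# Illustrative sketch only (two-sided z-test): post-hoc "observed" power is a
# deterministic function of the observed p-value, which is why critics call it
# a nonsense statistic. Requires scipy.
from scipy.stats import norm

def observed_power(p_value, alpha=0.05):
    """Power to reject H0 if the true effect equaled the observed effect."""
    z_obs = norm.ppf(1 - p_value / 2)   # |z| implied by the two-sided p-value
    z_crit = norm.ppf(1 - alpha / 2)    # critical value at significance level alpha
    return (1 - norm.cdf(z_crit - z_obs)) + norm.cdf(-z_crit - z_obs)

for p in (0.01, 0.05, 0.20, 0.80):
    print(f"p = {p:.2f}  ->  observed power = {observed_power(p):.2f}")
# Any p above 0.05 maps to observed power below ~0.5, so "the study was
# underpowered" is largely a restatement of "the result was not significant".
```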


Just how common is positive publication bias? Here’s one researcher who’s trying to figure that out

Robbie van Aert

While the presence of publication bias – the selective publishing of positive studies – in science is well known, debate continues about how extensive such bias truly is and the best way to identify it.

The most recent entrant in the debate is a study by Robbie van Aert and co-authors, “Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis,” published in PLoS ONE. Van Aert, a postdoc at the Meta-Research Center in the Department of Methodology and Statistics at Tilburg University in the Netherlands, has been involved in the Open Science Collaboration’s psychology reproducibility project but has now turned his attention to understanding the extent of publication bias in the literature.

Using a sample of meta-analyses from psychology and medicine, the new “meta-meta-analysis” diverges from “previous research showing rather strong indications for publication bias” and instead suggests “only weak evidence for the prevalence of publication bias.” The analysis also found that the mild publication bias it detected affected psychology and medicine to a similar degree.
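For readers wondering what “looking at” publication bias involves, one commonly used check is Egger’s regression test for funnel-plot asymmetry. The sketch below uses made-up effect sizes and standard errors and is not van Aert’s method, just an illustration of the kind of test under debate.

```python
# Egger's regression test, sketched on made-up data (not van Aert's analysis):
# regress each study's standardized effect on its precision; an intercept far
# from zero suggests funnel-plot asymmetry, one possible sign of publication bias.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.31, 0.55, 0.10, 0.48, 0.25])  # hypothetical effect sizes
ses = np.array([0.20, 0.15, 0.25, 0.08, 0.22, 0.12])      # hypothetical standard errors

z = effects / ses         # standardized effects
precision = 1.0 / ses     # 1 / standard error

fit = sm.OLS(z, sm.add_constant(precision)).fit()
intercept, p_intercept = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_intercept:.3f}")
```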

Retraction Watch asked van Aert about his study’s findings. His answers have been lightly edited for clarity and length.

RW: How much are empirical analyses of publication bias influenced by the methods used? Based on your work, do you believe there is a preferred method to look at bias?


Will scientific error checkers become as ubiquitous as spell-checkers?

Jonathan Wren

How common are calculation errors in the scientific literature? And can they be caught by an algorithm?  James Heathers and Nick Brown came up with two methods — GRIM and SPRITE — to find such mistakes. And a 2017 study of which we just became aware offers another approach.

Jonathan Wren and Constantin Georgescu of the Oklahoma Medical Research Foundation used an algorithmic approach to mine abstracts on MEDLINE for statistical ratios (e.g., hazard or odds ratios), as well as their associated confidence intervals and p-values. They then analyzed whether these reported values were compatible with one another. (Wren’s PhD advisor, Skip Garner, is also known for creating such algorithms to spot duplications.)
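As a rough illustration of what such a compatibility check involves (this is not Wren and Georgescu’s code, and it assumes the reported interval is a 95% Wald interval on the log scale, as is typical for odds and hazard ratios): the standard error can be recovered from the width of the confidence interval, which in turn implies a p-value that can be compared with the reported one.

```python
# Rough sketch of a ratio/CI/p-value compatibility check (illustrative only;
# assumes a 95% Wald interval computed on the log scale, as is typical for
# odds and hazard ratios). Not the authors' code.
import math
from scipy.stats import norm

def implied_p_value(ratio, ci_lower, ci_upper, level=0.95):
    """p-value implied by a reported ratio and its confidence interval."""
    z_level = norm.ppf(1 - (1 - level) / 2)
    se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * z_level)  # SE on log scale
    z = math.log(ratio) / se
    return 2 * (1 - norm.cdf(abs(z)))

def looks_discrepant(ratio, ci_lower, ci_upper, reported_p, tol=0.5):
    """Flag results whose reported p-value disagrees with the CI-implied one."""
    implied = implied_p_value(ratio, ci_lower, ci_upper)
    # Compare on a log10 scale so 0.04 vs 0.05 is not flagged but 0.01 vs 0.4 is
    return abs(math.log10(max(implied, 1e-300)) - math.log10(max(reported_p, 1e-300))) > tol

# Example: an odds ratio of 1.50 (95% CI 1.10-2.05) implies p of roughly 0.01,
# so a reported p of 0.4 would be flagged as discrepant.
print(looks_discrepant(1.50, 1.10, 2.05, reported_p=0.4))   # True
print(looks_discrepant(1.50, 1.10, 2.05, reported_p=0.01))  # False
```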

After analyzing almost half a million such figures, the authors found that up to 7.5% were discrepant and likely represented calculation errors. When they examined p-values, they found that 1.44% of the total would have altered the study’s conclusion (i.e., changed statistical significance) had the calculations been performed correctly.

We asked Wren — who says he thinks automatic scientific error-checkers will one day be as common as automatic spell-checkers are now — to answer a few questions about his paper’s approach. This Q&A has been slightly edited for clarity.

Retraction Watch (RW): What prompted you to perform your study?

Should journals credit eagle-eyed readers by name in retraction notices?

Logo of the European Society of Cardiology, EHJ’s publisher

One of the most highly cited journals in cardiology has retracted a paper less than a month after publishing it, in response to criticism first posted on Twitter.

The article, “Short-term and long-term effects of a loading dose of atorvastatin before percutaneous coronary intervention on major adverse cardiovascular events in patients with acute coronary syndrome: a meta-analysis of 13 randomized controlled trials,” was published online January 3 in the European Heart Journal (EHJ). Its authors purported to analyze clinical trials of patients who were given a loading dose of atorvastatin, a cholesterol medication, before undergoing cardiac catheterization.

How closely the study authors adhered to their own methods came into question on January 8, when Ricky Turgeon, a cardiology pharmacist, posted a series of tweets claiming that some of the studies included in the analysis either did not test the drug in patients undergoing the procedure, referred to as PCI, or did not enroll only patients diagnosed with acute coronary syndrome, a group of conditions that includes heart attacks. Because many of the included trials did not meet the predefined inclusion criteria, Turgeon argued, the study’s conclusions are unreliable.

Can a “nudge” stop researchers from using the wrong cell lines?

Anita Bandrowski, a neuroscientist at the University of California, San Diego, works on tools to improve the transparency and reproducibility of scientific methods. (Her work on Research Resource Identifiers, or RRIDs, has been previously featured on Retraction Watch.) This week, Bandrowski and colleagues — including Amanda Capes-Davis, who chairs the International Cell Line Authentication Committee — published a paper in eLife that seeks to determine whether these tools are actually influencing the behavior of scientists, in this case by reducing the number of potentially erroneous cell lines used in published studies.

Such issues may affect thousands of papers. Among more than 300,000 cell line names in more than 150,000 articles, Bandrowski and her colleagues “estimate that 8.6% of these cell lines were on the list of problematic cell lines, whereas only 3.3% of the cell lines in the 634 papers that included RRIDs were on the problematic list,” suggesting “that the use of RRIDs is associated with a lower reported use of problematic cell lines.” 
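A minimal sketch of the text-mining step involved (not Bandrowski’s pipeline): cell-line RRIDs use Cellosaurus accessions of the form RRID:CVCL_XXXX, which can be pulled out of methods text with a regular expression and screened against a list of problematic lines. The second identifier and the problematic set below are placeholders for illustration.

```python
# Illustrative sketch only (not the authors' pipeline): extract cell-line RRIDs
# (Cellosaurus accessions, e.g. RRID:CVCL_0030 for HeLa) from methods text and
# screen them against a list of problematic cell lines. The PROBLEMATIC set here
# is a placeholder; the real register is curated by ICLAC.
import re

RRID_PATTERN = re.compile(r"RRID:\s*(CVCL_[0-9A-Z]{4})")
PROBLEMATIC = {"CVCL_9999"}  # hypothetical entry for illustration only

def screen_methods_text(text):
    """Return (all cell-line RRIDs found, those on the problematic list)."""
    found = set(RRID_PATTERN.findall(text))
    return found, found & PROBLEMATIC

methods = ("Cells were cultured as described (HeLa, RRID:CVCL_0030; "
           "LineX, RRID: CVCL_9999).")
all_ids, flagged = screen_methods_text(methods)
print(all_ids)   # {'CVCL_0030', 'CVCL_9999'}
print(flagged)   # {'CVCL_9999'}
```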

Retraction Watch spoke with Bandrowski about the role of these tools in the larger movement to improve transparency and reproducibility in science, and whether meta-scientific text-mining approaches will gain traction in the research community.

Retraction Watch (RW): Your study presents RRID as a behavioral “nudge,” beyond its primary goal of standardizing method reporting. What other nudges can you envision to prevent misuse of cell lines in scientific research?