While the presence of publication bias – the selective publishing of positive studies – in science is well known, debate continues about how extensive such bias truly is and the best way to identify it.
The most recent entrant in the debate is a study by Robbie van Aert and co-authors, “Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis,” published in PLoS ONE. Van Aert, a postdoc at the Meta-Research Center in the Department of Methodology and Statistics at Tilburg University in the Netherlands, has been involved in the Open Science Collaboration’s psychology reproducibility project but has now turned his attention to understanding the extent of publication bias in the literature.
Based on a sample of meta-analyses from psychology and medicine, the new “meta-meta-analysis” diverges from “previous research showing rather strong indications for publication bias” and instead finds “only weak evidence for the prevalence of publication bias.” The analysis also found that the mild publication bias it detected affects psychology and medicine to a similar degree.
Retraction Watch asked van Aert about his study’s findings. His answers have been lightly edited for clarity and length.
RW: How much are empirical analyses of publication bias influenced by the methods used? Based on your work, do you believe there is a preferred method for detecting bias?