What leads to bias in the scientific literature? New study tries to answer

By now, most of our readers are aware that some fields of science have a reproducibility problem. Part of the problem, some argue, is the publishing community’s bias toward dramatic findings — namely, studies that show something has an effect on something else are more likely to be published than studies that don’t.

Many have argued that scientists publish such data because that’s what is rewarded — by journals and, indirectly, by funders and employers, who judge a scientist based on his or her publication record. But a new meta-analysis in PNAS says it’s a bit more complicated than that.

In a paper released today, researchers led by Daniele Fanelli and John Ioannidis — both at Stanford University — suggest that the so-called “pressure to publish” does not appear to bias studies toward larger effect sizes. Instead, they argue, bigger sources of bias are small sample sizes (which can yield skewed samples that show inflated effects, as the sketch below illustrates) and the relegation of studies with smaller effects to the “gray literature,” such as conference proceedings, PhD theses, and other less publicized formats.
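To see why small samples can inflate published effects, here is a minimal simulation sketch in Python (ours, not the authors’): draw many small and many large two-group samples from a population with a modest true effect, keep only the comparisons that reach statistical significance (a stand-in for publishing only positive results), and compare the average estimated effect sizes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.2  # modest true standardized effect (Cohen's d)

def mean_significant_effect(n_per_group, n_sims=5000, alpha=0.05):
    """Average estimated Cohen's d among simulations that reach p < alpha in the positive direction."""
    kept = []
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(TRUE_D, 1.0, n_per_group)
        t, p = stats.ttest_ind(b, a)
        if p < alpha and t > 0:  # "published" positive result
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            kept.append((b.mean() - a.mean()) / pooled_sd)
    return np.mean(kept)

for n in (15, 200):
    print(f"n per group = {n:>3}: mean 'published' d = {mean_significant_effect(n):.2f} (true d = {TRUE_D})")

With 15 participants per group, only comparisons that happen to show a much larger apparent effect clear the significance bar, so the surviving estimates run several times the true effect; with 200 per group, the surviving estimates sit much closer to 0.2.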

However, Ferric Fang of the University of Washington — who did not participate in the study — approached the findings with some caution:

I find the data to be interesting but would not attempt to make the generalization that ‘publication pressure is not a source of bias’ on the basis of this study.

Indeed, Fang — a member of the board of our parent non-profit organization — noted that Fanelli himself published a paper in 2010 suggesting that the pressure to publish does, in fact, increase the odds of positive findings.

But in the PNAS paper, Fanelli and his colleagues suggest something entirely different. As they write:

…the notion that pressures to publish have a direct effect on bias was not supported and even contrarian evidence was seen: The most prolific researchers and those working in countries where pressures are supposedly higher were significantly less likely to publish overestimated results, suggesting that researchers who publish more may indeed be better scientists and thus report their results more completely and accurately.

Fanelli noted that, since 2010, he’s published other papers that support his more recent findings. For instance, in 2015 he suggested that pressures to publish have little influence on misconduct, and the following year released data suggesting scientists don’t publish more when the pressure to do so increases.

He added that even though the latest findings include only studies that were part of meta-analyses, he saw no reason why studies outside meta-analyses would point to different sources of bias in the literature:

To the best of my knowledge, all the evidence that we have about pressures to publish comes from surveys, i.e. what scientists say. Now, there is no question that these pressures exist, we all feel them, and it is reasonable to suspect that they might have negative effects on the literature. However, as scientists we should verify empirically whether these concerns are justified or not. And, to the best of my current understanding, as explained above, evidence suggests that they are partially misguided.

There are other practices that are induced by pressures to publish that should concern us and that are completely overlooked. In particular, authors might be increasingly “salami-slicing” their collaborations. This is not only ethically questionable, but seems to be connected potentially to misconduct (see studies above) and even bias, as we showed in this study!

Here’s how the authors defined pressure to publish:

Scientists subjected to direct or indirect pressures to publish might be more likely to exaggerate the magnitude and importance of their results to secure many high-impact publications and new grants. One type of pressure to publish is induced by national policies that connect publication performance with career advancement and public funding to institutions.

To measure it, they examined the tens of thousands of individual papers included in more than 3,000 meta-analyses from 22 fields, recording the pressures the original authors might have faced. For instance, to gauge country-level pressure, they checked whether an author’s home country or institution offered cash or career incentives for publishing papers. They also recorded how productive each author was.
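As a rough illustration of that kind of analysis — a sketch under our own assumptions, not the authors’ code or data — one could regress each study’s deviation from its meta-analytic summary effect on proxies for publication pressure. The file and column names below are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per primary study, with assumed columns:
#   effect           - study's standardized effect size
#   summary_effect   - summary effect of the meta-analysis it belongs to
#   cash_incentive   - 1 if the first author's country offers publication bonuses
#   career_incentive - 1 if publications are tied to career advancement
#   author_papers    - first author's publication count (productivity proxy)
#   n                - study sample size
studies = pd.read_csv("studies.csv")

# Proxy for overestimation: signed deviation from the meta-analytic summary.
studies["overestimate"] = studies["effect"] - studies["summary_effect"]

model = smf.ols(
    "overestimate ~ cash_incentive + career_incentive + author_papers + n",
    data=studies,
).fit()
print(model.summary())

In such a setup, a positive coefficient on a pressure proxy would suggest that pressure goes along with inflated effects; the PNAS paper reports the opposite pattern for author productivity and country-level pressures.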

These measures have some flaws, Fang told us:

The inference of bias is based on an overestimation of effect size and the inference of publish-or-perish pressures is based on national publication incentives and individual publication productivity.  Each of these assumptions has its limitations, and I am not aware that the latter criteria have been independently validated against direct measurement of publication pressures.

Ioannidis acknowledged some limitations to the approach:

As we comment in the paper, the measures that we use to assess pressures to publish in the PNAS paper cannot exclude some confounding. E.g. countries or institutions who put these specific pressures to publish may also have other features in the way they do research that make their research more reliable, e.g. better trained scientists, more funding, more supervision and regulatory oversight, better control or transparency on conflicts of interest, etc.

He added:

The perfect truth about any effect size is unknown. What we can assess with our approach is the relative effects, whether some studies have bigger and others have smaller effects. E.g. in theory, in some cases, all the studies may have the same effect and they may all be biased by the same amount. So, our method captures the excess irregularity, but could miss some major pervasive biases that invalidate specific fields at large and bias all the studies in the same way. On the other hand, a larger effect in one study may not always mean that this study is more biased than one with a smaller effect. E.g. a poorly run study may miss an effect or underestimate it. Our results depend on seeing large-scale patterns across thousands of studies and meta-analyses.
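A quick way to see Ioannidis’s first point, using toy numbers of our own rather than anything from the paper: if every study in a meta-analysis were inflated by the same amount, the within-meta-analysis deviations that this kind of method relies on would not change at all.

import numpy as np

effects = np.array([0.10, 0.25, 0.40])  # hypothetical study effects in one meta-analysis
biased = effects + 0.30                 # every study inflated by the same amount

print(effects - effects.mean())  # [-0.15  0.    0.15]
print(biased - biased.mean())    # identical deviations, so uniform bias is invisible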

Fanelli told us:

These are complex and controversial issues, and even me and John [Ioannidis] may not agree on what is actually going on. Nonetheless, by now I indeed no longer believe that we are getting the publish or perish narrative right…


4 thoughts on “What leads to bias in the scientific literature? New study tries to answer”

  1. Both authors are well known and distinguished. I read only the abstract of their paper. Based on that cursory evaluation, I believe the authors have missed two areas that are mentioned in our book on peer review. We are studying the subject at Georgetown University and will publish the results in due time.
    1. The bias of the editors, as in the publication of the Wakefield paper and the new paper on statins, both in The Lancet.
    2. The desire of open-access journals to make money

