When Retraction Watch readers think of problematic psychology research, their minds might naturally turn to Diederik Stapel, who now has 54 retractions under his belt. Dirk Smeesters might also tickle the neurons.
But a look at our psychology category shows that psychology retractions are an international phenomenon. (Remember Marc Hauser?) And a new paper in the Proceedings of the National Academy of Sciences (PNAS) suggests that it’s behavioral science researchers in the U.S. who are more likely to exaggerate or cherry-pick their findings.
For the new paper, Daniele Fanelli — whose 2009 paper in PLoS ONE contains some of the best data on the prevalence of misconduct — teamed up with John Ioannidis, well known for his work on “why most published research findings are false.” They looked at
1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research, sampled from the Web of Science categories Genetics & Heredity and Psychiatry, and measured how individual results deviated from the overall summary effect size within their respective meta-analysis.
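In plain terms, the unit of analysis is each primary study's distance from the consensus estimate of its own meta-analysis. Here is a minimal Python sketch of that idea; the toy data and the use of a plain unweighted mean as the "summary effect" are our assumptions for exposition, not the authors' code (a real meta-analysis would use an inverse-variance weighted estimate of standardized effect sizes):

```python
# Illustrative sketch only -- not the authors' code or data. Each tuple is a
# hypothetical primary outcome: (meta-analysis id, corresponding-author
# country, signed effect size, positive = hypothesized direction).
from collections import defaultdict

outcomes = [
    ("MA1", "US", 0.62), ("MA1", "UK", 0.35), ("MA1", "US", 0.58),
    ("MA2", "DE", 0.10), ("MA2", "US", 0.41), ("MA2", "NL", 0.15),
]

# Summary effect per meta-analysis (a plain mean here for simplicity; real
# meta-analyses weight each study by the inverse of its variance).
by_meta = defaultdict(list)
for meta_id, _, effect in outcomes:
    by_meta[meta_id].append(effect)
summary = {meta_id: sum(es) / len(es) for meta_id, es in by_meta.items()}

# Deviation of each primary outcome from its own summary effect. A positive
# deviation means the study overshot its meta-analysis in the hypothesized
# direction -- the quantity whose link to author country the paper examines.
for meta_id, country, effect in outcomes:
    deviation = effect - summary[meta_id]
    print(f"{meta_id} {country}: effect={effect:+.2f}, deviation={deviation:+.2f}")
```

The paper then asks whether those deviations run systematically larger, in the direction the experimenters predicted, for studies with US-based corresponding authors.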
They found that studies
whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters.
But they didn’t find the same to be true for non-behavioral studies.
Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
So where might this predisposition come from? The authors write:
A complete explanation would probably invoke a combination of cultural, economic, psychological, and historical factors, which at this stage are largely speculative. Our preferred hypothesis is derived from the fact that researchers in the United States have been exposed for a longer time than those in other countries to an unfortunate combination of pressures to publish and a winner-takes-all system of rewards (20, 22). This condition is believed to push researchers into either producing many results and then only publishing the most impressive ones, or making the best of the results they got by making them seem as important as possible, through post hoc analyses, rehypothesizing, and other more or less questionable practices (e.g., 10, 13, 22, 26). Such a pattern of modulating forces may gradually become more prevalent in other countries as well, now and in the near future (18, 20, 21).
We asked Fanelli whether the combination of his findings and cases like Stapel's suggests that US behavioral scientists are more likely to exaggerate, while some EU behavioral scientists are more likely to just make things up. He said it was “an interesting hypothesis” that “could eventually be tested”:
But I currently wouldn’t think so. Evidence from surveys and other sources suggests that fabrication and falsification are just the extremes of a continuum, and that the bulk of the problem lies in questionable and/or unconscious research choices. Behind the US effect there might be some misconduct and some intentional bias, although I think that for the most part this phenomenon escapes the conscious awareness of any individual researcher. The implication, if the above is true, is that misconduct might actually be slightly higher in the US compared to other countries.
However, the point here is more general. Whether the US effect is voluntary or not, the end result of these choices is the same: a higher rate of false positives and exaggerated findings, which in the future might need corrections and in some cases retractions.
And Fanelli was also quick to point out that this kind of exaggeration doesn’t seem to be exclusive to the U.S.
The US are an ideal subject because they are relatively homogeneous and yet very big and scientifically productive, so it was easy for us to compare the US to the rest of the world. And of course the US-effect was especially interesting, since it helped us exclude classic explanations, such as editorial biases and simple file-drawer effects. But we suspect that with higher statistical power we would observe specific biases in other countries, in Europe and elsewhere, possibly limited to specific fields and periods in time.
Before opening the floor to what we hope will be a robust discussion, we’ll close with the lovely description of science that opens the paper:
Science is a struggle for truth against methodological, psychological, and sociological obstacles.
We desperately need specialists from every field of study to come forward and start quantifying the fraud, the misconduct, and the shenanigans by scientists and by publishers through formal, open access publications like this one. Hats off to Fanelli for speaking the truth and for quantifying his statements.
Fanelli and Ioannidis is not OA. It’s behind a paywall at PNAS. I confess I find that reassuring.
PNAS papers become open access after 6 months.
This Saturday there was an interview with Diederik Stapel in the Dutch newspaper Trouw. With the help of Google, I attempt a translation:
He is proud of his old field and the way social psychology has recovered. “I think social psychology now comes first as far as integrity is concerned. They are now light years ahead of other academic fields. Especially many young researchers are concerned with good and solid research execution and replication. I find that beautiful to see. It was a great pity, for example, how the president of the Royal Academy of Sciences, Hans Clevers, called social psychology an immature science. Or how the Levelt Committee not only committed character assassination on me, but also violated an entire discipline.”
http://translate.google.com/translate?sl=nl&tl=en&js=n&prev=_t&hl=nl&ie=UTF-8&u=http%3A%2F%2Fwww.trouw.nl%2Ftr%2Fnl%2F6700%2FWetenschap%2Farticle%2Fdetail%2F3497892%2F2013%2F08%2F24%2FDiederik-Stapel-Iedereen-heeft-recht-op-een-tweede-kans.dhtml
For some reason Stapel is too kind here.
There are thousands of research publications in the field of social psychology that we cannot trust because of the possibility of exaggeration and publication bias. The stack should be cleaned up.
Last year the journal Psychological Science published an insane paper by Lewandowsky on climate science skeptics. This should be a far bigger scandal than “Stapel”, but it is not. Honest scientists, please stand up; otherwise your field should not be taken seriously.
“Especially many young researchers are concerned with good and solid research execution and replication.” Yeah, yeah, he would know…
The key has to be that “peer review” starts when you submit a paper and continues after its publication. PubPeer (www.Pubpeer.com), an early step in developing this process, seems to be having some success in this respect.