Two years ago, following heated debate, a sports science journal banned a statistical method from its pages, and a different journal, which had earlier published a defense of that method, decided to boost its statistical chops. But as Matthew Tenan, a data scientist with a PhD in neuroscience, relates in this three-part series, that doesn’t seem to have made it any easier to correct the scientific record. Here’s part one.
In July 2019, my colleague Andrew Vigotsky contacted me. He was curious, he said, whether a paper published in Sports Medicine had undergone statistical review, because he was concerned about some of its claims. The link he sent me was to “A Method to Stop Analyzing Random Error and Start Analyzing Differential Responders to Exercise,” a paper by Scott Dankel and Jeremy Loenneke published on June 28, 2019.
As it happened, I knew that paper, and I had already expressed concerns about it when I reviewed it before publication as a member of the journal’s editorial board. Indeed, I had been brought onto the editorial board of Sports Medicine because the journal had recently received a lot of bad press for publishing a paper about another “novel statistical method” with significant issues, and because I had been a vocal critic of the sports medicine and sport science fields developing their own statistical methods that are neither used outside the field nor validated by the wider statistics community.