Two years ago, following heated debate, a sports science journal banned a statistical method from its pages, and a different journal, which had earlier published a defense of that method, decided to boost its statistical chops. But as Matthew Tenan, a data scientist with a PhD in neuroscience, relates in this three-part series, that doesn’t seem to have made it any easier to correct the scientific record. Here’s part one.
In July 2019, my colleague Andrew Vigotsky contacted me. He was curious, he said, whether a paper published in Sports Medicine had undergone statistical review because he was concerned about some of its claims. The link he sent me was to “A Method to Stop Analyzing Random Error and Start Analyzing Differential Responders to Exercise,” a paper published on June 28, 2019 by Scott Dankel and Jeremy Loenneke.
As it happened, I knew that paper, and I had already expressed concerns about it when I reviewed it before publication as a member of the journal’s editorial board. Indeed, I had been brought onto the editorial board of Sports Medicine because the journal had recently received a lot of bad press for publishing a paper about another “novel statistical method” with significant issues, and because I had been a vocal critic of the sports medicine and sport science fields developing their own statistical methods that are neither used outside the field nor validated by the wider statistics community.
The paper by Dankel and Loenneke proposes a novel statistical method to assess “differential responders,” what some researchers call responder analyses. The flap over Magnitude Based Inference (MBI), the previous “novel statistical method,” was on my mind when I began reviewing the Dankel and Loenneke paper in April 2019. The field of sport science has a history of inventing “novel statistical methods” that simply do not work, or whose developers make misleading claims about how they work and how they should be interpreted.
In the case of MBI, Will Hopkins, a former Sports Medicine editorial board member, has made a number of claims over the years that are demonstrably false (e.g., that it is a “reference Bayes method” or that it has “superior Type I and Type II error rates to standard null hypothesis testing”). Yet it gained traction in many areas of sport science research and is currently in use by a substantial number of professional soccer/football and rugby teams. Hopkins, now a professor at Victoria University, does not have a Ph.D. in statistics, but his university continues to sell courses in which it promotes MBI.
All of that meant I was happy to review the Dankel and Loenneke manuscript, because I am very suspicious of both responder analyses and statistical methods developed by sport scientists. While I wasn’t familiar with Dankel’s work, I knew Loenneke had done some interesting, influential work in the area of muscle physiology. I read the manuscript and said in my comments to the authors that it omitted many key publications on the problems with responder analyses, that it suffered from what the statistician Stephen Senn calls “dichotomania” (the unreasonable urge to categorize continuous variables and outcomes), and that they should review the public discussions the well-known statistician Frank Harrell has posted about responder analyses.
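To see why statisticians like Senn and Harrell object, consider a toy simulation. This is my own sketch, not anything from the paper under review, and the numbers are arbitrary: every subject is given exactly the same true training response, the only variation comes from measurement error, and yet dichotomizing the observed change scores still labels roughly half the sample “non-responders.”

```python
# Minimal sketch (illustrative only, not the method from the Dankel and
# Loenneke paper): every subject has the SAME true training response;
# measurement error alone creates apparent "responders" and "non-responders"
# once the continuous change score is dichotomized.
import numpy as np

rng = np.random.default_rng(42)

n_subjects = 200
true_response = 5.0      # identical true improvement for everyone (e.g., kg)
measurement_sd = 4.0     # assumed test-retest measurement error

# Observed change = true response + follow-up error - baseline error
observed_change = (true_response
                   + rng.normal(0, measurement_sd, n_subjects)
                   - rng.normal(0, measurement_sd, n_subjects))

# "Dichotomania": split a continuous outcome at an arbitrary threshold
threshold = 5.0
responders = observed_change > threshold

print(f"Apparent responders:     {responders.sum()}")
print(f"Apparent non-responders: {(~responders).sum()}")
# Roughly half the sample is labeled a "non-responder" even though, by
# construction, every subject responded identically; the split reflects
# random error, not differential response.
```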
In confidential notes to the editor, I wrote: “their proposed method is not effective, nor does it solve the actual problem of looking at responders vs. non-responders. They have also not done a good enough literature review outside of the exercise science field in this area. I fear this is an MBI episode waiting to occur.”
Upon completing my review, I believed I had provided both the authors and the editor with enough information to reject the current version of the manuscript and also reconsider the idea of responder analyses entirely. Apparently, I was incorrect. And in the more than ten months since I received that email from Vigotsky, I’ve been trying to convince the editors of the journal to make what I think are necessary corrections to the scientific record.
See installment two here.