‘A flawed decision:’ What happened when sports scientists tried to correct the scientific record, part 2

Matthew Tenan

Why is it so difficult to correct the scientific record in sports science? In the first installment in this series of guest posts, Matthew Tenan, a data scientist with a PhD in neuroscience, began the story of how he and some colleagues came to scrutinize a paper. In this post, he explains what happened next.

The journal Sports Medicine is widely considered one of the top journals – if not the top journal – in the fields of sport science, exercise science and physical education. The journal is managed by two professional editors who do not hold PhDs in the journal's subject area but are generally versed in the topic and have the goal of managing a successful journal for SpringerNature.

The manuscript by Dankel and Loenneke was reviewed by three reviewers. I know this because I was one of the reviewers and, as noted in the first post in this series, I strongly advised against its publication. Greg Atkinson, a practicing scientist in the area of health sciences, has publicly stated, in a private Facebook group, that he was one of the reviewers who recommended the paper be published. Atkinson, the manuscript's senior author, Loenneke, and I all sit on the editorial board of the journal Sports Medicine. And while the paper Dankel and Loenneke published in the journal proposes a novel statistical method, neither of the two authors on the manuscript, nor Atkinson, nor I have PhDs in statistics. The published paper does not cite a single statistics journal in the course of reporting its "novel method."

What could go wrong, right?

When Andrew Vigotsky contacted me to raise concerns about the paper, I knew a lot had indeed gone wrong. I immediately contacted Steve McMillan, co-editor-in-chief of Sports Medicine, to raise my concerns. McMillan – whom I believe is committed to publishing good research in the journals he runs – referred me to Roger Olney, his co-editor-in-chief, who handled the manuscript. Olney wrote, among other things, that "…other two reviewers were also experts in the area (both in terms of statistical expertise and a specific interest in the topic) and I accordingly considered their comments and recommendations to be equally credible. When experts disagree, it can be difficult for editors to make a publication decision, and in such circumstances we tend to side with the majority view."

I am generally sympathetic to this viewpoint, especially in the case of professional editors who are not actively publishing in the field. However, I felt that I had provided substantial evidence from the statistical community showing that the Dankel and Loenneke method was not valid, and that when reviewers disagree, the evidence the reviewers provide – not merely their opinions – is what should be weighed. Moreover, a "difference of opinion" between reviewers is not a rationale that holds when the math does not add up. I needed to do the math.

Fortunately, I had a transcontinental flight and nothing but time. It only took me 45 minutes in a cramped airplane cabin with no internet to "break" the Dankel and Loenneke method, demonstrating that the error rates of their method were incorrect under even mild deviations from constant measurement error.

If my rough simulations were correct, there was a touch of irony that the Dankel and Loenneke manuscript's title proposed to "…Stop Analyzing Random Error…" when the method itself was highly susceptible to non-constant measurement error. I worked with Aaron Caldwell to flesh out the simulations. Vigotsky was particularly helpful in formalizing the mathematical issues and flaws surrounding the Dankel and Loenneke method.
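
To make the susceptibility concrete, here is a minimal sketch – not the Dankel and Loenneke method itself, but a generic illustration of the failure mode our simulations probed: a cutoff for declaring an individual's pre/post change "real," calibrated as if measurement error were constant across subjects, applied to subjects whose true error variance differs. The strata, SD values, and 1.96 cutoff below are all illustrative assumptions, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated subjects per error stratum

# Two strata of subjects with different measurement-error SDs
# (hypothetical values; "mild" heteroscedasticity for illustration)
sd_low, sd_high = 0.5, 2.0

# Null scenario: no true change, so any pre/post difference is pure noise.
# A difference of two independent measurements has SD = sqrt(2) * error SD.
diff_low = rng.normal(0, np.sqrt(2) * sd_low, n)
diff_high = rng.normal(0, np.sqrt(2) * sd_high, n)

# A pooled "typical error" estimated as if error were constant across subjects,
# yielding a nominal 5% two-sided cutoff for flagging a "real" change
pooled_var = (sd_low**2 + sd_high**2) / 2
threshold = 1.96 * np.sqrt(2 * pooled_var)

fp_low = np.mean(np.abs(diff_low) > threshold)
fp_high = np.mean(np.abs(diff_high) > threshold)
fp_overall = (fp_low + fp_high) / 2

print(f"false-positive rate, low-error subjects:  {fp_low:.3f}")
print(f"false-positive rate, high-error subjects: {fp_high:.3f}")
print(f"overall false-positive rate:              {fp_overall:.3f}")
```

Under this setup the high-error subjects are flagged as "responders" far more often than the nominal 5%, even though nothing but noise was simulated – the same qualitative behavior, if not the same magnitudes, that our formal proofs and simulations documented.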

On August 8, 2019, we contacted Dankel and Loenneke to relay our concerns about their manuscript and the incorrect error rates we identified. We also provided them with our full proofs and simulations. The authors were cordial, but have repeatedly refused to provide any sort of simulation or mathematics demonstrating that their "novel method" was effective and had the properties they claim. Notably, they also failed to find any issue with the simulations or mathematics we provided them showing their method to be flawed. They continually offer a statistical-sounding rationale in which they argue that the error rate is 5%; however, our math and simulations show that their error rates can be well north of 60%.

Since they weren't able to provide any actual evidence that their method has the properties they claim, we asked them to retract their paper and offered to co-author a new manuscript that would focus on better describing the problem they wished to address with their method, in addition to presenting valid, established statistical solutions. They rejected our overture and stated, "We spoke with a biostatistician who recommended that we do not retract the paper."

We asked Dankel and Loenneke to provide the name of their anonymous biostatistician, which they did, while asking us not to contact him. After some debate among my colleagues, we decided that it was important to know whether the biostatistician thought our math was wrong or unconvincing before we continued our push for retraction. We contacted the biostatistician, but have yet to receive a response.

We went back to the journal, this time asking them to editorially retract the Dankel and Loenneke manuscript because it is fatally flawed, misrepresents the mathematics, and because the authors have been unable to provide any evidence that their method meets the standards they claim it does. The editors declined, but invited us to write a letter to the editor and said they would reconsider their decision after reviewing our letter and Dankel and Loenneke's response.

We submitted our formal letter to the editor on October 1, 2019, and Dankel and Loenneke submitted a response around October 20. On November 21, the editors told us that after considering both letters, they had decided to publish the correspondence but not retract the manuscript. Both letters appeared on December 21.

Needless to say, we think that is a flawed decision.

Read part three here.


6 thoughts on “‘A flawed decision:’ What happened when sports scientists tried to correct the scientific record, part 2”

  1. Sports Medicine has a high IF simply because it is strictly a review journal, and people find it convenient to cite reviews vs. original sources. Among those actually working in the field, it is hardly considered a “top journal”.

    1. Hi Andrew,
      There is always that discussion around how one quantifies "top journal," so you are correct, I was using IF as a metric in this case. It isn't entirely correct to state that Sports Medicine is a review journal, as I think they're at something like 50% reviews, 25% original research and 25% commentaries. I actually think that not being purely a review journal increases the impact factor more than publishing reviews alone would. My understanding is that citations to commentaries are counted in the IF numerator but that the commentary itself is not counted in the IF denominator. It wouldn't shock me if this is actually the reason you're seeing more of the American Physiological Society's journals doing more commentary and back-and-forth style publications.

      Anyhow, quantifying “Top Journal” is challenging and the moment you define a metric, you’ve got journals gaming said metric. I find publication quality in the Exercise Science/Sport Science field to be uneven, regardless of Journal.
      Cheers,
      Matt

      1. I find publication quality in all fields/journals to be spotty. Such is the nature (pun intended) of the beast.

  2. They specifically asked you not to contact the biostatistician, which you then did because you thought it was important.

    How is that in any way ethical?

    1. I appreciate the comment. Our decision was not made lightly. It was made because the authors of the manuscript did not appear equipped to address the mathematics and simulation evidence which we were presenting them. Our hope was that “their biostatistician” would be able to inform us if our simulations or math were not consistent with their proposed method. Unfortunately, “their biostatistician” decided not to respond to our query and we did not send follow ups as we would have viewed this as inappropriate.

      We needed to weigh the requests of the authors (not to contact their biostatistician) against the need to confirm whether or not their claimed method was valid. We are comfortable with the decision made because having invalid statistical methods polluting the applied scientific literature can have long term financial and public health ramifications.

    2. Indeed! How could it be ethical to forbid contact with the biostatistician? It might be impolite, but it certainly isn’t unethical to contact them, particularly on a statistical technique being put forward by non-statisticians.
