The scientific paper inspired international headlines with its bold claim that the combination of brain scans and machine learning algorithms could identify people at risk for suicide with 91% accuracy.
The promise of the work garnered lead author Marcel Adam Just of Carnegie Mellon University in Pittsburgh and co-author David Brent of the University of Pittsburgh a five-year, $3.8 million grant from the National Institute of Mental Health to conduct a larger follow-up study.
But the 2017 paper drew immediate and sustained scrutiny from other experts, one of whom attempted to replicate it and uncovered a key problem. Nothing happened until this April, when the authors acknowledged the work was flawed and retracted the article. By then, it had been cited 134 times in the scientific literature, according to Clarivate's Web of Science, a high count for such a young paper, and had received so much attention online that it ranks in the top 5% of all research tracked by Altmetric, a data company focused on scientific publishing.
All this could have been avoided if the journal had followed the advice of its own reviewers, according to records of the peer-review process obtained by Retraction Watch. The experts who evaluated the manuscript before publication identified many issues in both the initial submission and a revised resubmission. One asked the authors to replicate the work in a new group of study participants, and overall, the reviewers recommended rejecting the manuscript.