Retraction Watch readers may recall the name Yoshihiro Sato. The late researcher's retraction total — now at 51 — gives him the number four spot on our leaderboard. He's there because of the work of four researchers: Andrew Grey, Mark Bolland, and Greg Gamble, all of the University of Auckland, and Alison Avenell of the University of Aberdeen, who have spent years analyzing Sato's papers and have found a staggering number of issues.
Those issues included fabricated data, falsified data, plagiarism, and implausible productivity, among other problems. In 2017, Grey and colleagues contacted four institutions where Sato or his co-authors had worked, and all four started investigations. In a new paper in Research Integrity and Peer Review, they describe some of what happened next:
By November 2018, three had reported to us the results of their investigations, but only one report was publicly available.
Grey and colleagues had access to two of the other reports, and found that “investigations covered 14%, 15% and 77%, respectively, of potentially affected publications.” They then used a 78-item checklist that we, along with C. K. Gunsalus of the University of Illinois, published in JAMA last year to guide reviews of such reports:
Only 4/78 individual checklist items were addressed adequately: a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than evaluating the integrity of publications.
What’s more, they found an alarming lack of response from government agencies:
The [U.S. Office of Research Integrity] ORI did not acknowledge receipt of emails sent in October and November 2017 outlining concerns about research conducted at the US institution. Our emails to the [Ministry of Education, Culture, Sports, Science and Technology] MEXT in Japan in November and December 2017 reporting the concerns about research conducted at the Japanese institutions, including ones written by a Japanese colleague, either failed to elicit a response or generated brief unhelpful replies, promising a response that has not yet materialised.
The findings, Grey and colleagues write,
identify important deficiencies in the quality and reporting of institutional investigation of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee integrity of research conducted by their own employees.
We have made more than a dozen such institutional reports available over the years, and will continue to do so.
What can we learn?
Gunsalus, who was one of the paper’s peer reviewers, noted that two of the three institutions reopened their investigations after Grey and colleagues raised concerns about the findings:
On the one hand, that’s encouraging, on the other, why was it necessary?
And Gunsalus was perplexed that institutions were not investigating all of the papers whose potential problems were brought to their attention.
Gunsalus said that Grey and colleagues “made useful observations about potential clarifications and improvements to the checklist during the review process,” which she said will be incorporated into a new version in the future.
This is important work. We hope the checklist will be applied more widely over time, which we believe will lead to improved institutional investigations.