Retraction Watch readers may recall the names Jun Iwamoto and Yoshihiro Sato, who now sit in positions 3 and 4 of our leaderboard of retractions, Sato with more than 100. Readers may also recall the names Andrew Grey, Alison Avenell and Mark Bolland, whose sleuthing was responsible for those retractions. In a recent paper in Accountability in Research, the trio looked at the timeliness and content of the notices journals attached to those papers. We asked them some questions about their findings.
Retraction Watch (RW): Your paper focuses on the work of Yoshihiro Sato and Jun Iwamoto. Tell us a bit about this case.
Andrew Grey, Alison Avenell and Mark Bolland (AG, AA, MB): We first raised concerns about the integrity of work by these researchers 8 years ago. The researchers were employed at 3 institutions in Japan and 1 in the United States. They published > 300 papers, a mixture of human clinical trials and observational studies, animal trials, and reviews and meta-analyses. Drs Sato and Iwamoto co-authored about 150 papers. Publication rates and workloads were highly implausible. At least some of their publications were fabricated. Almost all of the papers co-authored by Sato and Iwamoto featured gift authorship. Ethical oversight was often lacking or not reported. Many papers were affected by other integrity problems, including impossible and implausible data, duplicate publication and text recycling/self-plagiarism, reported in the current manuscript using the REAPPRAISED publication integrity checklist. Our group has been raising concerns about these publications since 2013: to our knowledge, no publication has been assessed and shown to be valid. At present, about 120 publications have been retracted.
RW: You found that the median time to the first correction notice was 22 months from notification to the journal about issues in the paper, and that only a quarter of the papers included a correction within 12 months of notification. How would you characterize that time frame?
AG, AA, MB: We think readers deserve better, since their health (members of the public) and their practice (clinicians) and research (scientists) depend on a reliable scientific literature. When serious concerns are raised they deserve prompt attention and resolution – if delays are likely, notification of readers using Expressions of Concern should be undertaken. Despite the delays we found, Expressions of Concern were few and publishers and journals appear reluctant to use them. Often when they are used, they are not used as “placeholders” while a definitive decision is made, but are final decisions which do not let readers know whether the paper is valid or not.
We were surprised that, even after public notification of fabrication and authorship malfeasance by the Sato/Iwamoto group, time to correction increased rather than decreased. Publishers and journals declined to assess individual publications by Sato and Iwamoto unless specific concerns were raised. There are instances of prolonged delays (>11 months) between decision to retract and publication of retraction notices.
RW: You judged the content of retraction notices by three standards — REAPPRAISED, COPE guidelines, and recommendations we presented in 2015. How well did the notices meet these standards?
AG, AA, MB: We used REAPPRAISED to collate and code the concerns we raised, so we could compare the information contained in the retraction notices with the concerns reported. That analysis found that the vast majority of notices failed to include the vast majority of concerns that had been raised (and not assuaged) – the median (range) proportion of concerns raised with the journal that were mentioned in the retraction notices was 9.5% (2-49%).
We assessed the content of the retraction notices against the COPE recommendations and the Retraction Watch minimum and optimum recommended content. Overall, the notices were deficient in many areas. The COPE recommendations are very slight and limited: even if followed, the retraction notice will be uninformative. So it proved – although 75% of the notices satisfied 7 of 9 of the items in the COPE recommendations, 88% could not be graded as factual because they failed to report the concerns raised and 74% failed to clearly state who was retracting the paper. The Retraction Watch optimal set of recommendations is the most demanding, containing 17 items – only 1 in 5 retraction notices met the recommendations for 9 of 16 evaluable items in our analysis.
RW: Do you think the retraction notices you reviewed allow researchers to evaluate the work of the authors of such retracted papers? What about scholars who study the reasons for retraction and other phenomena?
AG, AA, MB: No, the notices considerably underrepresented the scale and extent of the problems that existed in the Sato/Iwamoto papers. This is particularly concerning if authorship misconduct or self-plagiarism are the only reasons that are mentioned: readers may still consider the data reliable.
For academics interested in publication integrity research, the notices are utterly unhelpful. We know that the lack of information in notices is not unique to this case, so academic research evaluating the reasons for retractions may itself be unreliable.
RW: You write that your findings, when compared to similar historical cases, “suggest that publisher processes to correct the literature in a timely fashion have not improved.” Can you elaborate?
AG, AA, MB: It is readily apparent from, for example, browsing the Retraction Watch website and perusing the literature on publication integrity that the assessment and resolution of concerns about the integrity of publications by researchers such as Joachim Boldt and Yoshitaka Fujii has been slow, incomplete and inconsistent. The Sato/Iwamoto case, now in its 9th year (though red flags were raised with journals 15 years ago), isn’t faring any better in terms of efficient resolution.
RW: What would you recommend to journals, publishers, and others involved in investigating allegations?
AG, AA, MB: Focus first and foremost on the integrity of the publications in question rather than the reason(s) for compromised integrity. Request and review raw data and relevant study documents and alert readers. Apply systematic evaluation tools. Convene and support the establishment of independent expert panels to advise on publication integrity. Invest in publication integrity, which is, after all, quality control for publishers. Assess all publications by researchers and co-authors when it is established that integrity of some publications is compromised, without waiting for concerns to be raised about each publication. Critically evaluate the quality and conclusions of institutional investigations. Be transparent – publish the concerns raised, together with the authors’ responses and journal/publisher conclusions, and the time frames. This includes the retraction and correction notices, which should provide readers with a clear understanding of all the problems identified, the processes undertaken, the responses, and decisions made. Establish a system to share information with publishing colleagues who are dealing with concerns about the same researchers. Audit the processes and report the results.
Part of the problem with the current processes around publication integrity is that the information provided to readers is so slow and incomplete, and the processes themselves not clearly separated from investigations of misconduct, so that there is stigma associated with every aspect. Even corrections for simple honest errors that anyone might make can be associated (unfairly) with stigmatization. If publication integrity assessment and resolution processes were conducted rapidly and independently, and reported in a systematic, open and transparent manner, with decisions regarding validity of papers separated from decisions about researcher behavior, perhaps some of that stigma could be mitigated as authors, journals, and publishers recognise the value in promptly correcting or withdrawing publications with compromised integrity.
Not to speak of some “journals” that never retract anything, to the point that these “journals” should be entirely retracted… Nine such “journals” are listed here: http://www.cristal.org/CHM/CHM.html
Seems like journals are very reluctant to make any of the changes suggested in this interview. A way to inform readers better right now is to post concerns (not conclusions) on PubPeer (and install their plug-in). Researchers can choose to join that conversation or not, but at least there’s some “flag” on a paper that may cause readers to think more critically about a particular paper.