Two researchers from Japan — Jun Iwamoto and the late Yoshihiro Sato — have slowly crept up our leaderboard of retractions to positions 3 and 4. They have that dubious distinction thanks to a group of researchers from the University of Auckland and the University of Aberdeen who have spent years analyzing their work. As those efforts continue, the researchers have also been analyzing how journals respond to allegations, and what effect Sato and Iwamoto’s misconduct has had on the clinical literature. We asked the three authors common to two recently published papers to answer some questions.
Retraction Watch (RW): Tell us a bit about the case you analyzed in these two papers, and what you found.
Alison Avenell, Mark Bolland, and Andrew Grey (AA, MB, AG): We’ve recently published two papers. The first, in Science and Engineering Ethics (SEE), examined how 15 journals responded to our raising concerns about duplicate publication, authorship transgressions and errors in published data. The second, in BMJ Open, took a sample of retracted clinical trial reports and looked at whether these had influenced clinical guidelines, systematic reviews, other reviews and clinical trials.
Both papers relate to a major case of research misconduct, led by two Japanese researchers, Yoshihiro Sato and Jun Iwamoto, presently third and fourth on Retraction Watch’s leaderboard. More than 70 different journals and 300 publications are potentially affected. We first submitted concerns about these investigators to a journal in 2013, based on detailed statistical and methodological analysis of a subgroup of 33 clinical trial reports from the authors. Others had written to journals as early as 2004-2007, but no action had resulted. Since 2013, while attempting to preserve the integrity of academic literature by investigating other publications from this group, we have learnt a huge amount in dispiriting detail about how publishing and academia are failing to promptly examine and correct integrity concerns.
One striking feature of the Sato/Iwamoto case was that, even in the context of known research misconduct, there was no systematic process to evaluate the integrity of all publications by those researchers. So we ended up taking on that job. In our SEE paper, we collated and presented the overlapping concerns about a set of animal trials in a structured way, allowing us to systematically assess the responses, processes and decisions of the affected journals and publishers.
The problems raised concerned gift authorship, unacknowledged duplicate data reporting, data errors and discrepancies, and failure to report funding. We found that journals’ responses were slow: for example, only half of the journals acknowledged receipt of the concerns within a month, and by 1 year fewer than half had communicated a decision. They were opaque: despite receiving a list of specific concerns, none of the decision letters addressed them completely, and most did not address them at all. And they were inconsistent: the nature and number of concerns (e.g. the amount of duplicated data) were similar among publications, yet sometimes no action was deemed necessary, while other papers were corrected or retracted.
In our BMJ Open paper, we examined whether 12 retracted trial reports had influenced clinical guidelines, systematic and other reviews, and clinical trials. We found that 68 such publications had cited the retracted trial reports, but only one had publicly identified that the retraction had happened. Of the 32 reviews and guidelines, 13 had findings or recommendations that would likely change if the retracted trial reports were removed. It’s likely that if the initial concerns raised by other researchers in 2004-2007 had been explored, the current evidence base would be different. Even now there’s no mechanism to initiate ways to mitigate the impact of retracted research on others’ work, guidelines, or policy.
RW: In one of your papers, you found that “13 guidelines, systematic or other reviews would likely change their findings if the affected trial reports were removed, and in another eight it was unclear if findings would change.” How significant were these 21 papers? For instance, were any of them used by regulatory institutions, or in ways that had a direct impact on people?
AA, MB, AG: It’s hard to be certain whether patient care was directly affected, but it is likely. Some of the affected guidelines were generated by influential organizations. One systematic review, published in JAMA Internal Medicine, reported prevention of osteoporotic hip fractures by vitamin K, but no longer showed that effect when the retracted trial reports were removed. That systematic review was the only evidence cited to support the use of vitamin K for osteoporosis in Japanese guidelines published in 2011.
US guidelines for osteoporosis published in 2007 by the Agency for Healthcare Research and Quality (AHRQ) relied entirely on affected trial reports to demonstrate that bisphosphonates prevent fractures in patients at high risk of falls, as did guidelines from the American College of Physicians. AHRQ also relied on affected trial reports to show that bisphosphonates prevent fractures in people with Alzheimer’s disease, Parkinson’s disease or stroke, and that 2.5mg risedronate prevented hip fractures. Although this dose of risedronate does not have marketing approval in the US, it does in Japan.
These publications appear to be those most likely to have had an impact on patients, but it is possible that others, such as systematic reviews by Sato/Iwamoto, could have been used by clinical groups producing guidelines or technology assessment groups in individual countries.
We are continuing to explore the impact of this case of misconduct. It’s extremely time-consuming work, without hope of supportive funding, but someone really needs to be doing this in the absence of systems to reduce the effects of misconduct. We’re working to alert affected organisations and researchers, but it was hard to do this earlier in the absence of retractions, which have taken so long to happen.
RW: You excluded “self-citing publications” from your dataset. Is it possible that might have affected guidelines or practice?
AA, MB, AG: It’s possible that self-citing systematic reviews from the authors may have been cited by guidelines and/or influenced clinical practice. So far, we haven’t come across examples of that happening, but we haven’t undertaken a systematic search, including of Japanese-language guidelines, for which we don’t have the resources. In writing systematic reviews of topics so that they could cite their own work, these authors weren’t unusual among those who have numerous retracted publications. There are at least 40 systematic reviews or other reviews led by one of the two main authors, 11 of which have been retracted in response to our concerns to date. To our knowledge none of the institutions, journals or publishers involved intend to initiate any investigations of these other reviews, and all the retractions of reviews to date have come through our raising of concerns. Clearly, this is an unacceptable situation, in which these reviews won’t be investigated unless we raise concerns. Of course, under present systems retracting a paper doesn’t stop it being cited; we badly need to change processes, from researchers to publishers to reference-management software to indexing services, to prevent that happening.
RW: You noted that only 27 of the 33 papers you flagged in a 2016 study had been retracted by May 2019, with only one more retracted by the time your other article, “Assessing and Raising Concerns About Duplicate Publication, Authorship Transgressions and Data Errors in a Body of Preclinical Research,” went to press. Did such a delay in journals taking action surprise you?
AA, MB, AG: When we first submitted our concerns about the 33 trial reports to JAMA in March 2013, we were naively hopeful that retractions would quite quickly follow. We soon learnt otherwise. Journals were, and are, extremely reluctant even to publish expressions of concern – JAMA didn’t do so for more than 2 years, and even when they did, the formal notice provided journal readers with no useful information about the case. We know that there are many long-term investigations underway, including other cases we are involved with, in which no expressions of concern have been published after more than 3 years. Our situation is not unique: long delays, running into years, in posting expressions of concern and retractions seem to be the rule. Even when we have been told that a retraction will occur, it can take months for the notice to appear online.
Long delays in decision making and action have led us to think that the assessment of publication integrity should be the sole concern of publishers and journals, who should not await the determination of misconduct before they act. We need better mechanisms for the efficient assessment of publication integrity.
RW: Did you find that journals responded differently based on the type of problems you brought to their attention? If so, why do you think such a difference exists?
AA, MB, AG: In the SEE paper, the types of concerns raised with each journal were similar, so the analysis can’t address that question. More generally, it has not been our experience that either the type or the number of concerns raised predicts the speed or nature of the response. One Elsevier journal has been sitting for more than 3 years on several pages of concerns about 8 publications, covering unethical research, impossible data, highly implausible participant recruitment, and failure of randomization.
RW: In one of your articles you looked at journal responses to authorship issues. Tell us why you think this is worthy of focus.
AA, MB, AG: The authorship concerns we raised were of gift authorship, for which there was very strong evidence for the majority of the publications, in the form of an affidavit from one of the authors. Gift authorship is plainly dishonest and violates ethical and publishing standards established more than 30 years ago. It models unethical and dishonest behaviour to colleagues and junior staff. When it involves more than one co-author, it raises the question as to who, if anybody, actually did the work reported. It is likely often not recognised, but when it is clearly apparent it should be actioned. We were (and continue to be) bemused by the indifference displayed by many journals and publishers to this problem. They appear not to respect their own standards.
RW: One of your studies addressed funding statements in the problematic studies and found that none of the studies included them. What does that suggest?
AA, MB, AG: Preclinical research is costly. We think the absence of reported funding for a set of preclinical trials that involved 992 animals and various interventions and assessments is a ‘red flag’: how can the work be done without adequate resources? Most funders, quite reasonably, like to be acknowledged, and most investigators do so as a matter of course.
RW: You write that “The investigation of research integrity might be improved by the establishment of an independent body whose specific responsibility is ensuring the integrity of the published scientific literature.” What would such a body look like, and how might it be different from the Committee on Publication Ethics?
AA, MB, AG: COPE provides general guidance for editors on how to respond to concerns about publication integrity, but not on how to assess publication integrity. Nor does it become involved in either the assessment of publication integrity or, in our experience, the timely and accurate resolution of concerns. We know that those who commit misconduct often do so repeatedly. COPE guidance doesn’t reflect the fact that wider investigations of other publications and authors may be needed.
Lastly, COPE just provides guidance. What is needed is a body independent of journals and publishers that actually makes decisions that can be acted upon. Establishing such an independent body would benefit science by resolving some of the problems that exist in ensuring publication integrity, e.g. conflicts of interest within institutions, by coordinating investigations of a researcher’s body of work and wider enquiries, and by addressing inconsistency and lack of transparency among journals and publishers.
Ultimately, publication integrity is the responsibility of journals and publishers, as publications are their ‘product’. They profit greatly from their publishing activity. Investing a greater proportion of that profit in a robust and transparent process for ensuring quality control would help all stakeholders, the most important of whom are the members of the public, who expect and rely on publication integrity to guide those who use that evidence, such as clinicians, other researchers, and policy makers.
If journals applied to reports of possible problems with papers the same operational criteria that already structure peer review, we would be better off:
1) Acknowledge criticisms in a timely way.
2) Deal with the problem within a short, reasonable time.
3) Respond to each point raised by the people reporting the problems.
4) Make a clear, transparently communicated decision.
Journal editors demand this from authors every day; they should be able and willing to live by their own required procedures.
I know people go into academia to avoid real life. But there need to be negative consequences for those who attempt to publish falsified research. Once a falsified paper is published and gets cited, it is part of the body of knowledge forever. We are awash in false information. The publish-or-perish model and huge financial incentives for clinical liars, combined with zero consequences for lying, mean a lot of stuff is just getting thrown at the wall.