How can universities and journals work together better on misconduct allegations?


Retractions, expressions of concern, and corrections often arise from critiques sent in by readers, whether those readers are others in the field, sleuths, or other interested parties. In many of those cases, journals seek the input of authors’ employers, often universities. In a recent paper in Research Integrity and Peer Review, longtime scientific publishing consultant Elizabeth Wager and Lancet executive editor Sabine Kleinert, writing on behalf of the Cooperation & Liaison between Universities & Editors (CLUE) group, offer recommendations on best practice for these interactions. Here, they respond to several questions about the paper.

Retraction Watch (RW): Many would say that journals can take far too long to act on retractions and other signals to readers about problematic papers. Journals (as well as universities) often point to the need for due process. So what would a “prompt” response look like, as recommended by the paper?


Sabine Kleinert and Elizabeth Wager (SK + EW): As with so many ethical problems, this involves judgement and a balance between possible harms. Timing also depends on the nature of the evidence presented to the journal. At one extreme, if a journal receives well-documented findings from a properly organised institutional investigation which found serious problems (eg data fabrication) in a published article, it should usually issue a retraction immediately. However, if there are concerns about the quality of the investigation, or the findings are unclear or contested by the authors, then the journal may need to resolve these problems before communicating with readers. At the other extreme, Expressions of Concern (or retractions) should not be published at the first hint of suspicion (eg on receipt of an email making vague allegations). In nearly all cases, journals should communicate with the authors affected, and sometimes also with the institution, and this can cause delays. So, boring as it may be, the answer is “It depends …”.

RW: The paper recommends that institutions “notify journals directly and release all relevant sections of reports of misconduct investigations (or a summary of their findings) to any journals that have published research that was the subject of the investigation, clearly indicating which articles or manuscripts are affected” and “allow journals to quote from misconduct investigation reports or cite them in retraction statements and related publications (e.g. explanatory editorials and commentaries).” Institutions frequently use administrative excuses, such as “personnel matters,” to avoid publicizing the results of their internal investigations into misconduct cases, even if that sometimes means that researchers who commit misconduct can move freely from university to university. What can funders and even publishers do to help foster a more transparent and expedient release of the findings?

SK + EW: This is one of the main problems that CLUE identified but could not resolve. Institutions and employers in many countries are bound by law to keep disciplinary proceedings confidential, and therefore failure to share details is not just an “excuse” (however convenient it might also be to an institution that prefers not to publicise such things). And, as you point out, findings are sometimes not shared with future employers (as happened, for example, in the Jatinder Ahluwalia case), so better cooperation between institutions is also needed. Funders might be able to influence institutions or build safeguards into their contracts to ensure that both they and affected journals are properly informed in research integrity cases. While it is harder to see what publishers might do, one suggestion from the CLUE workshop was that authors might be asked to waive rights of anonymity in their contract with journals, but this idea needs more refining and probably some more legal input to know how practicable and effective it might be. However, we do hope that institutions will review their processes and see if journals can be identified as affected parties that not only “need to know” but also need to be able to act on information about, and conclusions from, investigations into research integrity.

RW: One issue, it seems to us, is that we speak of “journals” as actors when in fact we mean “editors.” How can we incentivize editors to feel that policing the literature is not just an irritating, time-consuming part of the job but in fact a critical component of the role? 

SK + EW: While editors-in-chief are ultimately responsible for their journal, they often work closely with publishing staff, especially on research integrity cases, so the term ‘journal’ is useful in covering both.  On your second point, while ‘policing’ may not be a job description that most editors would relish, many journals make considerable efforts to detect and therefore prevent problems before publication, for example by screening for plagiarism and image manipulation. They also try to educate authors and reviewers on topics such as authorship criteria and declaring conflicts of interest. The type of research integrity cases that involve liaison with institutions are, thankfully, not too frequent, so it’s hard to imagine how incentives for editors could be designed. Our personal view (not necessarily that of all the CLUE Working Group, but probably unsurprising, given our previous roles as Chair and Vice-Chair of COPE) is that educating editors and ensuring they know where to turn when faced with tough cases is the key. COPE started life as a “self-help group” for journal editors and continues to provide advice and consultation to its members and resources such as the flow-charts, which are freely available in several languages on the website.

RW: The paper recommends that “Peer reviewer reports and comments to the editor should generally only be shared with authors’ institutions with the reviewers’ express permission.” How can editors make best use of critical comments if they are limited in how they can share them?

SK + EW: Some journals publish reviews (of accepted work), in which case comments may already be in the public domain, along with the identity of the reviewer. However, in traditional peer-review systems (and for unpublished work), reviewers don’t expect that their comments, or their identity, will be shared beyond editors and authors. We therefore felt that journals should usually seek reviewers’ permission before sharing their comments or their identity more widely. However, the role of the journal is mostly to alert an institution to a possible problem, which may require investigation, so if an editor chooses not to share specific comments (rather than a paraphrase), or a reviewer’s identity, with an institution, this should not prevent the university from looking into the problem.

RW: One recommendation is that journals “judge anonymous or pseudonymous allegations on their merit and not dismiss them automatically.” Have you seen a shift in attitudes about this since 2016?

SK + EW: The frequency of anonymous or pseudonymous allegations to journals does seem to fluctuate and may be related to the activities of specific groups or websites such as PubPeer. During our time at COPE (ie up to 2012) we were aware that some universities were unable to launch investigations without a signed communication, and therefore dismissed any anonymous or pseudonymous allegations out of hand. We felt that this was unhelpful, since whistleblowers may have good reason not to reveal their identity, and the credibility of an allegation depends on the evidence provided rather than the source. COPE guidelines therefore recommend that all communications about possible research integrity issues should be judged on their merits and not automatically dismissed if the source is anonymous. We haven’t studied this systematically to know whether journal or institutional views on this have changed.

RW: Another recommendation is that institutions “develop mechanisms for assessing the validity of research reports that are submitted to, or published by, academic journals, which could be used if concerns are raised; these mechanisms should be distinct from processes to determine whether misconduct has occurred.” Is this a similar idea to what a handful of institutions have begun doing?

SK + EW: The Nature article you mention is about pre-submission screening. This was not something recommended in CLUE, although we did discuss whether institutions might create repositories of submitted manuscripts to check if changes had been made after rejection by a journal to cover up problems identified by reviewers (see Issue 6 in the CLUE document). The phrase you quote relates to a different concept and is one of the most important CLUE recommendations. We believe many of the difficulties and misunderstandings between journals and institutions occur because institutional investigations focus almost exclusively on whether an individual has committed misconduct according to strictly defined criteria. While this assessment is important for the employer, it sometimes fails to address the question of greatest importance to readers and therefore editors: whether a published research report is reliable. Publications can mislead readers because of omissions (deliberate or unintentional), weak methods of research or statistical analysis, or unavoidable errors (a famous example being a study retracted from Science because the lab had been supplied with incorrectly labelled chemicals). Our proposal is for institutions to support journals in deciding whether an article should be corrected or retracted, by focusing not on the actions and intentions of the researchers but on the article itself. As you can see, CLUE does not propose that institutions should vet articles before submission, but recommends that they should have mechanisms available “if concerns are raised”. Our hope was that such procedures would not only assist journal editors but also give answers more quickly than misconduct investigations, which, by their nature, must be exhaustive and follow due process with rights of appeal, etc., to give researchers a fair hearing.
A finding that a publication needed correction or retraction should neither prejudice nor prevent a misconduct investigation, but it should speed up the correction of the literature and may remove the need for further investigation if a clear “honest error” is identified.

RW: The paper is based on a workshop held in 2016. How confident are you that the recommendations remain relevant, in particular given changes in the landscape in the last five years, including the growth of PubPeer and other scrutiny of the literature?

SK + EW: We see no evidence that cooperation between journals and institutions has materially changed (either for the better, or the worse) since our original discussions. Tensions remain between journals’ responsibility to alert readers to proven or potentially misleading work and employers’ legal requirements to maintain confidentiality around disciplinary procedures. Even without this problem, both universities and journals need to ensure that responses are proportionate and that researchers’ careers are not harmed by vexatious allegations. Due process, which respects both the rights of researchers and employees and the needs of readers and society, can be time-consuming and certainly requires expertise and judgement.

PubPeer was set up in 2012 and was going strong by 2016. Such initiatives may have increased the frequency of readers contacting journals to raise concerns about research integrity, which only increases the need for clear guidance for editors.
