When misconduct occurs, how should journals and institutions work together?

Elizabeth Wager

When the World Conference on Research Integrity kicks off at the end of this month, one topic that will be on attendees’ minds is how journals and research institutions should collaborate when investigating the integrity of individual publications. That’s because this week, a group of stakeholders from institutions and the publishing world released draft guidelines on bioRxiv for how this relationship might work, dubbed the Cooperation And Liaison Between Universities And Editors (CLUE) guidelines. We spoke with first author Elizabeth Wager, Publications Consultant at Sideview and a co-Editor-in-Chief of Research Integrity & Peer Review (as well as a member of the board of directors of our parent non-profit organization).

Retraction Watch: The Committee on Publication Ethics (COPE) has issued guidelines for how research institutions and journals should cooperate when investigating concerns over research integrity. What do the CLUE guidelines add to that discussion?

Elizabeth Wager: The COPE guidelines are a great start, but we realized they didn’t answer all the questions. For example, institutions were concerned that if journals always asked authors for an explanation before alerting the institution, this might give the researchers a chance to destroy or tamper with evidence. We also wanted to learn more about differences around the world, to understand why some institutions were able to share reports from investigations while others maintained these were strictly confidential.

RW: In what ways do the CLUE guidelines differ from what COPE recommends?

EW: The CLUE guidelines suggest that journals need to think about times when they should contact an institution directly, either before or at the same time as alerting the authors. This would only be in rare cases where the journal had strong suspicions of data fabrication or falsification, but they need to recognize that this might happen.

We also address the problem of concerns arising about research done some time ago, and the need for institutions to maintain data archives and for journals to keep records of peer review. Typical practice on data retention varies, but for clinical data in the US, the current recommendation is only six years.

With cloud storage, space shouldn’t be an issue, so ideally records should be kept permanently, and certainly for at least 10 years. Journals often hit problems when investigators say they can’t supply data because it has been destroyed. We feel institutions and funders should take responsibility to make sure this can’t happen, by making sure there is proper archiving.

Another new proposal is that journals could ask authors for contact details of their university’s research integrity officer when they submit a manuscript. We don’t envisage these details being published, just kept on file in case they’re needed. This is a response to the problems journals often report of finding the right person to contact. We also think it might encourage universities to appoint such people and publicize their role, both externally and to researchers.

RW: What are your more radical proposals?

EW: I think our most radical suggestion is that institutions should develop new systems designed to assess the reliability of a publication. This wouldn’t prevent investigations into individual researchers designed to show whether they are guilty of misconduct, but would be quite separate from them. The CLUE guidelines recognize that publications may be unreliable due to honest error, or sloppy science, but the current system — focusing on narrow definitions of misconduct — doesn’t address these questions. We hope that processes for evaluating reported research could be much quicker (and cheaper) than misconduct hearings. That would also help journals alert readers to problematic publications sooner.

RW: You suggest that institutions provide journals with relevant sections of reports of misconduct investigations – what is the typical practice now?

EW: This is so varied, it’s not really possible to say what’s typical. In some parts of the world, institutions (or research integrity bodies) publish reports — Retraction Watch contains several examples, such as the investigations into Stapel (in the Netherlands) and Fujii (in Japan). Some national bodies also release reports, e.g., in Denmark and the US, but in other parts of Europe, such as the UK, universities often won’t release reports.

Editors also say that universities sometimes share a report but then forbid the journal from citing or quoting from it, which can make writing an informative retraction notice tricky.

RW: We’ve published a few stories recently that show how long it can take for journals to act on a paper that an institution has told them is problematic – meaning, it should be retracted or corrected. Do the new guidelines offer any suggestions for how to speed up that process?

EW: Yes, we hope our suggestion of investigating publications rather than people will speed things up considerably. Currently, journals may have to wait not only for an investigation to conclude (which can take several years) but for an appeal process on top of that. And if a university can’t even say whether it IS investigating somebody, the journal may not feel it can issue an Expression of Concern. Being able to judge the validity of an article without having to decide whether a researcher committed misconduct could really speed things up.

Another thing we propose is that institutions should contact journals directly rather than relying on researchers to do this. This would apply both after an investigation and if our new system for assessing publications works out.


13 thoughts on “When misconduct occurs, how should journals and institutions work together?”

  1. The guidance is good.

    However, the real problem at the institutional compliance office/journal interface is when the institution chooses to stonewall, or actively prevents the journal from correcting misconduct.

    This happens more often than not, particularly when too-big-to-fail “superstars” with robust grant/contract support are involved.

    1. Glad you like the guidance. How could we make it even better to prevent the problem you mention?

  2. I, for one, would welcome institutions getting out of the way in some cases. Authors (PIs) submit papers by direct liaison with the journals, so why should the removal of papers be any different?

    Editors have the authority to declare what’s allowed into a journal, and as authors we’re perfectly OK ceding this authority to them at publication time. Why, when it comes to removing papers, do the editors go all weak-kneed, and the authors now insist on hiding behind a closed-door institutional investigation?

    There’s a much simpler path… a journal becomes aware of problems in a paper, approaches the author to address the issue, and, if not satisfied with the answer, retracts. The addition of an RIO to this rather simple transaction often causes obfuscation and delay.

    The guidelines need to focus on how journals should interact with authors (e.g. setting deadlines for provision of original data when asked). The involvement of RIOs should be limited to extreme cases, and should be heavily policed due to the entrenched COIs. A lot of problems could be solved if the journals simply “grow a pair”, instead of passing off their responsibility as knowledge gatekeepers onto heavily conflicted Deans and other college folk.

    1. Paul’s comment identifies a weak point in the whole process. What needs to be recognized is that journals hold conflicting responsibilities. They serve academic institutions and, by proxy, their funders in evaluating the veracity of research results. In that regard, they are an independent contractor for the university and funding agency. Independent of this, journals serve the public as the publisher of the scientific record. Often these responsibilities are aligned. Occasionally they are not.

      When there are questions about the veracity of the research journals are publishing, journals face a conflict in whom they serve: the institutions that sign the check, or the public. These same conflicts are faced by financial auditors. And doctors. Even lawyers. They all must serve their employers, but are also guided by principles that can supersede this relationship.

      One way to view some of these problems is to ask whether the journal would have published the research in the first place if the problems that came to light post-publication had been known initially. If not, then why should there be a hysteresis in allowing research into the scientific record? Why should the bar be higher to maintain something in the record than to allow it in initially?

      In the end, it is human nature to have a desire to defer to a higher power. It reduces our freedom, but simplifies our decisions. Paul’s “grow a pair” comment recognizes this in a less sophisticated, but just as accurate, way. Journals need to clarify, for themselves and others, whom they serve under what conditions.

    2. Thanks for your comment. We’ll discuss all the comments we receive among the group. My personal view is that many journals are not equipped to validate publications beyond traditional peer review. I suspect the ability may vary between disciplines and also depend on the complexity (and availability) of underlying data. In some cases, it may be reasonable for a journal to request source data and check it … but not in others. While I agree that institutions have conflicting interests, I think they are generally better placed than most part-time academic editors, and therefore journals, to do an investigation. One thought — could funders play a role in this?

      1. Here is one of the cruxes of the matter… You wrote, “I think they are generally better placed than most part-time academic editors”.

        NSF + NIH budgets are over $40B. And all that keeps the scientific record they are funding from being polluted with bad science is volunteer reviewers, “part-time academic editors”, and conflicted, poorly-trained administrators.

        I do agree with you. Funders need to “play a role” in this. In the US, all investigations of potential misconduct in federally-funded research need to run through independent, federal offices like OIG and ORI. That’s one role funders need to begin to play.

        But journal editors and groups like COPE and CSE need to stop passing off responsibility. They need to stop being complicit. Their role in the future of science is not in making PDFs. Anyone can do that.

        So, what should their role be? Can you imagine Deloitte and Touche being run by part-time accountants? If they were, who would ever rationalize that corporations are best placed to investigate their own potential malfeasance because the accountants are part-timers? No, the answer would be to professionalize auditing.

        Same holds for science. Professionalize peer review. Create standards. Enforce them pre- and post-publication. Spin off PDF-making to some mega-corporation or trust. And then finally concentrate on professionalizing scientific review.

  3. The driving force will come from outside: PubPeer, Retraction Watch. The most effective use of one’s time is to post at these sites and convince others.

    1. Since Retraction Watch reacts to retractions, I don’t see how it can be a driving force in investigating problematic articles. I know commentary sites such as PubPeer have raised concerns about publications, some of which have resulted in retractions and corrections. However, I think there is still a need for detailed examination by a properly constituted group (of some kind), and anonymous commenting creates problems of possible conflicts of interest. So I think sites such as PubPeer may act as first alerts, but they do not replace the need for journals and institutions to liaise.
