When it comes to delays in correcting the scientific record — and less-than-helpful retraction notices — it’s not uncommon to see journals blaming universities for being slow and less than forthcoming, and universities blaming journals for being impatient and not respecting the confidentiality of their processes. So in 2021 and 2022, a group of university research integrity officers, journal editors and others gathered to discuss those issues.
In a new paper in JAMA Network Open, the group recommends specific changes to the status quo to enable effective communication between institutions and journals:
(1) reconsideration and broadening of the interpretation by institutions of the need-to-know criteria in federal regulations (ie, confidential or sensitive information and data are not disclosed unless there is a need for an individual to know the facts to perform specific jobs or functions), (2) uncoupling the evaluation of the accuracy and validity of research data from the determination of culpability and intent of the individuals involved, and (3) initiating a widespread change for the policies of journals and publishers regarding the timing and appropriateness for contacting institutions, either before or concurrently under certain conditions, when contacting the authors.
We asked Susan Garfinkel, the associate vice president for research compliance at The Ohio State University and the corresponding author of the article, some questions.
Retraction Watch (RW): The group recommends “broadening institutions’ interpretation of the federal concept of need-to-know.” Did it consider broadening beyond journals?
Susan Garfinkel (SG): The working group was composed of RIOs and journal editors and publishers, so the discussions focused on issues relevant only to institutions and journals. The recommendation to broaden the interpretation of the federal concept of “need to know” is a significant change for institutions to adopt and implement. The working group was very cautious to weigh the effect of such a change on the confidentiality requirements of research misconduct proceedings and therefore did not consider broadening it to include other bodies.
RW: Did the group consider changing the practice at many institutions of waiting until the ORI or NSF OIG has made a finding before releasing the findings of their own investigations?
SG: The working group focused solely on false, fabricated, or plagiarized data and not on institutional findings of research misconduct. We were cognizant that appropriately releasing information about false, fabricated, or plagiarized data will adhere to the confidentiality requirements in federal regulations and institutional policy, i.e., the requirement to protect the identity of respondents and complainants to the extent possible. The working group focused on the timing for institutions to provide information about false, fabricated, or plagiarized data to journals, but did not consider the timing or release of institutional or federal findings of research misconduct.
RW: Who should decide what constitutes “additional information” in this recommendation? “Limit additional sharing of any information provided by a journal to the extent possible.”
SG: This recommendation refers to limiting sharing of information with other parties. For example, the working group recommended that an institution share complete and sufficient information about false, fabricated, or plagiarized data or text with a journal so that a comprehensive correction or retraction can be published. But to maintain confidentiality, this recommendation advises that journals should restrict sharing the information provided with any other party who may use it for other purposes.
RW: The group recommends that journals “should not expect institutions to share information about the involved researchers, such as whether a researcher is currently, or has recently been, investigated for serious concerns including misconduct or specific fact-finding that occurred in a research misconduct proceeding.” Why not?
SG: The working group made this recommendation based on a few facts. First, the confidentiality provision of the federal regulation (42 CFR §93.108) states, “Disclosure of the identity of respondents and complainants in research misconduct proceedings is limited, to the extent possible, to those who need to know, consistent with a thorough, competent, objective and fair research misconduct proceeding, and as allowed by law.” Second, the primary concern of journals is the integrity of the data or text submitted or published, and not who committed potential research misconduct. Third, it is the institution that must focus on who was responsible for research misconduct, which is a difficult and time-consuming process. Thus, to enable earlier communications between institutions and journals, this recommendation proposes that journals appreciate the limitations under which institutions operate and their inability to share certain information in an ongoing matter.
RW: The article describes two problems – delays and transparency – which are closely linked but distinct, and journals and institutions have different pressures here. How do you think this initiative will help reduce delays, particularly in retractions, expressions of concern, and correcting the scientific record?
SG: Retractions, expressions of concern, and corrections are currently delayed because journals usually are not provided with any information until research misconduct findings are made by institutions. However, institutions may identify and confirm early in a research misconduct proceeding that data are false, fabricated, or plagiarized. The recommendations from the working group propose that when false, fabricated, or plagiarized data or text are identified and confirmed, that determination can be shared with the journals to allow earlier actions to be taken by the journals. In addition, the working group proposes that institutions provide complete and specific information limited to the data falsification, fabrication, or plagiarism, which conforms to the confidentiality requirements in federal regulations and institutional policy. These recommendations should allow for shorter delays in correcting the literature and more transparent journal notices.
RW: Decoupling data validity from culpability and intent has been suggested before, as the group notes. But it means retraction notices to the scientific community will be even more likely to be incomplete, since journals are extremely unlikely to update prior notices even if an investigation report is later made available. How do you propose to fix this problem?
SG: The decoupling of data validity from culpability and intent should allow retraction notices to be more complete regarding the data. The recommendation is that the institution provide the journal with complete information to understand the specific falsification, fabrication, or plagiarism of data or text. For example, rather than limiting a notice to include only that a specific figure is false, the working group recommends that the journal be provided with information explaining why the data or text are at issue, which can then be included in their notices. This will give readers a better understanding of the concern leading to a correction or retraction.
RW: We and others have noted flaws in many institutional investigation reports. How will these recommendations address that problem?
SG: This working group did not discuss, and its recommendations are not relevant to, the quality of institutional investigation reports.
RW: Do the imperatives of public universities here differ from those for private institutions? Use of NIH/NSF funds? Can taxpayer/public interests outweigh expectations of privacy in some instances?
SG: The recommendations of the working group would apply equally to public and private institutions.
I get why they want to focus on data quality in these publications rather than who was at fault, but what if it’s multiple publications with the same author? Would you really still want to focus on only one pub at a time? Why would you continue to publish someone who has added fabricated or plagiarized content to the scientific record multiple times? I think the author matters when there are repetitive issues.
“I think the author matters when there are repetitive issues.”
The authors are how you tell there are repetitive issues.
I thought that Susan Garfinkel was tight-lipped. Neither the universities nor the journals really want to do much about the problem. It’s not the lines of communication that are at fault but the institutional culture of wanting as many publications as possible (or a metric of number × impact factor). Without a change of institutional culture, it’s like worrying about the commas and full stops when the meaning is the most important.
At least Susan Garfinkel got a publication out of it.
Very thoughtful piece by many great longtime colleagues. Best wishes for achieving the appropriate changes and cooperation you have recommended in our communities of RIOs and editors.
Journals and universities could start by listing an email address on their websites for reporting. I know some do, but not all. It’s frustrating and time-consuming hunting down an email address for ethics and reporting. Some of the biggest don’t publicise it, so you have to guess which executive to contact and then dig up some place they have left their email if you are going to have any hope of your email being received.