As Retraction Watch readers may recall, the UK’s House of Commons Science and Technology Committee has been holding an inquiry into scientific misconduct for well over a year. During that inquiry, we submitted written evidence including some statistics about how the UK’s retraction rate compared to other countries, and our Ivan Oransky gave oral testimony late last year.
Today, the committee released a report of its findings, along with several recommendations. Among them is a recommendation to “establish a national Research Integrity Committee to provide a means of verifying that university investigations into research misconduct are handled appropriately.”
Norman Lamb, chair of the committee, said in a statement:
While most universities publish an annual report on research integrity, six years from signing a Concordat which recommends doing so it is not yet consistent across the sector. It’s not a good look for the research community to be dragging its heels on this, particularly given research fraud can quite literally become a matter of life and death.
We asked C. K. Gunsalus, who serves as director of the National Center for Professional & Research Ethics and who has studied institutional integrity for decades, for her take on the report.
The House of Commons Science and Technology Committee’s Research Integrity: Sixth Report of Session 2017–19 is both good news and bad news. The good news is that the report crisply lays out a number of important challenges to research integrity, not only in the UK but for all research communities internationally:
- prevention through effective education is a good investment;
- the environment in which researchers work is just as important as individual proclivities and must be assessed and improved;
- serious research fraud is relatively rare and yet can be devastating when it occurs;
- there are major gaps in a system that relies upon voluntary institutional self-policing, membership, and transparency.
The bad news is that this is not the first capable report to make these findings. Many reports have done so, often less clearly and succinctly than this one, in a range of settings over recent decades, without much resulting change or action to show for it.
This is not a criticism of this strong and clear report. It is a criticism of the research community’s ongoing preference for analyzing and discussing known challenges while continuing to do more of the same. The report documents that around a quarter of universities did not take even the simplest steps to fulfil the basic recommendation of producing an annual report on research integrity, which has been required since 2013 to receive central research funding in the UK.
Earnest and well-meaning efforts and exhortations are not sufficient if there are no requirements for action and no follow-through. Nor do they help where there is no effective consequence for institutions that fail to police their own, or that act in their own interest and quietly let wrongdoers resign, only for them to start anew and commit the same acts elsewhere, as noted in the report and in a recent piece at Undark by Alison McCook of Retraction Watch. At present, institutions judge their own for reasons of cost, practicality, and accessibility: universities generally employ the scientists, enroll the students, own the facilities, and hold the funding for any research that is questioned. Yet we all know that there are profound conflicts of interest in judging your own.
At the National Center for Professional & Research Ethics, we have been engaged in projects to improve the transparency and trustworthiness of institutional self-policing by seeking out, posting, and proposing standards that institutional reports should meet. This approach is based on the model of peer review, which makes research rigorous, and on what is known about overcoming conflicts of interest and the cognitive biases that tend to favor members of our own “in group.” Along with Retraction Watch, we have published a sample checklist as a starting point, and we have made our assessment of one report public, with more to come.
We hope to create a repository of reports on reviews of research misconduct allegations, and to build a community of qualified reviewers to share assessments of them. We hope and believe that shared, transparent community standards applied by peer reviewers will contribute to stronger reports, and ultimately improve public trust. Such transparency is already mandated in Japan, where findings of research misconduct are listed and publicized along with institutional responses. That resolve is rooted in the principle that misconduct “violates the true nature of research activities… is a betrayal of science itself, undermines faith in science, and hinders scientific progress.”
We have started these projects while we await meaningful community change, change we hope might come from the implementation of the recent NASEM Fostering Integrity report, which recommends creation of a Research Integrity Advisory Board (RIAB), or from a plan for the signatories to the UK’s Research Concordat to meet to discuss the new report’s recommendations. As Nature has noted, the functions of the recommended RIAB are much needed: it would bring a wide range of stakeholders together, create a clearinghouse of resources, and address the conflicts of interest institutions face in reviewing the conduct of their own, more effectively and credibly for the good of the community as a whole. The proposed RIAB would serve a different function from the committee recommended by the newest report, which seems more like a national appeals body for institutional processes gone awry.
While the appeals committee suggested by this report may be a step in the right direction, it is also arguably too little, too late. Given our community’s accumulating record, we have come to believe that only bringing more sunlight and transparency to reports can produce the rigor they need to be credible. We acknowledge there are challenges in establishing a system in which institutional reports are consistently transparent, that is, made public, in a way that is even-handed, fair, and does not create new, perverse incentives for gaming the system. It is past time to tackle these challenges head-on.
Similarly, we must do more than simply talk about the importance of the research environments our institutions provide. NCPRE has created and administers online the only validated tool available for assessing the integrity of research environments, the Martinson-Thrush Survey of Organizational Research Climate (SOURCE). Institution-wide measurement of research environments, which are profoundly important influences on individual choices, produces data that institutions can compare against a growing international benchmark database of anonymized results from peer institutions. If national or international bodies ever come into being, perhaps these functions could be incorporated in some fashion into their portfolios. In the meantime, we have dived in because the need is so pressing.