“At a time when scientists and scientific research are already being criticised by persons who identify science with technology and who deplore some of the consequences of technology, dishonesty among scientists causes unease among scientists themselves and regretful or gleeful misgivings among publicists who are critical of science.”
Daryl Chubin wrote that in 1985 — a time when institutions we now take for granted, like the U.S. Office of Research Integrity, did not yet exist. We asked him to reflect on what has happened in the intervening four decades.
The phrase “misconduct in research” today is a quaint reminder of how much science has been captured by for-profit, politicized, international interests. As a four-decades-removed social researcher of misconduct, I marvel at how an investigation industry has emerged to monitor, analyze, report and decry the mischief around us. This “watcher community” represents an industry in an era of science most of us never envisioned.
In the days before the Office of Research Integrity, many accused researchers and their academic institutions were grasping for an accountability structure that was fair to all parties, adhering to due process, and swift in its resolutions. Good luck! Today, the headlines in Retraction Watch reflect a publishing industry seemingly under siege: awash in retractions, plagiarism, AI mischief, undeclared conflicts of interest, whistleblowing, and a host of ills dizzying and disconcerting to degrees never seen before.
Retraction Watch monitors an industry ever more self-conscious about misdeeds in research, from analysis to interpretation to reporting. By setting the threshold low, it focuses on misdeeds that may be rare in a particular field, but substantial when aggregated across fields. Yes, there is a risk of overgeneralizing from statistical anomalies, but readers care about violations that sully “their” field.
Within this environment of deceit and suspicion, I worry about besieged editors and the effect on collegial relations between known and unknown authors and reviewers. I despair for a peer review system sagging under the weight of uncompensated labor, unrealistic deadlines, and the ever-higher stakes of publication-dependent outcomes such as sustained funding and tenure.
In the current AI-drenched landscape, some, such as Bruce Macfarlane, argue that the traditional meaning of originality and priority (enshrined in Merton’s 1942 idealized norms of science) has been displaced by a new ethos of institutional norms focused on egoism, capitalism and advocacy. Hence the elevation of findings deemed clinically robust and bankable.
Within this rhetorical context, open-access publishing has traded originality for affordability. Such publication “opportunities” have disadvantaged many younger scientists/scholars and those at institutions without the wherewithal and infrastructure to contribute and compete. Then there are the now-exacerbated normal biases — unconscious and conscious — that can favor article rejection over acceptance while subordinating science content to assumed author attributes (gender, race, academic pedigree). The stakes have never been higher.
As a policy practitioner would say, the “unintended consequences” of publishing now haunt the scientific landscape. From the perspective of an observer, not a participant, I offer the watcher community some prescriptions (and a few questions) without solutions (or answers):
- Research should not be a policed activity. A policy that permeates practice can regulate or undermine it. Is the trust-based publication honor system, perhaps always a fiction, now eclipsed by extrinsic interests or actually subverted by profit motives?
- Journals should be dissemination devices (depositing information, data and interpretation) that, in the process, bestow credit, and sometimes blame, via retractions citing malfeasance by authors. Journals were not created to be gamed or treated as a resource for trading, buying or selling. Owners, however, will often try to gain some form of advantage.
- Publication metrics should capture quantitatively what a community values. Are citation counts and a 20+ h-index the best we can do? (A sketch of how the h-index is computed follows this list.) Can those numbers be evaluated without knowledge of individual and community contexts? What, then, is qualitatively valued: English-language articles, authors sponsored by Fortune 500 companies, sustainability of a field?
- The frequency of scandals involving fabrication and misappropriation should be vanishingly small. But the magnitude of these relatively few misdeeds can have profound real-world consequences for careers, funding, perceptions and confidence in “scientific progress.”
- Peer review should be recognized as service to the community. Reviewers who are specialists in the field soliciting their reviews serve as a proxy for community standards, at once evolving and validating them. Some of the best analysis is done in this behind-the-scenes role, yet it remains largely unrewarded. Reviewers who lack that subject expertise, by contrast, perpetrate a fraud of their own.
- Anonymity protects the identity of both the innocent and the guilty. It can preserve or dilute candor, and it can reinforce, distort or calcify prevailing community judgment. If participants in the publishing industry differ on the value of anonymity, what other values should be compromised or honored? More experiments are needed to examine the marriage of ethics to practice.
- Policy encompasses at least a triad of stakeholders: the publishers who own the journals; the editors who steward them; and the federal agencies whose funding enables authors and whose oversight regulates their misbehavior. The members of this triad too often appear to compete with one another rather than collaborate. Shared objectives for the enterprise are elusive. How can policymakers broker more constructive relationships?
- The watcher community should be mindful of both “lab culture” and the individual aspirations of researchers. What motivates watchers’ behavior: documenting dimensions of misconduct, establishing accountability for the accused, and promulgating rules for various research communities to adapt? Can watchers serve as honest brokers, or must they inevitably become partisan by virtue of their investment in the publishing industry?
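For readers who don’t traffic in these metrics, a minimal sketch of how the h-index mentioned above is computed may help; the function name and citation counts here are invented for illustration, not drawn from any bibliometric library.

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has fewer citations
    return h

# Five papers with these citation counts yield h = 3:
# three of them have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

The sketch makes the concern concrete: the metric collapses an entire publication record into a single integer, with no room for the individual and community contexts the question above asks about.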
These reflections derive from nearly a half-century of my own publishing, editing, and reviewing. I may be naive, unrealistic, and locked in an earlier era. Just as Merton’s norms afforded 20th-century scientists a rhetorical device to portray their research behavior as lofty and merit-based, Macfarlane’s formulation stresses the darker underbelly of scientists’ competition and avarice in the 21st. Both capture kernels of truth. Deceit and fallibility surface in publications as opportunities and situational pressures mount.
Thus, science needs a policy analysis that transcends any one research field or model of publication. My reflections here pose fundamental issues of purpose: what is publishing for, what do producers and consumers of knowledge hold to be sacrosanct, and what do they consider fundamental behavior for advancement of their field? No doubt some of these questions are ongoing topics of debate in various communities, countries, and journals.
Can the watcher community — which should include you and me — do more than better report the perversions of scientific publication? Indeed, how can watchers meet the moment and increase the promotion of better practices?
Daryl E. Chubin is an independent consultant living in Savannah, Georgia, a former federal staff member, and the Founding Director of the American Association for the Advancement of Science Center for Advancing Science & Engineering Capacity.
“Can the watcher community — which should include you and me — do more than better report the perversions of scientific publication?”
Yes! Include as many people as possible.
“Report the perversions of scientific publication” is what the system allows for. You can report what seems unlikely to be real at PubPeer. The system does not provide arenas to discuss how things might be otherwise. We are stuck with what there is. Perhaps science could be decentralised away from universities and institutions, those with the resources. But universities, institutions, and funding bodies are not going to be very receptive to that notion, as their salaries depend on the present system. We are locked into a business model in which the different players are very unlikely to change anything.
Only by reporting “the perversions of scientific publication” can watchers hope to gain any traction. The rest is all pie in the sky.
“I marvel at how an investigation industry has emerged to monitor, analyze, report and decry the mischief around us. This ‘watcher community’ represents an industry in an era of science most of us never envisioned.”
I didn’t realise that my hobby made me part of an industry.