‘Anyone can do this’: Sleuths publish a toolkit for post-publication review

For years, sleuths – whose names our readers are likely familiar with – have been diligently flagging issues with the scientific literature. More than a dozen of these specialists have teamed up to create a set of guides to teach others their trade.

The Collection of Open Science Integrity Guides (COSIG) aims to make “post-publication peer review” more accessible, according to the preprint made available online today. The 25 guides so far range from general – “PubPeer commenting best practices” – to field-specific – like spotting issues with X-ray diffraction patterns.

Although 15 sleuths are named as contributors, those we talked to emphasized the project should be largely credited to Reese Richardson, the author of the preprint.

Richardson, a metascience researcher at Northwestern University, told us the motivation behind the project was to “affirm that anyone can do post-publication peer review, especially working scientists.” 

“We wanted to demonstrate how far just a few tidbits of specialized knowledge — like how to check energy dispersive X-ray spectra, what to look for in cases of suspected plagiarism and what softwares are available for image forensics — can go towards spotting integrity issues in the scientific literature,” Richardson said. 

Jennifer Byrne, who contributed to the biology and medicine sections, told us she thought the guides would be most helpful for early-career researchers. They are “often the people who are closest to the literature and they’re also people that might be not aware of these issues already,” she said.

“I tend to think that most people will start within their own fields because they will apply these guides to research they’re already reading,” Byrne, a molecular oncology professor at the University of Sydney, added.

People might use the guides differently depending on whether their focus is experimental or theoretical science, Byrne said. In the experimental sciences, spotting issues in the literature could cut back on unnecessary follow-up studies; reproducing an experiment could take “weeks or months” of work. “Reading one of the guides could stop someone from doing that because they realize before they start that the experiment wasn’t done correctly,” she said.

For theoretical papers, plagiarism is one of the most prominent issues, Byrne told us. The general guides, like “Citations” and “Formulaic research,” might be more helpful in these cases, she said.

Many of the sleuths we spoke with shared concerns about making the guides public, but maintained that the outcome of COSIG would ultimately be positive.

Elisabeth Bik told us the team initially worried providing the “red flags” of paper mills was “basically revealing how we detect them. If we’re telling them what we’re paying attention to, in a way, we’re telling them how to fraud better,” she said. (A note that Retraction Watch administers the Elisabeth Bik Science Integrity Fund.)

However, Bik noted the information is already available in PubPeer comments, and the purpose of the guides was to centralize it.

Anna Abalkina, a research fellow at Freie Universität Berlin in Germany and creator of the Retraction Watch Hijacked Journal Checker, called the current situation with both paper mills and scientific fraud “critical.”

“The number of activists reviewing problematic papers is far too small given the overall scale of the issue,” Abalkina said. 

But Byrne said journals aren’t responding to issues with papers even now, so “the risk is that the backlog will just get more acute.”

Richardson said he plans to add more guides and that he and his colleagues hope COSIG will be “a constantly-expanding resource.” The list of contributors includes three “Maintainers” who will “review and edit contributions, seek feedback on contributions/guides from other members of the community and from subject-matter experts and manage official updates to COSIG,” he said.

Solal Pirelli, a maintainer and former Ph.D. student at École Polytechnique Fédérale de Lausanne in Switzerland, told us his role is to “help newcomers contribute productively, even if they aren’t sure of exactly what to say, or if they don’t have expertise in the specific tooling we use.”

Byrne said she foresees a guide on generative AI being added, among other topics researchers and future sleuths might find useful.

Brandon Stell, the president of PubPeer, said an organizer was involved in initial discussions around the guides. The first guide in the collection is devoted to best practices for PubPeer commenting. 

As for what contributors hoped might come out of the guides, Bik told us she hopes they will bring a “sense of professionalism” to the art of sleuthing. Sleuths “don’t have magic eyes or come up with random things we see. We have documents on how we do it, we can learn from each other and we can make those documents better,” she said. 

In discussions about breaches of scientific integrity, “there is always concern that we are only giving ammunition to the anti-science crowd,” Richardson said. That concern is especially relevant with the current U.S. administration, which is “trying to redefine what constitutes scientific misconduct and what constitutes ‘gold standard science’ to meet their specific policy goals,” he added.

“Now is the time to double down on making the scientific enterprise better and commit to doing good science – and good peer review thereof – out in the open,” he said. “We want to arm scientists with the tools to uphold the integrity of the literature, precisely so that the anti-science crowd isn’t allowed to redefine science integrity on their terms.”


Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on X or Bluesky, like us on Facebook, follow us on LinkedIn, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].


One thought on “‘Anyone can do this’: Sleuths publish a toolkit for post-publication review”

  1. Good work but regarding the PubPeer guidelines, the following sounds like necessary boilerplate:

    “PubPeer is a forum for facilitating scientific discourse. Comments should adopt a polite and neutral tone. Users should write with the goal of engaging readers and authors of a publication in discussion, not with the goal of airing their personal opinions or discussing matters that are not relevant to the publication at hand.”

    Have you actually taken a look at what is commented there nowadays? Should one really “facilitate scientific discourse” and “engage authors” when dealing with 98% non-science and plain fraud in most cases? But, sure, even with outright fraud, I try to remain polite, though a little humor helps because it is really crazy nowadays with genAI.
