As Retraction Watch readers will likely recall, Paul Brookes ran Science-Fraud.org anonymously until early 2013, when he was outed and faced legal threats that forced him to shut down the site. There are a lot of lessons to be drawn from the experience, some of which Brookes discussed with Science last month.
Today, PeerJ published Brookes’ analysis of the response to critiques on Science-Fraud.org. It’s a compelling examination that suggests public scrutiny of the kind found on the site — often harsh, but always based solidly on evidence — is linked to more corrections and retractions in the literature.
Brookes looked at
497 papers for which data integrity had been questioned either in public or in private. The papers were divided into two sub-sets: a public set of 274 papers discussed online, and a private set of the remaining 223 papers that were not publicized.
His results?
For primary outcomes, the public set exhibited a 6.5-fold higher rate of retractions, and a 7.7-fold higher rate of corrections, versus the private set. Combined, 23% of the publicly discussed papers were subjected to some type of corrective action, versus 3.1% of the private non-discussed papers. This overall 7-fold difference in levels of corrective action suggests a large impact of online public discussion.
Brookes acknowledged several limitations, as Nature notes:
It is hard to know whether the unpublicized allegations, coming as the site grew more popular, were as well substantiated as the ones Brookes blogged about early on. The privately discussed papers were about three years older, on average, than the public ones, and Brookes speculates that there might have been less pressure to correct them, as the US Office of Research Integrity has a six-year statute of limitations for investigating allegations of misconduct. And papers in the private set might catch up with the public set in time, although this seems unlikely, he says.
Another limitation surfaced when Ivan was asked to review this paper. (We are occasionally asked to peer review, and given the inherent conflicts in reviewing and covering a particular study, we only accept in cases in which the journal will allow us to say that we reviewed the paper, and publish our comments.) One of the issues that came up at the time was that Brookes decided not to make the primary data — in this case, the critiques, along with the titles and authors of the papers in question — available to readers. (A de-identified data set is now included as supplemental information.)
Brookes did not make that decision — which is certainly understandable, given the legal concerns — lightly. In Ivan’s comments (made available on PeerJ), he wrote:
While I appreciate the sensitive issues around this manuscript, and welcome all attempts to correct the scientific literature, I am reluctant to offer a review without being able to see the data upon which the findings are based. The decision to not make those data available is based on sound reasoning, but it still means that this paper is not being held to the same standard as others. If we demand deposition of data, it should be for all papers. This doesn’t mean I think the author should necessarily reverse his decision, just that I would be uncomfortable making a decision without access to the data.
Brookes acknowledges in the paper that because of these limitations, “it is unlikely that the study can be reproduced independently.” And Ferric Fang, whose work on retractions will be familiar to Retraction Watch readers, told Nature:
“It’s a real limitation,” says Fang, who adds that the same problem beset two recent, widely cited studies in which scientists at pharmaceutical firms said that much high-profile academic research could not be replicated. “I don’t have any reason to question the interpretation, though it would be more persuasive to see it with one’s own eyes,” Fang says.
Still, it’s hard to disagree with the paper’s conclusions, given what we’ve seen of many editors’ responses to allegations:
…journals and other institutions may not wish to engage in dealing with such matters. Many journals do not respond to allegations from anonymous correspondents as a matter of policy, and while there are several reasons for this (e.g., not wishing to allow scientific competitors to sabotage rivals’ work), it is clear that journals do have some leeway in determining whether to respond to anonymous correspondents. Aside from the issue of anonymity, these anecdotes are diagnostic of a corrective system that is far from perfect. While it is beyond the scope of this manuscript to speculate on ways to improve the corrective system in the scientific literature, recent developments such as PubPeer and PubMed Commons are seen as steps in the right direction, toward universal and open post-publication peer review.
Our response to the question is “Yes”, but to a limited degree. Our experience has told us that public exposure, through anonymous reports, of flawed science (primarily redundant publications or duplications) has LIMITED positive effects. Our experience from 2013 indicates that 25 complaints, made usually to the authors and copied to the editor-in-chief, the editors, and the publisher’s management (wherever contacts were available), have led to 9 retractions thus far. We have reported some of these cases here at RW: http://retractionwatch.com/2014/01/25/weekend-reads-trying-unsuccessfully-to-correct-the-scientific-record-drug-company-funding-and-research/#comments. Our complaints, which have evolved through trial and error, now include only a listing of the problems, together with a formal definition of the relevant publishing ethics by COPE or another reputable or established ethics or publishing body. We always close our letters with a formal request to correct the literature. 9/25 is far from a reason for celebration, because the remaining 16 papers remain in the literature: flawed, uncorrected, and unretracted. Those 16 cases point to a lack of responsibility on the part of the authors, the editors, and the publishers in correcting the literature, despite the black-on-white evidence. We plan to publish all cases, successful or not, in the public domain, also anonymously.
Only 16 retractions in total? Feeling very efficient now: a single post scored two retractions.
Looks like a good argument for open, perpetual (i.e., pre- and post-publication) peer review.
Who has the time and money?
I guess time and money can be better allocated through peer review, particularly post-publication peer review.
Let me give you a simple example. You write up a proposal for a three-year project based on ideas you had from reading a very recent publication. You can bet that thinking and writing about proposals and experiments make up most of the life of a PI. If the publication proves to be flawed or biased in any way that compromises your ideas, your time and money were wasted.
Since grants today are distributed based on published work per se, there is a lot of pressure to publish. This pressure has severely affected traditional peer review in many ways, and a lot of crap is published every month, even in top-rank journals. The more you are trained in reviewing and criticising others’ data, the better you become at not wasting your time and money on the wrong projects. If, however, you belong to the camp that survives by publishing any crap anyway, maybe you will not have time to waste questioning what is published and will not think much about projects when writing proposals. Still, I am pretty sure that public exposure of reviews of past publications will greatly affect success in getting funding in the near future, and that this will soon hit serial publishers of crap hard.
In reply to Eli Rabbit…
Who has the time and money?
Not most people.
Let’s ask a different question:
Who has the persistence and courage?
Many.
Aye Stewart, you’ve been quiet lately. Too quiet.
But you have already nailed your colours to the mast, and it is clear that, so long as you are a practising scientist, you will not tolerate acquiescence when confronted with obvious garbage in scientific publications. Stay critical, young lad…
My pre-planned criticism of this article (which RW watchers will know was recently foretold) has been obviated, given that both the author and Ivan have made sterling efforts to point out the unavoidable statistical weaknesses and reproducibility problems in the data. Since I expect this article to be a game changer in scientific discourse, it is, for me, all the more important that the obvious caveats are laid out up front.
My question now is: at what point does anecdotal data become useful data? Many commentators at RW have reported miserable responses from editors and publishers, while we also know that most journals and institutes (correctly) have rules giving authors some protection from accusations, to allow for the case that an accusation proves malevolent. But how do we distinguish between justified protection and convenient cover-up? Is there any way to deal with acquisition bias when researchers try to criticise the scientific literature?
We read papers all the time. What we don’t do is post our thoughts in the public domain, which does not take much time compared to a careful read. So, no problem: we just have to engage more in post-publication peer review, rather than keeping our thoughts private.
Journals are not eager to correct the scientific record, as more corrections and retractions may hurt a journal’s reputation. Public reviews, and shaming if necessary, make a lot more sense.
I have had some run-ins with high-impact journals. Some took any irregularity very seriously, and the records were corrected within a few months. Others, unfortunately including some very high-impact medical journals, responded half-heartedly and took forever.
It ultimately comes down to three keywords: purpose, conscience, responsibility. All three are being seriously eroded, or may already have been lost, in science publishing. It is now only about three pseudo-keywords: productivity, profit, and PR (public relations).