The title of this post is the title of a new study in PLOS ONE by three researchers whose names Retraction Watch readers may find familiar: Grant Steen, Arturo Casadevall, and Ferric Fang. Together and separately, they’ve examined retraction trends in a number of papers we’ve covered.
Their new paper tries to answer a question we’re almost always asked as a follow-up to data showing the number of retractions grew ten-fold over the first decade of the 21st century. As the authors write:
…it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn.
In other words, is there more poor or fraudulent science being published, or are readers and editors just better at finding it — perhaps thanks to better awareness? These explanations aren’t mutually exclusive, of course. Steen et al.:
The recent increase in retractions is consistent with two hypotheses: (1) infractions have become more common or (2) infractions are more quickly detected. If infractions are now more common, this would not be expected to affect the time-to-retraction when data are evaluated by year of retraction. If infractions are now detected more quickly, then the time-to-retraction should decrease when evaluated as a function of year of publication.
When the authors looked at 2,047 retracted articles indexed in PubMed, they found:
Time-to-retraction (from publication of article to publication of retraction) averaged 32.91 months. Among 714 retracted articles published in or before 2002, retraction required 49.82 months; among 1,333 retracted articles published after 2002, retraction required 23.82 months (p<0.0001). This suggests that journals are retracting papers more quickly than in the past, although recent articles requiring retraction may not have been recognized yet.
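For readers who want to see what that comparison amounts to in practice, here is a minimal sketch — not the study’s actual code or data — of how time-to-retraction can be computed from publication and retraction dates and split by publication year. The records below are invented purely for illustration.

```python
# Illustrative sketch only: NOT the authors' code or data.
# Shows how a time-to-retraction comparison like the one quoted above
# could be computed from (publication date, retraction date) pairs.
from datetime import date
from statistics import mean

# Hypothetical records: (publication date, retraction date)
records = [
    (date(1998, 3, 1), date(2003, 6, 1)),
    (date(2001, 9, 1), date(2004, 1, 1)),
    (date(2005, 5, 1), date(2007, 2, 1)),
    (date(2009, 11, 1), date(2011, 4, 1)),
]

def months_between(start, end):
    """Approximate number of elapsed months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# Split by publication year, mirroring the paper's pre-/post-2002 cut.
early = [months_between(p, r) for p, r in records if p.year <= 2002]
late = [months_between(p, r) for p, r in records if p.year > 2002]

print(f"Mean time-to-retraction, published in or before 2002: {mean(early):.1f} months")
print(f"Mean time-to-retraction, published after 2002:        {mean(late):.1f} months")
```

The actual study, of course, works with 2,047 PubMed-indexed retractions and tests the difference formally; the snippet only illustrates the bookkeeping behind the averages quoted above.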
Fang and Casadevall have also shown that high-impact-factor (IF) journals are more likely to retract. In the new study, the authors report that
Time-to-retraction was significantly shorter for high-IF journals, but only ~1% of the variance in time-to-retraction was explained by increased scrutiny.
And plagiarism and duplication — the latter a reason for retraction now so frequent that we can’t cover every case — are relatively new on the landscape, meaning a jump in numbers is to be expected:
The first article retracted for plagiarism was published in 1979 and the first for duplicate publication in 1990, showing that articles are now retracted for reasons not cited in the past.
The effect of those who would have shown up frequently on an earlier version of Retraction Watch — think the analogues of modern-day scientists like Joachim Boldt, Yoshitaka Fujii, and Diederik Stapel — was impressive:
The proportional impact of authors with multiple retractions was greater in 1972–1992 than in the current era (p<0.001). From 1972–1992, 46.0% of retracted papers were written by authors with a single retraction; from 1993 to 2012, 63.1% of retracted papers were written by single-retraction authors (p<0.001).
More details on that:
Authors with multiple retractions have had a considerable impact, both on the total number of retractions and on time-to-retraction. In 2011, 374 articles were retracted; of these, 137 articles (36.6%) were written by authors with >5 retractions. Articles retracted after a long interval (≥60 months after publication) make up 17.9% of all retracted articles; approximately two-thirds (65.7%) of such articles were retracted due to fraud or suspected fraud, a rate of fraud higher than in the overall sample [8]. Among fraudulent articles retracted ≥60 months after publication, only 10.4% (25/241) were written by authors with a single retraction.
We asked Daniele Fanelli, who studies misconduct in science, for his reaction to the findings:
The finding that journals are retracting papers more quickly than in the past is very good news, as it shows how the scientific system of self-correction is improving. All the other data presented in the paper can also be interpreted, most simply, as an improvement in the system of detection. Retractions, whether by single or multiple authors, are growing because more journals are retracting. High-impact factor journals retract more and more rapidly because they have more readers and better policies. Studies have shown that impact factor is the best predictor of a journal having clear and active policies for misconduct. So any correlation between retractions and impact factor has a trivial explanation.
In sum, there is no need to invoke “Lower barriers to publication of flawed articles”, as the authors do. I am not saying that scientific misconduct is not increasing. Maybe it is, maybe it is not. But the evidence is inconclusive, and statistics on retractions have no bearing on the issue. Whatever the current prevalence of misconduct might be, it is most likely higher than the extremely small proportion of papers that are currently retracted each year. So retractions are a good thing, and we should just hope to see more of them in the future.
We happen to agree that the growing number of retractions is a good thing, as we wrote in Australia’s The Conversation last year, and not just because it means we have more to write about. What we’d really like to see, of course, is more transparency in those notices — which is something the authors of the new study end with:
Better understanding of the underlying causes for retractions can potentially inform efforts to change the culture of science [41] and to stem a loss of trust in science among the lay public [42], [43].
Reblogged this on lab ant and commented:
Interesting paper on the increasing number of paper retractions over recent years. I have always wondered whether the pressure to publish quickly and in high-impact venues could negatively affect the quality of scientific articles.
The pressure to publish, the inability to cover up fraud in the face of solid evidence, and, more importantly, the willingness of universities to weed out highly paid professors!
Daniele Fanelli wrote: “So any correlation between retractions and impact factor has a trivial explanation.” But the data show that it only explains 1% of the variance. Moreover, the authors emphasize that their data also show changes in author behavior (e.g. increased plagiarism). Thus, it’s both: more scrutiny and more fraud, as the data prior to this paper already indicated:
http://www.frontiersin.org/Human_Neuroscience/10.3389/fnhum.2013.00291/full
1% of the variance in time to retraction, but, as you also quote in the (very interesting) paper you have linked, the coefficient of determination for retractions-IF is a staggering 0.77.
The increase in plagiarism, again, could be the effect of better detection ability. We didn’t have Turnitin, Déjà vu, or even Google until a few years ago.
Reblogged this on The Firewall.
Re-blogged on http://aleebrahim.blogspot.com/2013/07/why-has-number-of-scientific.html and commented on LinkedIn.
1- The number of plagiarism detection tools increased dramatically.
2- The amount of online material increased.
3- The plagiarism detection software improved.
4- The number of submissions increased radically.
So, is it any wonder the number of scientific retractions increased?
http://www.mindmeister.com/39583892
These are some of the factual reasons. It is quite obvious, actually, that technology has led the way. However, you ignore the psychological ones: 1) The victims of fraud are now sick and tired of the fraudsters; 2) those who were honest are now no longer afraid to attack the antics and games played by the fraudsters; 3) the fear of the elite is now over; 4) a growing number of blogs like this one, and tools like Facebook or Twitter, are spreading the word that it is ok to call out and expose fraud and fraudsters; 5) anonymity gives the whistle-blower the power to smoke out the rats without suffering any negative consequences; 6) the fraudsters are now starting to fear… I think a revolution is taking place now and each of us has the responsibility of detecting fraud in our own fields of study and reporting and publishing it. How? Simple, but it will require your effort, skill, free time, and passion to seek justice. Conduct a full-blown post-publication peer review of a paper that you feel should never have been published. Try to stick to papers in Impact Factor journals, since those are apparently being rewarded for quality, so they will feel the pressure to change more than predatory journals. Pull in some colleagues or professionals who could support your findings of unscholarly behaviour or fraud. Make all contacts sign with a pseudonym or fake name, but not in any sinister way. Explain your objective clearly and honestly, always. Contact the editor board and publisher. Be polite, factual, and diplomatic. Do not cave in to fear-mongering by arrogant editors and pushy publishing managers. Stand your ground, as a peer, as a professional, and let’s start to clean up this mess. I can with 100% confidence say that in my field of study (wink, wink), the situation is really bad, and thus I have pulled together teams of anonymous professionals to start to clean up the literature. One by one, paper by paper, the editor boards will start to cave in under the pressure as they feel how they allowed so much unscholarly and sloppy work to be published. It is time to retract, again and again. When there is concrete proof of fraud, of course. We, the professionals in our fields, suddenly hold the power. This is the truth behind the increase in retractions. Two decades ago, the publishers held the reins of power, and that is now in a full 180-degree swing. I see a future where, except for retracted papers, there will be a load of PDFs showing post-publication peer review next to the original PDF. Where edits exceed a certain threshold, i.e., where excessive errors exist, as detected by peers, then it’s time to retract. This is the future of retractions.
I couldn’t agree more. Our labs are wasting so much time and money trying to reproduce dodgy and presumably fraudulent papers. There need to be more efforts to ‘smoke out’ the fraudsters. Unfortunately, PubPeer has caved in. Nothing seems to get posted there anymore.
PubPeer was a fad. Like Facebook or Twitter, its brief success resided in its innovation. Why it failed (is failing) is simple to understand. Ultimately, scientists want justice for an observed injustice. They want a fair response to a complaint. They want a tangible result from evidence of fraud. They want transparency, even if it follows anonymity. That site pointed out errors and there seemed to be this back and forth of anonymous e-mails and on-site postings, but nothing ever seemed to emerge from it. No tangible results. As I suggested before, we need PubMed (or a similar powerful group) to start a separate database of retractions. There, one would be able to search by author, publisher, journal, or topic, to see the papers that have been retracted and where stamped PDFs of poor resolution could be downloaded. The site would have to be shielded from legal and copyright issues. This would be the first step to providing a platform of fear. Then, as I indicated above, sort of like a Science Anonymous movement, individuals need to start taking control and action, without fear. I recommend three steps. Step 1: find a paper that is clearly fraudulent, either in its content, or in the lack of scholarly information it contains. Be sure of your claims. Then, get 3-4 peer reviewers from different countries to provide the strictest possible critique. You want to minimize the risks of COIs, so keep contacts as anonymous as possible. Now, with the hard-core evidence in hand and 5 anonymous reports, you move to Step 2. Step 2: you send an anonymous e-mail to several members of the editor board and publisher. Make sure you can negotiate a timeline for a formal response. Point out the editorial responsibilities of following through with the investigation. Gently, of course! Step 3: If the editors exceed the time limit, fail to reach a conclusion or fail to take action, then widen the communication campaign. Send a copy of the anonymous report, anonymously, to dozens of professionals in the field, and ask them to critique your critiques and, if they second an erratum or retraction, to also send an e-mail, anonymous or not, to the same editor board members and publisher. I think after 10-20 complaints about the academic quality of a paper, especially if anonymous, it will be difficult for the editor board and publisher to hide. In such a process, the rats are smoked out, the fraud is removed from the literature and the scientific record is corrected. The editors are held accountable and the publisher may insist on new and more rigorous peer review criteria. It’s a win-win situation. But unless each and every one of us starts to take control and responsibility for post-publication quality control, and anonymous reporting and follow-up, we will be disappointed by sites like PubPeer time and time again. The secret is in the power of the individual. This takes time, it takes effort, you get no rewards or financial support. But you will have the greatest satisfaction of all: justice.
That is a lot of trouble to complain over one paper. Almost all of those contacted anonymously about a suspect paper and asked for a review (which takes time, for something potentially dangerous) will not reply.
I think PubPeer fails because the owners have an idealized application of their website and try to force readers and contributors to follow their intentions. They decided not to include any quick posts about repeated images and plagiarism, which are the most easily detected flaws and to date awfully common in the literature. Thus their ideas for PubPeer unfortunately do not meet the demands of scientific reality.
Dear JATdS,
let’s put your ideas to work! About 6 months ago, PlosOne editors and Biochemical Pharmacology (BP) editors were contacted about concerns that western blots (WBs) had been re-used in two papers. While PlosOne showed some concern about that, the BP editor decided that there was no necessity to pursue those concerns (see below).
It would be really interesting if you, and maybe other Retraction Watch readers, could check whether the concerns are right (anyone who is familiar with western blots can easily understand the problems). If you agree, please write to BP and PlosOne, and maybe post the journals’ replies here. This could be taken as an “experiment” to see how these journals respond to several different post-publication peer reviews.
Specific concerns:
1) Reuse of beta-actin WBs between a paper published in Plos One http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006206 and a paper published in Biochemical Pharmacology http://www.sciencedirect.com/science/article/pii/S0006295210005873
More specifically, the beta-actin WB in Fig. 5A (PlosOne) is the same b-actin WB as in Fig. 7A (PlosOne), but rotated 180 degrees. The dot that appears above and between bands 3 and 4 of Fig. 5A (PlosOne) can be used as a reference to compare the shape and position of the rest of the bands between the blots in the different figures.
The beta-actin blot of Fig. 5A (PlosOne) is also the same as the beta-actin blot of Fig. 2C (Biochemical Pharmacology). Note that in Fig. 2C, the reference dot is between bands 2 and 3.
The beta-actin blot of Fig. 5A (PlosOne) is also the same as the beta-actin blot of Fig. 3A (Biochemical Pharmacology). Note that in Fig. 3A, the reference dot is between bands 3 and 4.
In summary, the same WB was used for Fig. 5A (PlosOne), Fig. 7A (PlosOne, rotated 180 degrees), Fig. 2C (Biochemical Pharmacology), and Fig. 3A (Biochemical Pharmacology). The reused b-actin WBs are presented as representative of totally distinct conditions in the different figures.
2) Besides that, another beta-actin WB was reused in both papers. Please compare the Fig. 3D b-actin panel (bands 2-8), http://www.sciencedirect.com/science/article/pii/S0006295210005873 with the Fig. 8C b-actin panel (bands 1-7) http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.p
Dear JATdS,
I read your comments and I wonder if you have seen the evidence (concerning a paper published by my former PhD supervisor) presented at http://www.universitytorontofraud.com
I am trying, unsuccessfully, to get the paper retracted. It’s interesting that, in addition to the prima facie evidence in 50 scanned documents, I also have two letters, each signed by two experts, confirming (in my view) my allegation of plagiarism (doc. #30 and #33 on that site).
But what I am finding is that, for your recommendations to work, there must be a reaction from the media – simply reporting your case and your allegations. What I have had from the media is a consistent (very consistent) refusal to say a single word about the case. So, I believe you should add this reaction from the media as a necessary condition for the success of justice.
If PubPeer is in for the long game, it is way too early to say they have failed. They might fail, they might give up, but they have not failed yet. Also it is clear that they want to promote a more general dialogue, not just fraud accusations. I think they will have to go through a phase of fraud accusations to get the high-profile attention needed to be successful as a debating forum on published scientific work. But I also think it would be great if we had such a forum that all real scientists (as opposed to those barely sentient, Batesianly imitative organisms so often being discussed here) could appreciate and value. And one thing can safely be predicted: despite their best intentions, neither the worthy scientific bodies, nor the publishers themselves will ever provide that forum. PubPeer and their ilk may fail, but surely they should be encouraged and supported in this initial phase of learning to exploit our newfound worldwide communication? Hmm, now if only I had something interesting to post on PubPeer next week?
Could I also be allowed to make a plea that you consider why readers might appreciate your structuring your argument(s) in paragraphs? See how your correspondent Lopez does it – one item at a time.
There was a brief time window, a couple of months ago, when you could get PubPeer to publish factual reports that things that should not be the same were in fact the same. This was in the polite language that PubPeer suggested: “Please compare… X with Y.”
As soon as these reports reached the ears of the authors, they started to complain. Now, quite factual reports that may not be comfortable for the ensconced do not get published, even after several weeks.
It looks like PubPeer is back in full swing. I don’t know if anyone has made a ranking of science fraud by country. But look at the recent entries in PubPeer and you know what I mean. It is quite stunning.
I am not all too hopeful that PubPeer will succeed. They have mistreated several posters according to their own judgement and expectations, and they seldom reply to direct emails. Of course most comments on PubPeer will contain negative critique, and many will imply suspected fraud; otherwise people would openly contact the authors by email or at conferences. As a result of their policy of constant interference, their website is becoming inactive, to the sole joy of the exposed authors.
Agree 100%. Can’t wait to see your notion gain momentum. It’s about time all fraudsters are exposed and brought to justice! We need to remember those who lost their jobs because they failed to keep up with standards set forth by phoney researchers and academics.
“We happen to agree that the growing number of retractions is a good thing”. Now, that’s a shocking revelation from Retraction Watch! 🙂
Jennifer Lopez and Pyshnov, kindly note that I am not advocating that my idea and solution is the only one. But what I am saying is that if there is sufficient critical mass, then the system will buckle under pressure, and change. What both of you are suggesting is that non-scholarly, non-academic and fraudulent peer review has taken place. This is a serious accusation, but it will fall on deaf ears unless you provide proof. In this case, if I were you, I would do literally a line-by-line critique of the papers that you feel need to be retracted. List every single error that exists. Divide the errors into editorial, scientific and grammatical. Then, using your report, anonymously contact 40-50 individuals to double-check what you have claimed. So, in Jennifer’s case, with a Google Scholar search, or a search on ScienceDirect, SpringerLink, Wiley Online, PubMed, or other academic databases using the term “Western blot”, restricted to 2010-2014, you should be able to find no fewer than 100 specialists from around the world. You invite them to do a post-publication peer review (PPPR), indicate the importance of the review and indicate what it is you want to achieve, and why. Hot-blooded, revenge-like comments are a no-no. This must be a cold, fact-driven, evidence-based exploration of the paper. I assume that you might get about a 10-20% response rate, and assuming that all PPPR reviewers conduct the job passionately and professionally, you should have a substantial evidence set. So, you assemble one file with all critiques of the paper, making sure that digital fingerprints are removed from the Word/PDF file, that all comments are totally anonymous and that no PPPR reviewers’ names are included. You then write a nice, polite anonymous e-mail to the Editor, and make sure that at least the top echelon of the editor board is contacted, with at least 5-10 members. This adds heat on the editor board to respond. Give them a reasonable timetable by which you would like to see a resolution, and make it gently clear that if no resolution comes from this request in time, the remaining editor board members will be contacted. In other words, you must take control of the situation, through anonymity, without necessarily being a cyber-bully. If, after 20-30 professionals agree that there is a serious problem, the editor board continues to be resistant to change, well, use your imagination… there are many ways to add pressure to the journal, board and publisher when it is clear that imbeciles are in charge.
Dear JATdS,
I disagree with you. I believe that only public disclosure can help to stop fraud in science.
The journal Nature officially stands for transparency and says that journalists must work with the public and publish alleged cases of fraud in science. Indeed, this is normal journalistic practice in other areas, because without such disclosure, corruption prevails.
And corruption is what we have today in science. Scientific journals refuse to do their job. In fact, Nature prevents the disclosure of the real situation with fraud.
Here is my exchange of emails with Nature, please read:
http://www.universitytorontofraud.com/nature.html
I read your blog with great sadness about what you’ve been going through. I am sure there are many people who feel bitter and whose lives were destroyed. I was shocked to learn that the definition of plagiarism does not include plagiarizing unpublished work! If this is true, no wonder that unethical reviewers can reject valuable manuscripts and plagiarize their content. I have said it many times before: it’s “ANARCHY”. I urge all honest academics who have witnessed injustice to come forward and do something!
I thank you for your message.
Of course the definition of plagiarism includes plagiarizing unpublished work. The Committee on Publication Ethics (COPE), however, invented a rule that gives them unusual freedom in interpreting the definition. They said: “Whether or not COPE chooses to consider how the term ‘plagiarism’ is to be interpreted and applied in the future with regard to its advice and guidance is in the gift of the COPE Council”; see email #17 at http://www.universitytorontofraud.com/committee.htm
Apparently, COPE was OBLIGED to say something that would prevent retraction of this plagiarised paper. And I ask a further question: what exactly did the University of Toronto do to oblige COPE to make the definition of plagiarism so flexible and wrong?
Moreover, COPE’s determination to refuse my case was so strong that when I quoted their own definition of plagiarism (which includes unpublished work), they gave a second version of their ruling, saying that there was no plagiarism, only an authorship dispute. I patiently showed that there was no authorship dispute whatsoever. Then a third version of the ruling appeared: COPE now presented their own new rule prohibiting them from taking my case. At this point, COPE no longer needed to insist on the first version and said: “We probably did make a mistake with our interpretation of the term ‘plagiarism’…”.
It’s indeed anarchy, but not a blind anarchy, it has purposes and motives.
My understanding is that COPE was established some years ago; based on their reasoning, one would expect all frauds who published plagiarized articles prior to COPE to escape scrutiny, because the editors of the journals were not registered with COPE at the time?
Hi, aceil!
The point that prevented COPE from taking my case was that the editor had seen my case when he was not yet a member of COPE. Naturally, I said: I will go to the editor again. No, they said, this would be the same case (so why does it matter?).
Answering your question: COPE said they cannot be involved if the editor has seen the case before becoming a member of COPE.
This argument about membership in COPE is utterly unsound, first because the publishers made the COPE Code of Conduct obligatory for all their editors years before they started this COPE membership subscription. So, the editor had to follow these same rules anyway for years before he became a COPE member.
It can be supposed that COPE made this rule to avoid punishing the editor for something he did when he was not yet a member of COPE. But there was no such danger: they did not have to punish him for his previous decision – I simply offered to go to the editor again and asked COPE to watch his NEW performance. They refused.
It can be supposed next that COPE would hate to cause the editor a nervous shock if he had to retract the paper. But somehow I think that the nervous shock would be felt here, in Canada, and a great shock indeed.
COPE never told me who was actually deciding my case. However, later, the COPE Chair, Ms. Liz Wager, entered the discussion on Times Higher Education (28 August 2009) and said:
“We have been in touch with Michael Pyshnov but were unable to respond to his complaint because it occurred many years before the journal in question became a COPE member (and, obviously, we cannot apply our codes retrospectively).”
(She meant “retroactively”.) This explanation to the public was obviously incorrect:
1) my request for retraction did not occur “many years before”… but a couple of months before the editor… etc.
2) it’s the paper that was years old, but retractions of old papers are not prohibited
3) the fact that the paper was old was NEVER AN ISSUE in any letters from COPE.
4) L. Wager did not state the actual reason for refusing my case.
5) she did not mention that they first put forward two other reasons for refusing my case and both failed.
(Ms. Liz Wager is a specialist on retractions.)
There is one point missing in this debate: one reason for the increased number of retractions could also be that accusing other researchers of fraud is an easy way of getting back at them when a dispute occurs.
I, for one, have a former PhD supervisor who has written to one of the editors of a journal where I have published, suggesting that I might have committed fraud. First he claimed that my lack of reporting actual follow up time in the study was fraud. I admitted that I had made a mistake and wrote a corrigendum, which was approved by my former PhD supervisor and published.
However, as you will notice, I write “former” PhD supervisor, as I retracted my thesis because of his accusations and resubmitted it at another university with a new adviser. So, not too long ago, I again got an email from the editor stating that my former PhD supervisor had now escalated his complaint to one of “possible fraud”.
My former PhD supervisor still refuses to state what it is he suspects I have cheated with, making it a bit difficult to defend myself. And my point is: how many of the retractions are due to disputes between researchers where one in the “team” decides to “take an easy punch” and mention that “maybe this paper is fraud …”?
Quite complicated. Maybe you could reformulate the writing, as it is hard to see the point, for instance “First he claimed that my lack of reporting actual follow up time in the study was fraud.”
Indeed, claims of misbehavior can stain whole careers; however, they must be investigated fully and made transparent, so others can clearly judge what happened. Most cases of alleged misconduct involve pretty evident manipulation, so the claims may have a solid basis.
Reblogged this on From experience to meaning… and commented:
This is a blog I follow on the retraction of scientific papers. This blog post reports on why the number of scientific retractions seemingly has increased.
Do your data indicate what % of retracted papers had a trainee as first author?
DSK