Study plagiarizes so many other papers, retraction notice can’t list them all

In a new retraction notice, the Journal of Controlled Release is living up to its name.

The editor-in-chief has retracted a study that plagiarized “a large number” of papers, but only three are listed in the notice. Here’s the notice for “In situ-forming hydrogels for sustained ophthalmic drug delivery,” by Basavaraj K. Nanjawade, F.V. Manvi, and A.S. Manjappa, three researchers at India’s KLES’s College of Pharmacy, JN Medical College Campus, Karnataka:

This article has been retracted: please see Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy).

This article has been retracted at the request of the Editor-in-Chief.

The authors have plagiarized parts of a large number of previously published papers. The three most plagiarized papers are:

Advanced Drug Delivery Reviews 16 (1995) 3–19 http://dx.doi.org/10.1016/0169-409X(95)00010-5

European Journal of Pharmaceutics and Biopharmaceutics 58 (2004) 409–426 http://dx.doi.org/10.1016/j.ejpb.2004.03.019

Advanced Drug Delivery Reviews 16 (1995) 51–60 http://dx.doi.org/10.1016/0169-409X(95)00015-Y

One of the conditions of submission of a paper for publication is that authors declare explicitly that their work is original and has not appeared in a publication elsewhere. Re-use of any data should be appropriately cited. As such this article represents a severe abuse of the scientific publishing system. The scientific community takes a very strong view on this matter and apologies are offered to readers of the journal that this was not detected during the submission process.

The paper has been cited 81 times, according to Thomson Scientific’s Web of Knowledge. That’s quite a bit more than we typically see with plagiarized papers, and it suggests that the damage to the original authors’ work is greater than usual.

13 thoughts on “Study plagiarizes so many other papers, retraction notice can’t list them all”

  1. Well…it’s a review. Unfortunately, their method of making a review was to cut-and-paste sections of the papers it reviewed.

    1. Three fundamental criticisms of this site and of the reporting of plagiarism still remain.

      1) Why do these plagiarism-based retractions never quantify the amount of plagiarism? Everybody is shouting loudly that “plagiarism is bad,” but no one is setting industry standards at reasonable levels. Some publishers reject at 20–25% plagiarism, others at less than 1.5%, so the current industry standard is, frankly speaking, a joke. This makes retractions like this one look more like a witch-hunt than fair justice, and the publishers exercising such justice, Elsevier in this case, should be prepared to quantify their cut-off points across the board for ALL Elsevier journals. In other words, the decision should not lie with each individual editor-in-chief; to be fair, it should apply across all of a publisher’s journals, because otherwise there is no way of knowing whether there is a conflict of interest between an editor and an author. The fact that Elsevier is NOT quantifying its level of plagiarism will soon work against it unless it goes back and quantifies every single retraction made thus far.

      2) I have often said that the publisher is also partly responsible for the problem. Of course the author takes the thickest slice of responsibility, but why did the publisher, the editors, and the peer reviewers not detect the plagiarism during the peer-review process? This smacks of politics in some cases, and the finger should, to some extent, be pointed at the publisher for being irresponsible and doing a sloppy peer-review job. Demonizing the author while over-protecting the publisher is wrong and unfair. In this case Elsevier FAILED to detect the plagiarism, even though it had several tools at its disposal. It clearly profited from sales related to this paper, yet I don’t see anybody criticizing Elsevier. Why not? Don’t bother asking Elsevier; they will simply put up a marketing manager to deliver some PR mumbo-jumbo to make it look like they were the victim. A more recent blog entry (http://www.retractionwatch.com/2013/02/25/is-an-article-in-press-published-a-word-about-elseviers-withdrawal-policy/#more-12613) that laid out the criteria for Elsevier retractions didn’t even bother to address the responsibility Elsevier bears for oversight and poor peer review. Yet if you read that entry, you would say, “What an incredibly responsible company, genuinely worried about justice in publishing.” This is a farce. Moreover, on this blog alone, 14 of the retractions reported were from Elsevier journals. One or two is a concern, but 14 in approximately 45 days of blog entries shows that there are serious problems with Elsevier journals, their management, and their quality. True, accidents and problems happen more frequently to a driver who drives longer and has a larger fleet of cars; even so, the warning signs are quite clear, and large.

      3) I have hypothesized that a lot of plagiarism is taking place, particularly in Springer book chapters. Maybe Springer book chapters are more prominent simply because Springer currently holds the world’s biggest market share of book chapters, so errors and frauds would also command a larger slice. Booming open-access book ventures such as InTech also need closer scrutiny and plagiarism checking. It seems no one is paying attention to this massive, booming business (book chapters), which could be profiting from plagiarism, simply because many universities have access to journals but not to Springer books. Are there any adventurers here who have the tools to start investigating book chapters? We cannot ignore this massive risk simply because Thomson Reuters is more interested in how plagiarism affects journal rankings and the Impact Factor.

      Bloggers and scientists should not be so naive, and should not accept what they read here at face value. Retraction should (could?) be a window onto the publishing panorama, but only if the window is transparent. Right now there is too much opaqueness, and there are too few quality windows, to say anything concretely.

      1. You have raised a good point. Yes, we do see this inconsistency in plagiarism cases. The percentage may also differ with the software used – iThenticate versus Turnitin. Moreover, journal editors and publishers: please don’t just look at the percentages that come up in the analysis. If you run an article through these tools, the similarity index may flag matches even when the matching sentences appeared in another article only after publication; I am not sure whether anyone checks this carefully. Reviews should get a different type of analysis. Is anyone checking single-author, opinion-style social-science articles? (A rough sketch of the date-filtering idea follows this thread.)
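Neither comment spells out how a date-aware similarity figure would actually be computed, so here is a minimal, purely illustrative Python sketch. Nothing in it comes from iThenticate, Turnitin, or any real tool’s API; the match records, field names, and numbers are all hypothetical. It simply drops matched spans whose source postdates the article before computing a similarity percentage, which is the filtering step the reply above asks for:

```python
from datetime import date

# Hypothetical matched-span records: character offsets within the article
# under review, plus the publication date of the matching source.
# Illustrative data only, not the output of any real detector.
matches = [
    {"start": 0,    "end": 350,  "source_published": date(1995, 1, 10)},
    {"start": 900,  "end": 1100, "source_published": date(2004, 6, 1)},
    {"start": 1500, "end": 1700, "source_published": date(2012, 3, 5)},  # postdates the article
]

article_length = 5000                 # total characters in the article under review
article_published = date(2007, 5, 1)  # assumed publication date

# Keep only matches whose source predates the article: text that first
# appeared *after* publication cannot have been copied by the article's authors.
prior_matches = [m for m in matches if m["source_published"] < article_published]

matched_chars = sum(m["end"] - m["start"] for m in prior_matches)
similarity_pct = 100.0 * matched_chars / article_length

print(f"Similarity index (pre-publication sources only): {similarity_pct:.1f}%")
```

A real tool would also have to merge overlapping spans and normalize the text before counting; this sketch omits both for brevity.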

  2. Reblogged this on Lyon Group and commented:
    Open note to the group: this is a cautionary tale for those of you writing review articles. Yes, a review should discuss previous papers. No, a review should not duplicate previous papers. Note that this JCR Review has been cited 81 times. As Ivan correctly points out, that is a lot for a retracted paper…be careful with what you write, and what you cite.

  3. The paper was published in 2007, before Elsevier was actively using plagiarism-detection software. As far as I am informed, Elsevier lets each editor run their journal and rarely gets actively involved in decision-making.

    I personally think that review articles should rarely be cited. They rarely have new data.

    1. Of course reviews don’t have much original data. In fact, sticking data into reviews is rather underhanded in my opinion.

      The point of reviews is to give an overview of a particular sub-field, and as such they are often the best way to direct a reader to the relevant literature. This is especially true in situations where there is strong pressure on authors to reduce citations. I would love to cite all the relevant original literature in my discussions, but many editors simply don’t like it.

    2. Agreed. While I recommend reading reviews, when citing, I believe one should read and credit the original work.

  4. It would be nice if people demanding that someone else do a lot of work gave us some more background, like saying:
    I edited journal X for N years, found M papers with plagiarism in the review process, and no one ever found any more in all the papers published. Another useful comment would be: I’ve reviewed papers, and have spent X hours apiece searching for plagiarism and found this many cases.

    I claim: plagiarism is often obvious in retrospect, but not during review. Automation can easily miss things for a variety of reasons (a toy illustration of this appears after this comment). If someone plagiarized a book that has never been online, exactly how does Turnitin discover that?
    Unless some obvious red flags appear, or one has other reasons to be suspicious (like other examples by the same authors), nobody would put in the exhaustive time needed to go find it. I’ve had cases where history led me to think something was plagiarized, and I spent hours but could not find the source. (I even do things like backtracking through Wikipedia history files, where the current version was different but the text had been taken from the version current at the time. There are lots of Deep Web sources that Google doesn’t see.)

    I sometimes review papers, and I do not spend hours and hours trying to find plagiarism, and I doubt that other reviewers generally do either. That is generally not a reviewer’s job, and if a request for review included “please spend enough time looking for plagiarism to assure us with high confidence that there is none” … the answer would be: sorry, I decline to review.

    It is unsurprising that plagiarism is so often found in retrospect by the plagiarized authors, or by others who happen to recognize something. When it comes to publishers and editors, I am quite happy if they simply react quickly and appropriately to credible complaints, rather than ignoring them.
    From personal experience (albeit a small sample), Elsevier has acted promptly and correctly, while some others have stonewalled or not even bothered to acknowledge well-documented complaints.
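To make the comment’s claim that “automation can easily miss things” concrete, here is a small Python toy, not modeled on any real detector (the sentences and the helper function are invented for illustration): it compares word 5-grams between an “original” sentence and a light paraphrase, and the exact-match overlap collapses to zero even though the content is copied.

```python
# Toy illustration of why exact-match automation misses paraphrased plagiarism.

def word_ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

original = ("In situ-forming hydrogels offer sustained release of ophthalmic "
            "drugs by gelling on contact with the tear film.")
paraphrase = ("Hydrogels that form in situ provide prolonged ophthalmic drug "
              "release because they gel when they touch the tear film.")

a, b = word_ngrams(original), word_ngrams(paraphrase)
overlap = len(a & b) / max(len(a | b), 1)  # Jaccard similarity of 5-gram sets

# Prints 0.00: the idea is copied, but no 5-gram is shared, so an
# exact-match similarity index sees nothing at all.
print(f"5-gram Jaccard similarity: {overlap:.2f}")
```

Real detectors mitigate this with normalization and fuzzy matching, but the basic failure mode, that surface rewording defeats exact matching, is the same reason paraphrased or offline sources slip through review.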

  5. Actually, what we can observe here is that many people edit the same journals and the same content a number of times; the same content gets edited or reprinted, so the data never receive a proper quality analysis, and it is then kept circulating by others on a paste spree. Is there any option to control this?

  6. I’ll make it a bit funnier: I happened to find one review from this group in a field that I know a bit, in a journal that is pretty good (in that field). I checked the paper for plagiarism…and found at least three other review papers that boldly copied where this review had gone before! Not the whole paper, but whole sections, often including the bad English and the obvious mistakes.

    Perhaps unsurprisingly, I also found several parts of the original review that weren’t quite unique (pick a sentence here, a sentence there, put them together, and you have yourself a review…). So the plagiarists have been plagiarized. Since the clearly plagiarizing papers were published in rather obscure journals, I don’t know whether I want to take any action against them. The original paper, however…I have to think about that.
