“Identical in theory and concept”: Privacy paper pulled over redundancy

A paper on wiretapping in the Arab region has been retracted by a Qatari law review journal for redundant publication and “possible misuse of plagiarism detection software at the authoring stage.”

The 2013 article in the International Review of Law discusses how different Arab countries regulate intercepting telecommunications, and how to balance public safety with the right to privacy. According to the notice, it recycles material from two earlier articles by co-author Nazzal Kisswani, published in 2011 and 2010. “Although it is not an exact copy of a previously published article, it contains parts of it,” the retraction explains.

Here’s the notice for “The “Right to Privacy” v. telecommunications interception and access: International regulations and implementations in the Arab Region”:

The article ‘The “Right to Privacy” v. telecommunications interception and access: International regulations and implementations in the Arab Region’ by Yaser Khalaileh and Nazzal Kisswani, published in International Review of Law (2013:10) is for the most part identical in theory and concept to two other articles by Nazzal Kisswani;

‘Telecommunications (interception and access) and its regulation in Arab countries’ (Int. J. Liability and Scientific Enquiry, Vol.4, No.1 (2011)) [1] and ‘Telecommunications (interception and access) and its Regulation in Arab Countries’ (Journal of International Commercial Law and Technology Vol. 5, Issue 4 (2010)) [2]

The editorial board and publisher believe the paper belongs to the category of redundant publication. Redundant publications are also called repetitive publication and refer to the publication of copyrighted material that contains additional or new material. Thus, although it is not an exact copy of a previously published article, it contains parts of it.[3]

Redundant publications are unethical and represent an infringement of copyright laws, poor utilization of resources including reviewers’ and editors’ time and journal pages, overemphasizing results, and future interference with meta-analyses. The most common motive behind these types of publications involves academic advancement by apparently increasing productivity. [4]

In addition, the editors and publisher wish to add a note of concern about the possible misuse of plagiarism detection software at the authoring stage of the 2013 paper.

References

1. Kisswani N. Telecommunications (interception and access) and its regulation in Arab countries. Int. J. Liability and Scientific Enquiry. 2011;4(1).

2. Kisswani N. Telecommunications (interception and access) and its regulation in Arab countries. Journal of International Commercial Law and Technology. 2010;5(4).

3. Benos DJ, Fabres J, Farmer J, et al. Ethics and scientific publication. Adv Physiol Educ. 2005;29:59–74.

4. Castillo M. Editor’s Comment: On Redundant and Duplicate Articles. American Journal of Neuroradiology. 2007;28(10):1841–1842.

Christopher Leonard, head of academic and journals publishing for the Bloomsbury Qatar Foundation, which puts out International Review of Law, explained the bit about plagiarism detection software:

We were concerned the manuscript had been submitted to Turnitin on more than one occasion to edit down its similarity and likelihood of being flagged up as plagiarised.

We’ve contacted corresponding author Yaser Khalaileh, and will update if we hear back. Editor in chief Jon Truby declined to comment.

Update 4/29/15 3:49 p.m. eastern: We’ve heard from EIC Jon Truby, who sent us this statement:

The International Review of Law always undertakes checks to ensure the originality of any article.  In this case, the repetition remained undiscovered by the anti-plagiarism software until further checks following the article’s publication.   The Committee on Publication Ethics (COPE) independently verified the findings of the journal and its publishers following investigation and recommended retraction.  We have now improved our processes to undertake additional preventive measures to detect plagiarised content at the first instance and have adopted best practices proposed by COPE. It is most disappointing that the authors involved belonged to the same academic institution as the journal’s funders.

Update 4/30/15 11:53 p.m. eastern: We heard from Karim Mohammed, an “independent external reviewer connected with the publishers,” who had reviewed this case. He told us:

Possible misuse of plagiarism detection software means that the authors were suspected of repeatedly entering their article through plagiarism detection software in order to sufficiently change the wording (though not the content) to minimize the percentage of potentially plagiarized text.

They went further by ensuring the previous articles were only found in journals requiring a subscription or purchase, so that most plagiarism detection software – or indeed people investigating this breach – would be unable to access the original versions.

It seems the authors knowingly went to significant efforts to hide their violations…. The authors had claimed in their written defence that the reason why a virtually identical article was published both in 2010 in the IJLSE and in 2011 in the ICLT was because: “The first publisher, the IJLSE, has concluded an agreement with the second publisher, that is ICLT, to re-publish certain papers already in the IJLSE.”  On verifying this claim with the journals, it was found to be factually untrue.

Hat tip Rolf Degen


9 thoughts on ““Identical in theory and concept”: Privacy paper pulled over redundancy”

  1. This is one of the dangers of text matching software and institutions that believe only in the numbers they see, instead of investigating the text similarity. People, both researchers and students, will write to the software. Journal editors should perhaps use two or three different systems, as each uses different algorithms and may show widely different “originality scores”, whatever that means. One can never prove originality, only the presence of plagiarism.
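
To make the point about diverging scores concrete, here is a minimal, hypothetical Python sketch (not the algorithm of Turnitin or any other real detection system) comparing two simple similarity measures on the same pair of sentences. The example texts are invented; the point is only that different measures can report quite different numbers for the same overlap.

```python
def word_jaccard(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def char_trigram_jaccard(a: str, b: str) -> float:
    """Jaccard similarity over character 3-grams."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb)

# Invented example: a sentence and a light paraphrase of it.
original = "Telecommunications interception must be balanced against the right to privacy."
paraphrase = "Intercepting telecommunications has to be weighed against privacy rights."

print(f"word-level Jaccard:  {word_jaccard(original, paraphrase):.2f}")
print(f"char 3-gram Jaccard: {char_trigram_jaccard(original, paraphrase):.2f}")
# The two numbers typically differ, which is why a single "originality score"
# should prompt a human reading of the texts rather than serve as a verdict.
```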

  2. Deborah, I would have to disagree slightly with your last sentence, which I would edit to “One can never prove originality, only the presence of plagiarism.” I believe that to claim plagiarism requires one important ingredient: intent. Since plagiarism is a negative term associated with research misconduct, it is dangerous to use the terms similarity and plagiarism (or self-plagiarism) interchangeably, because the former implies an innocent or honest mistake and bland fact, while the latter implies an intent to copy without due attribution. It is most likely, as I see it, because these two issues have become distinctly muddled (by editors and publishers) that we see so many euphemisms for “plagiarism”: many editors simply do not want to associate the term similarity (with its more innocent nuance) with plagiarism (with its more serious misconduct nuance). Whether we are dealing with similarity or with plagiarism, the software you are referring to, and the techniques behind it, serve only one purpose: they are tools. You are absolutely right to state that ultimately the humans behind the process – usually the editors in conjunction with the publisher’s lawyers? – have to make the call. Making that call, I believe, can be a tricky tightrope walk between labelling the overlap in text an honest mistake and labelling it an intent to mislead. We need case studies of erroneous decisions of “plagiarism” to support or refute our hypotheses.

    1. Erratum. My first sentence, modifying Deborah’s statement, should have read: “One can never prove originality, only the presence of similarity.” Apologies for that.

    2. Absolutely not. A text is a plagiarism, no matter if someone “intended” to deceive or not, in my opinion. A text stands for itself, as a reader has no idea of the situation in which it may have been prepared. Intent, or indications of intent (as we cannot look into people’s brains and read out if they really did mean to do it), may be taken into consideration when deciding what the consequences of plagiarism are. But all this mincing around and speaking of “text similarity” or “text overlap” really bothers me. Let’s call a spade a spade, and call it plagiarism. But it is the *text* that is plagiarized. We may perhaps reserve the term “plagiarist” for those authors who publish multiple plagiarisms. Goodness knows, there are enough of them.

      And let’s get the lawyers out of science: I would love to have more case studies about the results of plagiarism accusations. I have personally reported over 130 cases of plagiarism in doctoral dissertations to universities (see de.vroniplag.wikia.com/wiki/%C3%9Cbersicht), yet even if their official rules for investigating academic misconduct state that I will be informed, I still don’t get answers from the universities for “privacy” reasons. It’s the lawyers for the universities who are afraid of lawsuits, although the plagiarism is out there for anyone who can read to see.

      There are a number of cases documented on VroniPlag Wiki, for example the case Go, which the university decided not to sanction. This dissertation in medicine includes 11 pages that can be found verbatim in the Wikipedia. There is the case of Rm, who submitted a thesis to the Humboldt-University in Berlin. It was rejected there for plagiarism. A year later, a very similar text showed up at the Austrian University of Innsbruck. They won’t even say if they are investigating or not. That does make it difficult to look at the cases and see *why* a plagiarized text was considered acceptable. I would, of course, love to see such case studies. But we have to get the universities to be more open about reporting on the cases and how they have been resolved.

    3. Deborah, one can certainly feel your passion about this topic. But think about it: is the fact that you reported the 130 cases as “plagiarism” and not as “similarity” perhaps the reason why you have been met with silence? Look, I’m not condoning plagiarism, or excusing it, simply noting that using that term automatically carries a negative legal connotation. Except for this minor issue of definition, I actually agree with most else you say. I whole-heartedly agree that lawyers have no place in science, but it seems that lawyers don’t care what we think (or they are only interested when there are legal fees to be reaped). And since it’s (still) a (relatively) free world, but a very exploratory one at that, rules sometimes appear to be bent, where necessary, to suit the “beneficiary”, which is usually not science. One recent case documented at RW highlights this risk:
      http://retractionwatch.com/2015/01/06/water-bridge-hydrology-journals-wont-retract-plagiarized-papers-despite-university-request/

      I think the issues boil down to the following 4 key questions:
      a) how can (self)plagiarism be quantified? You seem to allude to the fact that a percentage of similar text does not always translate into the same percentage of (self)plagiarism.
      b) how can we change the system to get editors to abide by rules that are already in place, but which are not being implemented according to their own written guidelines (your own comments on another post at RW reflect this concern about one specific STM publisher)?
      c) should such centrally important software be freeware or openware, the logic being that all scientists, editors and publishers benefit and not just some commercial companies?
      d) are policies on (self)plagiarism really that standard across all COPE members (COPE consists, I think, of about 9,000 journal members)? Moreover, if all these members are held to the same policies or guidelines, do they follow them and implement them uniformly?

      How do you feel about these 4 questions?

      1. Actually, I always carefully write “A potential case of academic misconduct at your school” and we’ve castrated all of our templates to say “text similarity” instead of plagiarism (on the advice of lawyers), but I really have to grit my teeth. I want to call it what it is: the text is a plagiarism.

        To your questions:
        a) A more difficult question is how to quantify plagiarism. Is it the number of affected pages, or lines? Do captions and footnotes count? What about copying references? What if word order is changed? What if words are replaced by synonyms? What if words are inserted or removed? All of these complicate attempts to quantify the amount of plagiarism. When we move on to duplicate publication, we need to see whether these are the same authors in the same order, the same authors in a different order, an overlapping author set, or a disjoint author set. Does the later publication refer to the prior one? Are the publishers aware of this? Was it “just” a portion of text? This is a *really* hard question.
        b) I am at a complete loss how to do this, other than public shaming.
        c) Some already is! I am using a free tool, sim_text, for fishing out duplicate publications from PubMed (a generic sketch of that kind of screening follows this comment). The researchers at VroniPlag Wiki use a wide spectrum of small tools, as well as Google, for looking for plagiarism. I would love to see a database-backed system such as Turnitin be free to use, if everyone understood that it was just a tool, nothing more. Since it stores the Wikipedia, I feel that their database should also be under a Creative Commons access license, but the lawyers think differently 😉
        d) The policies are amazingly similar; I often quote them when reporting a duplicate publication. And the COPE flowcharts are useful for editors. But some hide behind them. For example, Springer writes that they can’t find the author, and thus “[a]ccording to COPE guidelines, no action can be undertaken prior receiving from the authors a detailed explanation of the reasons behind the duplicate publication.” But they are not followed by every publisher, unfortunately.

        I don’t know solutions to this massive problem. But I am very glad it is out in the open and being discussed!
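
As a generic illustration of the kind of duplicate-publication screening mentioned in point (c) above (this is not sim_text itself, whose interface is not shown here, but a minimal, hypothetical sketch of the underlying idea), one can shingle each abstract into word n-grams and flag pairs of records that share an unusually large fraction of shingles. The record IDs, texts, and threshold below are all invented for illustration.

```python
from itertools import combinations

def shingles(text: str, n: int = 5) -> set:
    """Overlapping word n-grams ("shingles") for a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_duplicates(abstracts: dict, threshold: float = 0.5):
    """Yield pairs of record IDs whose shingle overlap exceeds the threshold."""
    sets = {rid: shingles(text) for rid, text in abstracts.items()}
    for a, b in combinations(sets, 2):
        smaller = min(len(sets[a]), len(sets[b])) or 1
        overlap = len(sets[a] & sets[b]) / smaller
        if overlap >= threshold:
            yield a, b, round(overlap, 2)

# Invented records: any flagged pair still needs a human reader to decide
# whether it is legitimate re-use, redundant publication, or plagiarism.
records = {
    "rec1": "We review the interception of telecommunications and the right to privacy in the region.",
    "rec2": "We review the interception of telecommunications and the right to privacy in the region.",
    "rec3": "An unrelated survey of editorial workflows and peer review in law journals.",
}
for a, b, score in flag_duplicates(records):
    print(a, b, score)  # rec1 rec2 1.0
```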

  3. “But they are not followed by every publisher, unfortunately.”
    Deborah, what should COPE do, if anything, when COPE members (editors or journals) do not follow the COPE guidelines or codes of conduct?

  4. The retraction notice itself bears textual similarities to Castillo (2007) AJNR. That paper is cited. But there’s a lot of overlap…

    “The Editor-in-Chief of that journal and members of the Ethical Committee on Publication of its parent organization concluded there are enough similarities between that article and a subsequent one published in AJNR to place both in the category of redundant publication. In his letter, Dr. Akan, the principal author of the AJNR article debates this point of view.2 What do we mean by redundant and duplicate publication?

    Redundant publication: This is also called repetitive publication and refers to publication of copyrighted material that contains additional or new data.3 Thus, although it is not an exact copy of a previously published article it contains parts of it. After carefully reading the articles in question here, I have concluded that they fall into this category.

    […]

    The reasons why redundant and duplicate publication are unethical include: infringement of copyright laws, poor utilization of resources including reviewers’ and editors’ time and journal pages, overemphasizing results, and future interference with meta-analyses.4 The most common motive behind these types of publications involves academic advancement by apparently increasing productivity.”

    Hmm.
