Author appeals retraction for plagiarism in clinical research paper

The first author of a paper that discussed sample sizes in clinical research is appealing the journal’s decision to retract it for plagiarism, arguing the article is “entirely different.”

The Journal of Human Reproductive Sciences‘s editor-in-chief told us that they first contacted the author about the allegations more than two years ago, and finally issued the notice in September, saying the paper “directly copied” from another article on randomization. “Thus owing to duplicity of text, the article is being retracted,” according to the notice.

That doesn’t jibe with first author K. P. Suresh, based at the National Institute of Veterinary Epidemiology and Disease Informatics in India. He told us that the “two articles are entirely different concept.” In subsequent emails, he added that he had not been given the chance to “represent the issues” before retraction, and said that he was going to reach out to the journal.

Indeed, Suresh sent an email to the journal’s editor-in-chief, Madhuri Patil, which he shared with Retraction Watch. He asked the journal to “kindly check these two articles once again, and rectify the error”:

In response to your mail, two articles in question is attached for your reference.

1. Suresh KP et al . Sample size estimation and power analysis for clinical research studies

2. Issues in outcome research: An overview of randomization techniques in clinical research

The dispute is material in article 1 is copied from article 2.

Since the concept of article 1 is entirely different from concept of article 2, there is no questionn of copying arises in the first instance

secondly I have rechecked the article 1 , which is written in general is no way related to article 2

kindly check these two articles once again, and rectify the error

Patil told us that the journal had reached out to Suresh in 2013, after it received a message about potential plagiarism in the article. She said the journal had also spoken with Suresh over the phone about the paper, and had waited two years before issuing the retraction.

We had informed dr KP Suresh  2 years ago and he just replied saying that it is his original article. We had waited for 2 years.

Patil provided Retraction Watch with a message sent to Suresh on September 3, 2013; we’ve asked Suresh to confirm that he received the email.

The retracted paper was published in 2012 and discussed how to estimate the sample size needed to detect a statistically significant effect in clinical research studies. It has been cited 30 times, according to Thomson Scientific’s Web of Knowledge.
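For readers who want a sense of what such a calculation involves, here is a minimal sketch of the standard normal-approximation formula for comparing two group means. This is textbook material, not code or notation from the retracted paper, and the function name and default values are purely illustrative.

```python
# Minimal sketch (textbook formula, not from the retracted paper):
# per-group sample size to detect a difference `delta` between two means,
# n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2
from math import ceil
from scipy.stats import norm

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided test comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the chosen significance level
    z_beta = norm.ppf(power)           # corresponds to the desired power (1 - beta)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)

# Example: detecting a 5-unit difference when the SD is 10 requires about 63 patients per arm.
print(sample_size_two_means(delta=5, sigma=10))
```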

Here’s the retraction notice:

In the article entitled, “Sample size estimation and power analysis for clinical research studies,” which was published in pages 7-13, Issue 1, Vol. 5 of Journal of Human Reproductive Sciences, sections in the text have been directly copied from a previously published article, entitled, “Issues in outcomes research: An overview of randomization techniques for clinical trials,” in pages 215-221, Issue 2, Vol. 45 of Journal of Athletic Training. Thus owing to duplicity of text, the article is being retracted.

The other article, “Issues in outcomes research: An overview of randomization techniques for clinical trials,” was published in 2008 by the Journal of Athletic Training and detailed various approaches to randomly assigning study participants to groups. It has been cited 50 times.
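For context, here is a minimal sketch of one such technique, permuted-block randomization, which such overviews typically cover. It is illustrative only; the block size and group labels are assumptions, not details taken from the article.

```python
# Minimal sketch (not from the article): permuted-block randomization,
# which keeps group sizes balanced as participants are enrolled.
import random

def block_randomize(n_participants, block_size=4, groups=("A", "B")):
    """Assign participants to groups in shuffled blocks of `block_size`."""
    per_group = block_size // len(groups)
    assignments = []
    while len(assignments) < n_participants:
        block = [g for g in groups for _ in range(per_group)]
        random.shuffle(block)  # randomize the order within each block
        assignments.extend(block)
    return assignments[:n_participants]

print(block_randomize(10))  # e.g. ['B', 'A', 'A', 'B', 'A', 'B', 'B', 'A', 'A', 'B']
```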

Update 11/30/15 9:34 a.m. eastern: We’ve received a statement from the publisher:

The department of Editorial Quality Management, investigates each case to its depth and a final decision is taken only after we have proper justification for the same. We at Wolters Kluwer always maintain the highest ethical standards as per COPE and ICMJE guidelines for our Editorial processes.

The case now stands closed from our end.

Hat tip: Rolf Degen


8 thoughts on “Author appeals retraction for plagiarism in clinical research paper”

  1. Dr. Madhuri Patil, could you be so kind as to provide a text-versus-text comparison of these “sections in the text [that] have been directly copied from a previously published article”? In addition, do your instructions for authors define a quantitative threshold for how much text (word-wise or percentage-wise) constitutes plagiarism or self-plagiarism?

  2. It appears possible (probable?) that the editor named the wrong paper in the retraction notice as the source of the identical material. However, Suresh & Chandrashekara’s paper does indeed contain text that had appeared elsewhere, so the retraction for plagiarism appears justified, even if the notice is partly incorrect.

    A quick search revealed some text in Suresh & Chandrashekara’s article that appears to have been repeated verbatim or nearly verbatim from a 2002 paper on sample size by Elise Whitley and Jonathan Ball, published in /Critical Care/ 6:335-341 [doi:10.1186/cc1521 or http://www.ccforum.com/content/6/4/335].

    Suresh and Chandrashekara cited Whitley and Ball’s paper, but not exactly for the section in which the material was used (possibly a re-numbering error?), and they did not set off the identically-worded section with quotation marks.

    For example, in Whitley and Ball’s paper, the first 2 paragraphs under the “Power” heading are as follows:

    “The difference between two groups in a study will usually be explored in terms of an estimate of effect, appropriate confidence interval and P value. The confidence interval indicates the likely range of values for the true effect in the population, while the P value determines how likely it is that the observed effect in the sample is due to chance. A related quantity is the statistical power of the study. Put simply, this is the probability of correctly identifying a difference between the two groups in the study sample when one genuinely exists in the populations from which the samples were drawn.

    The ideal study for the researcher is one in which the power is high. This means that the study has a high chance of detecting a difference between groups if one exists; consequently, if the study demonstrates no difference between groups the researcher can be reasonably confident in concluding that none exists in reality. […]”

    Compare with Suresh & Chandrashekara’s first paragraph under the heading “Power”, on p.9:

    “The difference between 2 groups in a study will be explored in terms of estimate of effect, appropriate confidence interval, and P value. The confidence interval indicates the likely range of values for the true effect in a population while P value determines how likely it is that the observed effect in the sample is due to chance. A related quantity is the statistical power of the study, is the probability of detecting a predefined clinical significance. The ideal study is the one, which has high power. This means that the study has a high chance of detecting a difference between groups if it exists, consequently, if the study demonstrates no difference between the groups, the researcher can reasonably confident in concluding that none exists. […]”
    ———-

    Additional text found on p.9 of Suresh and Chandrashekara’s paper can also be found in a Microsoft Word file attributed to Gayla Olbricht and Yong Wang, dated 2005 and hosted by someone in Purdue’s statistics department
    [available at: http://www.stat.purdue.edu/~bacraig/SCS/Power%2520and%2520Sample%2520Size%2520Calculation.doc ]
    – possibly a stats class project by some students?:

    Olbricht and Wang, pp.1-2:

    “In research, statistical power is generally calculated for two purposes.

    1. It can be calculated before data collection based on information from previous research to decide the sample size needed for the study.

    2. It can also be calculated after data analysis. It usually happens when the result turns out to be non-significant. In this case, statistical power is calculated to verify whether the non-significant result is due to really no relation in the sample or due to a lack of statistical power.

    Statistical power is positively correlated with the sample size, which means that given the level of the other factors, a larger sample size gives greater power. However, researchers are also faced with the decision to make a difference between statistical difference and scientific difference. Although a larger sample size enables researchers to find smaller difference statistically significant, that difference may not be large enough be scientifically meaningful. Therefore, as consultants, we would like to recommend that our clients have an idea of what they would expect to be a scientifically meaningful difference before doing a power analysis to determine the actual sample size needed.”

    Compare with Suresh & Chandrashekara’s 2nd and 3rd paragraphs under “Power” on p.9:

    “In research, statistical power is generally calculated with 2 objectives. 1) It can be calculated before data collection based on information from previous studies to decide the sample size needed for the current study. 2) It can also be calculated after data analysis. The second situation occurs when the result turns out to be non-significant. In this case, statistical power is calculated to verify whether the nonsignificance result is due to lack of relationship between the groups or due to lack of statistical power.

    “Statistical power is positively correlated with the sample size, which means that given the level of the other factors viz. alpha and minimum detectable difference, a larger sample size gives greater power. However, researchers should be clear to find a difference between statistical difference and scientific difference. Although a larger sample size enables researchers to find smaller difference statistically significant, the difference found may not be scientifically meaningful. Therefore, it is recommended that researchers must have prior idea of what they would expect to be a scientifically meaningful difference before doing a power analysis and determine the actual sample size needed.”

    ————–
    Alas, my quick text search revealed that the portion of Suresh & Chandrashekara’s paper that appears to come from the Olbricht and Wang paper (starting with “Statistical power is positively correlated with the sample size…”) also appears verbatim or near-verbatim in at least two additional papers — by different groups of authors — published in 2014:

    1. Celik M Yusuf, Power Analysis for Highlight Clinical Research: How Many Responses Do You Really Need?, International Journal of Basic and Clinical Studies (IJBCS) 2014;3(1): 1-8
    [available at: http://www.ijbcs.com/eski/images/stories/doc/5-Issue-April-2014/1.%20Celik%20MY.%20Power%20Analysis%20for%20Highlight%20Clinical%20Research%20%20How%20Many%20Responses.pdf]

    See p. 6, first paragraph appearing underneath Figure 1:
    “Statistical power is positively correlated with the sample size, which means that given the level of the other factors viz. alpha and minimum detectable difference, a larger sample size gives greater power. However, researchers should be clear to find a difference between statistical difference and scientific difference. Although a larger sample size enables researchers to find smaller difference statistically significant, the difference found may not be scientifically meaningful. Therefore, it is recommended that researchers must have prior idea of what they would expect to be a scientifically meaningful difference before doing a power analysis and determine the actual sample size needed (17).”

    (Reference 17 in Celik’s paper is to Suresh and Chandrashekara’s paper, though the paragraphs are copied without quotation marks. As noted, this particular passage is one that Suresh and Chandrashekara themselves appear to have lifted (through very close paraphrasing) from the 2005 Olbricht & Wang file hosted at Purdue stats, p.2, 1st paragraph. The journal that published Celik’s paper, IJBCS, isn’t indexed by PubMed, but the article does come up in a Google Scholar search. Prof. Dr. M Yusuf Celik is listed as the journal’s Editor-in-Chief. See http://www.ijbcs.com/eski/editorial-board.html)

    2. Habib A, Johargy A, Mahmood K, Humma, Design And Determination Of The Sample Size In Medical Research, IOSR Journal of Dental and Medical Sciences 2014;13(5),v.VI: 21-31.
    [available at: http://www.iosrjournals.org/iosr-jdms/papers/Vol13-issue5/Version-6/F013562131.pdf]

    See p. 25:
    “Statistical power is positively correlated with the sample size, which means that given the level of the other factors viz. alpha and minimum detectable difference, a larger sample size gives greater power. However, researchers should be clear to find a difference between statistical difference and scientific difference. Although a larger sample size enables researchers to find smaller difference statistically significant, the difference found may not be scientifically meaningful. Therefore, it is recommended that researchers must have prior idea of what they would expect to be a scientifically meaningful difference before doing a power analysis and determine the actual sample size needed. Power analysis is now integral to the health and behavioral sciences, and its use is steadily increasing whenever the empirical studies are performed.”

    This excerpt is identical to Suresh & Chandrashekara’s 3rd paragraph under the “Power” heading. The Habib et al. paper appears not to have any in-text citations, and the reference list at the end does not include Suresh & Chandrashekara, nor the sources from which Suresh & Chandrashekara appear to have acquired their material (in the case of this excerpt, possibly Olbricht & Wang; the “viz” phrasing makes it appear that Suresh & Chandrashekara’s paper is the more likely proximal source).

    ———————-
    In addition, my search turned up a 2010 paper (pre-dating Suresh & Chandrashekara’s paper), which uses some of the same wording from Whitley and Ball that Suresh and Chandrashekara also reused. The authors do cite Whitley and Ball’s work (as ref 33), but big chunks of text are copied verbatim without quotation marks.

    Kyrgidis A and Triaridis S, Methods and Biostatistics: a concise guide for peer reviewers, Hippokratia 2010;14(Suppl 1):13-22.
    [available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049416/]

    See 2nd and 3rd paragraphs under “Power” heading:
    “Power is the probability of correctly identifying a difference between the two groups in the study sample when one genuinely exists in the populations from which the samples were drawn33. The ideal study for the researcher is one in which the power is high. This means that the study has a high chance of detecting a difference between groups if one exists; consequently, if the study demonstrates no difference between groups the researcher can be reasonably confident in concluding that none exists in reality. The power of a study depends on several factors (see below), but as a general rule higher power is achieved by increasing the sample size33. Thus researchers will strive for high power studies. In this case, with “too much power,” trivial effects may become “highly significant”29.

    It is important to be aware of this because quite often studies are reported that are simply too small to have adequate power to detect the hypothesized effect. In other words, even when a difference exists in reality it may be that too few study subjects have been recruited (Type β error)9. In other words, an apparently null result that shows no difference between groups may simply be due to lack of statistical power, making it extremely unlikely that a true difference will be correctly identified34.”

    Compare with Whitley and Ball (2002):
    “Put simply, this is the probability of correctly identifying a difference between the two groups in the study sample when one genuinely exists in the populations from which the samples were drawn.

    The ideal study for the researcher is one in which the power is high. This means that the study has a high chance of detecting a difference between groups if one exists; consequently, if the study demonstrates no difference between groups the researcher can be reasonably confident in concluding that none exists in reality. The power of a study depends on several factors (see below), but as a general rule higher power is achieved by increasing the sample size.

    It is important to be aware of this because all too often studies are reported that are simply too small to have adequate power to detect the hypothesized effect. In other words, even when a difference exists in reality it may be that too few study subjects have been recruited. The result of this is that P values are higher and confidence intervals wider than would be the case in a larger study, and the erroneous conclusion may be drawn that there is no difference between the groups. This phenomenon is well summed up in the phrase, ‘absence of evidence is not evidence of absence’. In other words, an apparently null result that shows no difference between groups may simply be due to lack of statistical power, making it extremely unlikely that a true difference will be correctly identified.”

    ———————-
    In any case, the retraction of Suresh & Chandrashekara’s 2012 paper for plagiarism appears justified, but perhaps the retraction notice itself requires an erratum to correctly identify the works that were copied.

      1. If there is no clear overlap between Suresh & Chandrashekara’s paper and the paper cited in the retraction notice as the source of the duplicated material, then it would seem that way, wouldn’t it?

        I have not gone through the paper by Kang et al. (the one mentioned in the retraction notice) with a fine-toothed comb, but my quick skim (as commenter “genetics” also noted) didn’t reveal obvious evidence of textual overlap.

        1. Gosh – good detective work LK. One has to feel a bit sorry for Whitley & Ball whose work appears to be a very popular source to be copied (especially if they didn’t even get cited for it). Looks like a chain-reaction of plagiarism here…

  3. Journal of Human Reproductive Sciences is published by Wolters Kluwer – Medknow. What does the publisher have to say?
