Publisher investigating paper a lucrative scale is based on, following Retraction Watch reporting

Donald Morisky

The publishing firm Wiley says it is investigating a pivotal paper about a controversial public health tool after Retraction Watch reported on a robust critique of the article which highlighted a number of potentially serious flaws with the research.

We’re talking about the Morisky Medication Adherence Scale (MMAS), whose developer, Donald Morisky, has been hitting researchers with hefty licensing fees — or demands to retract — for nearly two decades. 

One of the key papers supporting the validity of the MMAS-8 (the second iteration of the MMAS) was a 2008 article by Morisky and colleagues in the Journal of Clinical Hypertension.

That paper has been cited more than 1,000 times. But as we reported last month, in 2019 Michael Ortiz, of the University of New South Wales, in Sydney, Australia, notified the journal that he’d identified some serious issues with the work, including questions about the reported sensitivity and specificity of the tool and a failure to disclose Morisky’s financial stake in its use.

Two years later, the journal had yet to act on Ortiz’s complaints — other than to belatedly publish a letter from him outlining his concerns in April 2021 — but more movement might be coming. 

In emails Ortiz shared with us, Christian Bjorbaek, the journal publishing manager for health sciences at Wiley, said the company has: 

initiated an internal investigation of the case and the concerns you have raised regarding the 2008 article in your Letter-to-the Editor from earlier this year. We have not heard from the authors of the original article offering the possibility to respond to your letter.

Bjorbaek promised to update Ortiz on the findings and thanked him for pursuing the matter so doggedly: 

I want to thank you for your patience in this matter and for your dedication to ensure correctness of the published literature. We support such efforts.

Ortiz in an email thanked the editor-in-chief of the journal for inviting him to submit his concerns as a letter and for trying — unsuccessfully, as it happened, although Morisky and a colleague replied to us — to get a response from Morisky:

The dilemma the Journal Editor now faces is caused by the failure of the Authors to respond to some questions about the accuracy of several measures reported in their study.  My calculations indicate that the values described in their article are implausible.

Professor Morisky could easily resolve this matter by sharing their calculations, in particular his ROC curve and “c” statistic. Unfortunately he has chosen not to provide any numbers in support of his published values.

I am confident that the Journal will seek advice from experts in the area. The real issue is the failure to respond to my concerns by the authors.  This goes to the very heart of Academic Integrity and Authors failing to respond to concerns about their manuscripts.   

Ortiz added: 

I hope that the investigation is comprehensive and their decision [is based] on the evidence and not on the status of all four authors.

This article illustrates how the review process isn’t perfect and that there needs to be a fair and reasonable process to address concerns with possible serious errors in high profile articles after they are published.

Ultimately, the academic integrity of authors who fail [to] co-operate and respond to concerns, should be judged by their peers.


15 thoughts on “Publisher investigating paper a lucrative scale is based on, following Retraction Watch reporting”

    1. Perhaps he waived the fee for studies where he was also an author? In any case, charging to use a validated scale is not all that uncommon in health research.

1. That isn’t the COI suggested by the original commenter. The COI is that, in papers supporting the validity of the scale, Morisky didn’t indicate he profited from the scale being considered valid. Which is a pretty clear COI, it seems to me.
        “Here I am supporting the validity of the scale. I am the license-holder of the scale and profit when it is used” is very different from: “Here I am supporting the validity of the scale. As far as you know, I have no financial stake in the scale’s use.”

  1. I’m somewhat baffled by this comment: “Professor Morisky could easily resolve this matter by sharing their calculations, in particular his ROC curve and “c” statistic. Unfortunately he has chosen not to provide any numbers in support of his published values.”
    Morisky published the paper in 2008 and is now retired. Is it really that ‘easy’ to share the requested calculations, as Prof Ortiz states? It assumes Morisky would still have – and have access to – the raw data, statistical analyses, and software underpinning that publication. It seems improbable that those records would still exist now.

    1. It’s quite likely. Why would he not? It’s an important underpinning of his business, after all. Want the data from my 2006 or 2007 or 2008 papers? Sure, give me a few minutes.

      1. I am concerned that Professor Morisky has still not provided details of any summary statistics, and that his response lacks consistency with the methodology described in his 2008 publication.

        I would like to draw attention to the Moon et al (2017) systematic review, of which Professor Morisky was a co-author. This study found Sensitivity, Specificity and Accuracy values of 0.43 [95% CI: 0.32 ~ 0.53], 0.74 [0.68 ~ 0.79], and 66% respectively.

        I would be less concerned had the Moon, Lee, Hwang, Hong & Morisky (2017) systematic review reflected the Morisky et al (2008) claims of 93%, 53% and 80% for Sensitivity, Specificity and Accuracy respectively. I recalculated the Morisky et al (2008) values and found a Sensitivity of 38% [95% CI: 34–41%] and a Specificity of 75% [72–79%], with an Accuracy of 53% [51.3–56.5%] (Ortiz 2021).

        Morisky’s 2008 publication claimed Sensitivity, Specificity and Accuracy values of 93%, 53% and 80% respectively. These values would make his 2008 publication a significant outlier from his 2017 systematic review (see Figure 5), which reported 0.43 [95% CI: 0.32 ~ 0.53], 0.74 [0.68 ~ 0.79], and 66% respectively.

        Can Professor Morisky explain the differences? More importantly, can Professor Morisky explain why his MMAS-8 performs so poorly in Hypertension and Diabetes validation studies? Sensitivity is less than tossing a coin (32% ~ 53%) and the Accuracy (AUC) is poor, between 53% and 66%.

        These findings strongly suggest that the MMAS-8 performs poorly in identifying clinical indicators like poor blood pressure control and poor HbA1c control.

        Worse still, Professor Morisky’s scale has yet to be validated with a systematic review of credible adherence measures.

        This suggests that the scale performs poorly in identifying patients at risk of non-adherence.
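The recalculated confidence intervals quoted above can be checked with a simple normal-approximation (Wald) interval. This is a minimal sketch, not necessarily the method Ortiz used (the published intervals may come from a Wilson or exact method); the function name `wald_ci` is mine.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Recalculated sensitivity: 295 true positives out of 781 patients with
# uncontrolled BP, the counts behind the quoted 38% [95% CI: 34-41%]
lo, hi = wald_ci(295, 781)
print(f"{295/781:.0%} [{lo:.0%}-{hi:.0%}]")  # 38% [34%-41%]
```

With these counts the Wald interval reproduces the quoted 38% [34–41%] sensitivity band.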

  2. More than two years after first writing to the Journal Editor, there is still no response from the Authors to my Letter to the Editor, nor any action by the Journal to address my data accuracy concerns.

    Professor Morisky is quick to send lawyers’ letters, but has moved with the speed of a tortoise to demonstrate that his values are correct.

    Is Professor Morisky still demanding a licensing fee on an instrument that has been promoted using possibly incorrect or inaccurate Sensitivity claims?

    1. “Those using the Morisky Medication Adherence Scale (MMAS) without permission (infringers) have been aggressively pursued, despite the original 4-item scale being published in Medical Care in 1986 and most of the information about the 8-item scale appearing in a research journal in 2008 that acknowledged NIH support. Infringers of the MMAS copyright have recently been pursued for very large payments and, in some cases, had to retract published articles.”

  3. Professor Morisky has failed to respond publicly to my concerns.

    Moon et al [1] identified problems with MMAS-8 scores resulting from the misclassification of patients at risk of medication non-adherence, and they concluded that:

    “using the cut-off value of 6, criterion validity was not [good] enough to validly screen a patient with nonadherence to medication.”

    Another US hypertension study (Krousel-Wood et al [2]) showed that low scores (<6) were no better than medium/high scores (6 to 8) in identifying uncontrolled BP.

    Professor Morisky was a co-author of both these articles and is well aware of the problems with MMAS-8 Sensitivity.

    Of greater importance is an obvious mismatch between the “reported” and the “actual” S&S values:
    1. Morisky et al. [3] “reported” S&S values of 93% and 53% respectively.
    2. Patient numbers in the “reported” S&S 2 x 2 matrix were: 858, 209; 65, 236.
    3. These patient numbers were determined using four simultaneous equations: n = 1367, Sensitivity = 93%, Specificity = 53% and Accuracy = 80% [4].
    4. Krousel-Wood et al. [2] “actual” S&S values of 19% and 85%, for uncontrolled BP, were estimated using a cutpoint of 6 and a BP control of 140/90.
    5. The Morisky et al. [3] “actual” S&S values were:
       a. 38% and 75% at cutpoint 6; numbers in the 2 x 2 matrix were: 295, 144; 486, 442.
       b. 88% and 21% at cutpoint 8; numbers in the 2 x 2 matrix were: 687, 462; 94, 123.

    There is a serious mismatch: the “actual” number with uncontrolled BP was 18% lower than in the “reported” S&S matrix. The number of adherent patients in the “reported” S&S matrix was a massive 68% lower than the “actual” number of adherent patients. It is implausible that adjusting for covariates could explain such large differences.

    The most logical reason is that the “reported” S&S values of 93% and 53% are incorrect.

    Professor Morisky can easily resolve this issue by producing: (a) the ROC curve, (b) the corresponding C statistic, (c) details of the confounding variables and (d) how they were adjusted for.

    1. Moon SJ, Lee WY, Hong YP, Morisky DE. Accuracy of a screening tool for medication adherence: Scale-8. PLoS One. 2017 Nov 2;12(11):
    2. Krousel-Wood MA, Muntner P, Islam T, Morisky DE, Webber LS. Barriers to and Determinants of Medication Adherence in Hypertension Management. Med Clin North Am. 2009 May;93(3):753–769.
    3. Morisky DE, Ang A, Krousel-Wood M, Ward HJ. Predictive Validity of a Medication Adherence Measure in an Outpatient Setting. J Clin Hypertens (Greenwich). 2008 May;10(5):348–354.
    4. Ortiz M. Inconsistencies in the sensitivity and specificity values in a Review Paper published in the Journal of Clinical Hypertension. J Clin Hypertens 2021.
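Point 3 of the numbered list above says the “reported” 2 x 2 cell counts were determined from four simultaneous equations. A minimal sketch of that back-solve (the function name is mine, and it assumes the “positive” class is uncontrolled BP):

```python
def back_solve_counts(n, se, sp, acc):
    """Recover the 2 x 2 cell counts (TP, FP, FN, TN) from:
        TP + FP + FN + TN = n
        TP / (TP + FN)    = se   (sensitivity)
        TN / (TN + FP)    = sp   (specificity)
        (TP + TN) / n     = acc  (accuracy)
    """
    # With P = TP + FN (truly positive patients):
    #   se*P + sp*(n - P) = acc*n  =>  P = n*(acc - sp)/(se - sp)
    P = n * (acc - sp) / (se - sp)
    tp, fn = se * P, (1 - se) * P
    fp, tn = (1 - sp) * (n - P), sp * (n - P)
    return round(tp), round(fp), round(fn), round(tn)

print(back_solve_counts(1367, 0.93, 0.53, 0.80))  # (858, 209, 65, 235)
```

This reproduces the 858, 209; 65, 236 cells quoted in point 2 to within one patient of rounding (note that those quoted cells themselves sum to 1368, not 1367).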

    1. Cause for the error in Sensitivity and Specificity (S&S) in the Morisky et al 2008 article.

      The most logical explanation for the “reported” S&S values of 93% and 53% (respectively) is that the MMAS-8 cut-off score was actually 8 and not 6.

      (See Morisky et al 2008 Table 5 title: “High Medication Adherence” and last sentence in the Results states: “lower [medium and low] levels of medication adherence”).

      1. The total number of participants with Controlled BP should not be affected by adjusting for confounding. The percentages in Table 4 of Morisky et al 2008 were used to derive the total number of participants with BP control (unadjusted). The total number of participants with BP control (reported) was derived from the “reported” S&S values. The total with controlled BP was 141 lower in the “reported” counts than in the “actual” counts, when they should not differ.

        Contrast this with the massive jump in the total number of non-adherent patients between the actual and reported S&S values. This can best be explained if a different cut-off (8, not 6) was applied to the reported S&S values.

        There seem to be two large unexplained errors which inflate the validity and accuracy of the MMAS-8 as a screening tool. These errors need to be corrected.

  4. Morisky et al 2008 Update

    MMAS-8 and the calculation of Sensitivity and Specificity (S&S) of a diagnostic test for medication non-adherence (MMAS-8).
    Three years ago I identified what seems to be a serious error in the calculation of S&S, and I wrote to the Journal Editor, who requested that I write to him explaining my concerns.
    The article (Morisky et al 2008) presented the data in a 3 x 2 matrix of percentages of patients by cell (Table 1). There were no details about how the Sensitivity, Specificity or Accuracy values (93%, 53% and 80% respectively) were calculated.

    Table 1: Generated from the Morisky et al 2008 3 x 2 matrix.

    MMAS-8 score        Uncontrolled BP   Controlled BP   Total
    Low score (<6)      295 (67.2%)       144 (32.8%)     439 (32.1%)
    Medium (6 to <8)    392 (55.2%)       319 (44.8%)     711 (52.0%)
    High (8)            94 (43.3%)        123 (56.7%)     217 (15.9%)
    Total               781 (100%)        586 (100%)      1367 (100%)

    Source: Ortiz Letter to the Editor 2021.

    To date, Morisky et al have failed to provide their calculations, despite my letter finding that their values were implausible. More importantly, I have found a post by Morisky on ResearchGate describing their S&S calculation method, from Lee et al (2017), using the 3 x 2 matrix calculation (Table 2).

    Table 2: Morisky post on ResearchGate.
    Source: Lee et al (2017), “Reliability and validity of a self-reported measure of medication adherence in patients with type 2 diabetes mellitus in Korea”.

    When I apply the methodology in the Morisky post, it confirms the calculation in my Letter to the Editor:

    The Sensitivity = 295/781 = 38% and the Specificity = (319 + 123)/586 = 442/586 = 75%.
    The values become 295, 144; 486, 442 in the 2 x 2 table of Table 1.
    These data were used to calculate the S&S (calcS&S) values of
    38% [95% CI: 34–41%] and 75% [72–79%], respectively, with an accuracy of 53% [51.3–56.5%].
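That arithmetic can be checked mechanically against the Table 1 counts. A short Python sketch (variable names are mine), assuming a cut-off of 6 so that only the low-score row counts as test-positive:

```python
# Counts of (uncontrolled BP, controlled BP) from Table 1
rows = {
    "low (<6)":         (295, 144),
    "medium (6 to <8)": (392, 319),
    "high (8)":         (94, 123),
}

# At a cut-off of 6, only a low score flags a patient as non-adherent,
# so the low row is "test positive" and the other two rows "test negative".
tp, fp = rows["low (<6)"]
fn = rows["medium (6 to <8)"][0] + rows["high (8)"][0]   # 486
tn = rows["medium (6 to <8)"][1] + rows["high (8)"][1]   # 442

sensitivity = tp / (tp + fn)                  # 295/781
specificity = tn / (tn + fp)                  # 442/586
accuracy = (tp + tn) / (tp + fp + fn + tn)    # 737/1367
print(f"{sensitivity:.0%} {specificity:.0%} {accuracy:.1%}")  # 38% 75% 53.9%
```

This matches the 38% sensitivity and 75% specificity recalculated in the letter, rather than the 93% and 53% reported in the 2008 article.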

    That is, the post by Morisky used the same method that I used in my 2021 Letter to the Editor. The ResearchGate post differs from the publication in that it describes the Sensitivity and Specificity calculations from the 3 x 2 table, which are lacking in the text of both the Morisky et al 2008 and the Lee et al (2017) articles.

    Morisky et al (2008) have provided no details about how their Sensitivity, Specificity or Accuracy values (93%, 53% and 80% respectively) were calculated.

    These findings seem to confirm my observation that Morisky et al 2008 contains a serious error which must be corrected by the Authors.

    Since the Editor of the Journal of Clinical Hypertension seems unwilling to act, all we can do is wait for a response from Professor Morisky that provides support for their findings in the form of the ROC curve and the C statistic.

  5. Where I sit, on the other side of the world, it looks like Professor Morisky has “taken the Fifth Amendment to the US Constitution”, which contains a number of provisions relating to criminal law, including guarantees of due process and of the right to refuse to answer questions in order to avoid incriminating oneself.
    If Morisky et al had conducted the research as described in their article, then there is an obvious error which needs to be corrected.
    Ultimately the question to be answered by the investigating committee relates to intent. It would seem that all four authors were advantaged academically and/or financially from this error.
    What concerns me now is that, had the authors reported the values that five independent experts calculated from the Morisky data, the MMAS-8 would have been consigned to the WPD, as implied by Moon et al 2017.
    It is not in the authors’ best interest to make their data available. If they do, then the accuracy of all the research using the MMAS-8 will be thrown into question.
    The truth will come out eventually. It borders on the criminal if the editor fails to act on this error.
    This saga highlights that the peer review process should not be a one-time event, but one that can be revisited in a timely manner in the future.
    It should be an embarrassment to the Journals in question that it has taken more than 3 years to address simple errors or academic dishonesty.
