Nearly four years after a critic pointed out flaws in a paper about a controversial research tool involved in nearly 20 retractions, the owner of that instrument has lost the article after he failed to overcome the editors’ concerns about the work.
The owner is Donald Morisky, of the University of California, Los Angeles, whose name should be well familiar to readers of Retraction Watch.
Morisky developed the Morisky Medication Adherence Scale (MMAS), then began charging researchers up to six-figure sums to license the use of the tool in their own studies. Those who didn’t sign agreements in advance were ordered to retract their papers that used the MMAS, pay Morisky’s company retroactively, or risk legal action. (We wrote about all this in Science back in 2017. We also wrote about how Morisky and his former business partner, Steve Trubow, have been engaged in litigation over ownership of a spin-off “widget” Trubow says belongs to him. That case is ongoing.)
In fall 2021, we reported the Journal of Clinical Hypertension had launched an investigation into a 2008 paper about medication adherence by Morisky and his colleagues. The article built on Morisky’s initial MMAS-4, expanding the instrument into its current form, the MMAS-8.
But in 2019, Michael Ortiz, of the University of New South Wales, in Sydney, Australia, had identified flaws in the sensitivity and specificity of the MMAS-8 as reported in the article. Ortiz also noted that Morisky had neglected to disclose his financial interest in the MMAS.
According to Ortiz, “the MMAS-8 scores may be no more accurate in detecting patients with uncontrolled [blood pressure], than tossing a coin to decide.”
Ortiz took his concerns to the journal, which effectively ignored him until April 2021, when it published a letter in which he outlined his findings, the gist of which was:
In conclusion, the [sensitivity and specificity] and accuracy values reported in this paper are mathematically implausible. The [sensitivity and specificity] values are inconsistent with the patient data and they overstate the diagnostic properties of the MMAS-8. Unless these inconsistencies are resolved, it would appear that, based on the study results in Point 2 and the meta-analysis2 the MMAS-8 scores may be no more accurate in detecting patients with uncontrolled BP, than tossing a coin to decide.
Several months later, we learned that Wiley was investigating the matter. The fruit of that inquiry was slow-growing: Nearly two years after we reported on the investigation, the JCH has retracted Morisky’s paper, stating:
The above article, published online on May 02, 2008 on Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal’s Editor-in-Chief, Dr. Ji-Guang Wang, and Wiley Periodicals LLC. Following publication, concerns were raised by a third party regarding the statistical analysis presented in the article. The Journal conducted an independent statistical review of the article and concluded that the results were misleading due to issues regarding the sensitivity and specificity of the medical adherence scale used. The authors responded to the Journal’s request to address the findings of the independent statistical review, but were unable to adequately address the concerns. As a result, the Journal no longer has confidence in the reported conclusions and is issuing this retraction.
The article has been cited 1,889 times, according to Clarivate Analytics’ Web of Science. Of those, 259 have come since October 2021.
Ortiz told us:
I am disappointed that the Authors of Morisky et al 2008, failed to respond to my concerns in a timely manner. This situation was compounded by a Journal Editor who failed to follow Guidelines and then took more than two years to act to the Authors failure to respond to my concerns.
What will be even more interesting is how other articles ( that used the MMAS-8 to assess medication adherence) are treated. I have noted similar erroneous results in a number of articles. which claimed to have found similar Sensitivity & Specificity results following the Morisky et al methods. I hope these authors consider withdrawing their articles
He added:
What upsets me the most is the fact that a questionable instrument was used by so many intelligent researchers when all the signals pointed to a problem with the validity of the instrument.
Morisky did not respond to a request for comment.
Trubow said the retraction was “not good publicity for the scales,” but that it wouldn’t affect Morisky’s copyright of the instrument. However, he noted, the news might be alarming to the drug companies that have cited the 2008 paper in their research about medication adherence for their products. He told us:
I know of at least 100 articles by PHARMA for BIG clinical trials for medications that used the article as the evidence basis for using the MMAS-8 to measure adherence. This is huge.
Meanwhile, Ortiz is not finished looking into Morisky’s work.
Wonders never cease. This is the best news I’ve read on RW in some time. Granted, the paper wasn’t retracted for Morisky’s unethical behavior: failing to declare his substantial financial self-interest in the paper and then running a shakedown operation on the unsuspecting. Still, good news.
A couple of interesting remaining questions. First, will the retracted paper continue to draw citations and use? My prediction is yes. The fact that Morisky was still trapping unsuspecting authors years after the perils of using the MMAS were widely known suggests more will be trapped in his tattered web. After all, how many bother to check if any of their literature cited has corrections or retractions? If only there were a database and apps 🙂
Second, why does it take two years to even publish a letter? The EIC just thought if they ignored it, it would go away? They sent it to Morisky for response and got none, so it went off into the ether? Wiley’s scientific integrity office got wind of it and stepped up? Kudos to all the Michael Ortiz types who don’t let editorial lassitude have the last word.
The moral lesson is that in your research career you should remain humble and not attack other people and demand retractions. People who live in glass houses shouldn’t throw stones.
Since his tool has been proven not to work, is Morisky refunding the licensing fees he charged?
1. Congratulations on your persistence in shooting Morisky’s mercenary assessment scale down in flames (finally.) The fact that he used bad statistics to support his money-making venture proves that his motives were evil from the beginning– maybe even, dare I say, narcissistic? What is more, I second K’s motion that the licensing fees should be refunded for a tool that gives no better than chance confidence in patients’ compliance. This tool, while free for clinical use, is a waste of time– which is limited for practicing clinicians, not to mention retired ones.
2. When you send your daily emails can you please indicate for each link which ones go to your original writings? I would much rather read those in the limited time I have left than to read a bloated, self-interested, paying business-soliciting article by a lawyer’s group like the one that ran recently. Frankly, I don’t give a damn what the lawyer says, I still think it was appropriate for Stanford University to run their own investigation of their own president.
3. There is a bug in your cell-phone version of your reply protocol that prevents replying on the cell phone. It appears that the box to check one’s acknowledgement of the privacy policy is missing. On the plus side, while searching for the checkbox, I finally read the privacy notice.
Re #3: On my cell phone, the micro dot that looks like a degree sign in front of ‘By using….” actually is clickable. Took me some time to figure that one out.
Got it. PS the box is pretty small on the desktop version too.
By the way, I abandoned the phone version of my comment when I got frustrated after pecking out a long, critical rant– so this one is significantly different; there’s even a third, more evolved version that I sent privately to Ivan. I finally have that “abundant leisure time” on my hands that they ironically told me about when I was a first-year medical student.
It didn’t occur to me until too late that Morisky might sue me for calling him a narcissist… I mean, a really GOOD narcissist would sue, right?
I know a good defense to such a suit would be that I’m merely expressing an opinion under my 1st Amendment… but my former clinical position would make it more actionable, not to mention tempting because I might have a lot of money. Imagine his dismay when I reply (in pro per) that I can’t even afford a lawyer!
Thank you for your comments.
I have to declare a conflict of interest in that I am a member of the editorial boards of four Journals and review around one article a month.
This experience has shown me that the peer review process is broken. The good news is that no matter how hard some authors try, it is impossible to get away with data manipulation unless there is collusion or obfuscation.
I have just found some new techniques to explore authorship manipulation. I was astounded at how many authorships were gifted or guest and how many citations were self citations by some unethical authors.
There are a different set of strategies used by manufacturers to manipulate their study findings. Last but not least, editors need to be more responsible for the quality of the contents in their journals.
Mr. Marcus:
The statistical methodology used by M. Ortiz challenging the specificity and sensitivity measures of our seminal 2008 clinical study is seriously flawed and downright misleading. Further, the Journal editors mismanaged this issue by publishing a one-sided presentation of Ortiz’s claims, ignoring its own protocol of a side-by-side rebuttal of claims. Adding insult to injury, the Journal claimed we never responded, when we have numerous emails to disprove that claim.
This is the crux of Ortiz’s failed analysis: Our original methodology in constructing the scale used raw patient data as continuous variables (e.g., blood pressure control, time (BP readings at 3- and 6-month intervals), responses to the 8-item questionnaire, etc.) with values ranging from 0 to 8. M. Ortiz did not use this important aspect in his analysis but collapsed the raw scale into a binary yes/no outcome, thus losing critical information from the raw data, while at the same time falsely claiming it to be the same as the original MMAS-8 scale. Ortiz’s binary yes/no outcome in no way captures the essence of the construct validity and reliability of the 8-item scale, which showed, among other things, that patients with high adherence were significantly more likely to have lower morbidity and mortality rates. This finding has been proven clinically true in various clinical trials and studies, and is in fact now being used in medication reconciliation of polypharmacy.
If Ortiz’s claims were true, the utility of the MMAS-8 scale as a psychometric medication adherence scale, and now for medication reconciliation in polypharmacy, would have collapsed on its faulty foundation long ago. Instead, this scale continues to grow in popularity for use in treatment care for the good of patients. Since its publication in 2008, 15 years ago, the MMAS-8 scale has been clinically validated in hundreds and hundreds of clinical studies conducted around the world and published in scientific journals, including high-tier journals such as, recently, the New England Journal of Medicine:
Castellano JM et al. DOI: 10.1056/NEJMoa2208275
We can prove Ortiz’s claims wrong in any setting, as we can Redaction Watch’s misleading blog promoting Ortiz’s flawed analysis: in the court of public opinion, or in the court of law.
This issue is far from over. The Journal’s mismanagement of our responses/rebuttals and its false claims that we never responded will be appealed because of scientific, business, and legal ramifications. Retraction Watch and other blogs will be reversing Ortiz’s faulty analysis.
Thank you for publishing this rebuttal in your blog.
Donald E. Morisky, ScD, ScM, MSPH
President, MMAR, LLC
Dear Professor Morisky,
I was looking through my files recently and I uncovered a presentation by you at the American Public Health Association 133rd Annual Meeting & Exposition on December 10-14, 2005. 4333.0: Tuesday, December 13, 2005 – 5:24 PM.
You reported on a study that examined medication adherence among 1,367 African American (75%) and Hispanic American (25%) patients. An 8-item medication adherence scale was developed for the study. The eight-item medication adherence scale was found to have a reliability of 0.83. By using receiver operating characteristic (ROC) curve analysis, the sensitivity of the measure was estimated to be 83%, and the specificity was 70%. The medication adherence measure was found to have good concurrent and predictive validity. A total of 74% of individuals scoring high on the adherence measure had their blood pressure under control compared to 48% of individuals scoring low (p < 0.001).
Can you explain how the values reported in 2005 are different from those reported in 2008?
You claimed the S&S values were 93% and 53% in 2008.
Can you provide the ROC that you used to estimate the sensitivity value of 83% and the specificity value of 70%?
This extortionate scientific mafia racketeer has made millions by running a shake-down using this tool. When will those he extorted sue him? Is he still employed?
Hello Professor Morisky.
Remind me: your published sensitivity was 93% and your specificity was 53%, but I could not see your detailed method or calculations anywhere.
Now you want to talk about your methods and forget about the numbers. Let’s talk about the numbers. All my numbers are transparent and my methods are clearly defined.
You only need to provide three things: an ROC curve showing (1) the 93% sensitivity and (2) the 53% specificity at the cutoff score of 6, and (3) the “c” statistic of 0.80. If you really analysed your data using the method described by Dr Ang, then you should have these numbers.
Using the 3 × 2 percentage data in Table 1 of your publication to calculate the number of people in each medication adherence category (page 4 of the text) and blood pressure control (Table 4 on page 13), it is possible to reconstruct the study patient numbers by category. These values: 295, 144; 486, 442, are shown in the 2 × 2 table of Table 1. These data were used to calculate the S&S (calcS&S) values of 38% [95%CI: 34–41%] and 75% [72–79%], respectively, with an accuracy of 53% [51.3–56.5%].
These values are remarkably similar to those you published in the 2017 systematic review of the accuracy of the MMAS-8 (pooled S&S of 43% [33–53%] and 73% [68–78%], respectively). I hope you can still remember the article by Moon et al (2017), which includes yourself as the last author.
I have collapsed the 3 x 2 table into a 2 x 2 table which is how Moon et al 2017 analysed sensitivity and specificity.
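For readers who want to check the arithmetic, here is a minimal sketch (in Python) using the reconstructed counts quoted above, treating low adherence as the “positive” screen for uncontrolled blood pressure; the variable names are illustrative, not taken from the paper:

```python
# Reconstructed 2 x 2 counts quoted above (cut point 6): rows are low vs
# medium+high adherence, columns are uncontrolled vs controlled BP.
low_uncontrolled, low_controlled = 295, 144
medhigh_uncontrolled, medhigh_controlled = 486, 442
total = low_uncontrolled + low_controlled + medhigh_uncontrolled + medhigh_controlled  # 1367

# Low adherence is treated as a positive screen for uncontrolled BP.
sensitivity = low_uncontrolled / (low_uncontrolled + medhigh_uncontrolled)   # ~0.38
specificity = medhigh_controlled / (low_controlled + medhigh_controlled)     # ~0.75
accuracy = (low_uncontrolled + medhigh_controlled) / total                   # ~0.54 (the ~53% quoted above)

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, accuracy {accuracy:.0%}")
```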
These issues can be easily resolved by producing your ROC and c statistic! Show us how you calculated the 93% and 53% or have you made an error that you can correct?
It doesn’t really matter what numbers I found. All that matters are the numbers you published and where they came from!
Let your numbers do your talking.
Dr Morisky – did you read this study ?
This was an unblinded RCT of 2500 patients with a history of MI, taking a polypill for secondary prevention or receiving usual care.
They were aged more than 65 years
After 6 months, less than 10% of patients in both treatment arms were classified as non-adherent (<6) by the MMAS-8, and less than 5% were non-adherent in both arms at 2 years. There was no baseline measure of adherence.
Your 2008 study compared:
low (< 6) vs Medium + high (6 to 8).
The Polypill study compared:
high (8) vs low or medium adherence (<8)
I am sorry to say that you made a serious error when you compared the 2022 Polypill study with your 2008 study. This is because medication adherence was defined differently: they chose a cutoff of 8 instead of the cutoff of 6 that you used.
Please check your numbers next time.
Article was Castellano JM et al. DOI: 10.1056/NEJMoa2208275
Correction:
The Polypill study compared:
high (8) vs low (<6) or medium adherence (6 to <8)
You did want to comment to Retraction Watch rather than "Redaction" Watch?
Note from RW: We have reason to believe this comment was sent by the same person who used other names in this thread.
Dr. Ortiz:
Attempting to collapse a 3×2 configuration into a 2×2 configuration with only two intervals when analyzing the MMAS-8 scores is not advisable. Such an approach risks oversimplification of the data and potential loss of vital information. By maintaining the original structure, you ensure a more accurate representation of the diverse adherence behaviors present within distinct contexts. This decision fosters a nuanced understanding of medication adherence patterns, allowing for targeted interventions and comprehensive insights that might otherwise be obscured by reductionist categorization.
Embracing the complexity of the 3×2 configuration preserves the richness of the data, facilitating a more meaningful exploration of adherence dynamics and contributing to more informed decision-making processes.
Running a logistic regression analysis that controls for demographic variables and computing sensitivity and specificity is a comprehensive way to explore the relationship between MMAS-8 scores, demographics, and adherence outcomes. Let me help you.
Organize your dataset, ensuring it includes MMAS-8 scores, demographic variables (e.g., age, gender), and an adherence indicator (1 for adherent, 0 for non-adherent).
Variable Coding:
Encode categorical demographic variables using dummy variables (0s and 1s) for regression analysis.
Model Building:
Choose the demographic variables to include in the model based on theoretical and empirical considerations.
Run a logistic regression model using appropriate statistical software (e.g., R, Python with libraries like scikit-learn or statsmodels).
Calculate sensitivity (true positive rate) and specificity (true negative rate)
These metrics quantify the model’s ability to correctly identify adherent and non-adherent individuals.
Interpret the coefficients of demographic variables to understand their impact on the odds of adherence.
Interpret the odds ratios associated with variables in the model.
Model Evaluation:
Assess the model’s goodness-of-fit using relevant statistics (e.g., likelihood ratio test, AIC, BIC).
Consider model performance through techniques like cross-validation.
Report the logistic regression results, including significance of variables, odds ratios, and insights gained.
Present the computed sensitivity and specificity values along with their implications for the model’s predictive power.
Interpret the sensitivity and specificity within the clinical context, considering the consequences of false positives and false negatives.
Remember that logistic regression assumes certain assumptions and considerations, such as multicollinearity and potential confounding. Additionally, discussing the clinical relevance of the findings and involving healthcare professionals can provide valuable insights into the practical implications of the analysis. Consulting with a statistician or an expert in logistic regression can help ensure the accuracy and validity of your analysis. I would not consider yourself an expert by any means.
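For illustration only, here is a minimal sketch (in Python, with pandas and statsmodels) of the workflow outlined above; the column names mmas8_score, age, gender and adherent are hypothetical stand-ins, not variables from the original study:

```python
# Minimal sketch of the logistic-regression workflow described above.
# Column names (mmas8_score, age, gender, adherent) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_adherence_model(df: pd.DataFrame):
    # Dummy-code the categorical demographic variable (0/1 columns).
    X = pd.get_dummies(df[["mmas8_score", "age", "gender"]],
                       columns=["gender"], drop_first=True, dtype=float)
    X = sm.add_constant(X)                 # add the intercept term
    y = df["adherent"]                     # 1 = adherent, 0 = non-adherent

    result = sm.Logit(y, X).fit(disp=0)    # maximum-likelihood fit
    odds_ratios = np.exp(result.params)    # effect of each variable on the odds

    # Classify at a default 0.5 predicted-probability threshold, then compute
    # sensitivity (true positive rate) and specificity (true negative rate).
    pred = (result.predict(X) >= 0.5).astype(int)
    tp = int(((pred == 1) & (y == 1)).sum())
    tn = int(((pred == 0) & (y == 0)).sum())
    fp = int(((pred == 1) & (y == 0)).sum())
    fn = int(((pred == 0) & (y == 1)).sum())
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return result, odds_ratios, sensitivity, specificity
```

The 0.5 threshold is only a default; as the comment above notes, the cut point and the resulting sensitivity and specificity should be weighed against the clinical costs of false positives and false negatives.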
I pray for you, sir.
Thank you for your advice.
I like your advice but it should be directed to the authors of the article. I have taken the time to consult with a highly regarded UK Statistician in addition to NZ, US and Australian colleagues.
The authors need to provide a ROC Curve and a c statistic to explain their results. See how it was done by Professor Morisky in 2009
New medication adherence scale versus pharmacy fill …
Krousel-Wood, Islam, Webber, Re, Morisky & Muntner. Center for Health Research, Ochsner Clinic Foundation.
The problem is that Krousel-Wood et al (2009) get a much lower cutoff, a lower sensitivity, a higher specificity, and a lower c statistic with their MMAS-8 study than those claimed by Morisky et al (2008), and, to add insult to injury, the MMAS-4 outperformed the MMAS-8.
So Professor Morisky knows how to generate a ROC?
So you can keep criticising my skills, but it is not about my skills; it is all about your numbers and whether they can be verified. You are all talk and no numbers!
Hello Xiao Li
I took your advice and I prayed that I would find how the authors had used a ROC curve in their study. My prayers were answered when I found a poster by the same four authors: Donald Morisky, Alfonso Ang, Harry J. Ward and Marie Krousel-Wood. Assessing and Improving Medication Adherence among African Americans, APHA 2005, Philadelphia.
This study examined medication adherence among 1,367 African American (75%) and Hispanic American (25%) patients recruited during regularly scheduled appointments at a large medical center as part of a four-year community clinical trial. To my surprise it was the medical center that I went to when I lived in New Orleans. What a coincidence!!
The eight-item medication adherence scale was found to have a reliability of 0.83. By using receiver operating characteristic (ROC) curve analysis, the sensitivity of the measure was estimated to be 83%, and the specificity was 70%. A total of 74% of individuals scoring high on the adherence measure had their blood pressure under control compared to 48% of individuals scoring low (p < 0.001).
The retracted study: Donald Morisky, Alfonso Ang, Marie Krousel-Wood, and Harry J. Ward. Predictive Validity of a Medication Adherence Measure in an Outpatient Setting. J Clin Hypertens 2008.
The characteristics of the 1367 participants in the study: mean age of 52.5 years, with 61.5% older than 50 years, 40.8% male, 76.5% black. Highly adherent patients were identified with the score of 8 on the MMAS-8 scale, medium adherers with a score of 6 to <8, and low adherers with a score of <6. Using these cutpoints, this study population had 32.1% low adherers, 52.0% medium adherers, and 15.9% high adherers.
Correct classification with blood pressure control was based on a dichotomous low versus high/medium level of adherence, which had a rate of 80.3%. Sensitivity and specificity of the 8-item scale were reported to be 93% and 53%, respectively.
A total of 56.7% of individuals scoring high on the adherence measure had their blood pressure under control compared to 32.8% of individuals scoring low (p < 0.05).
Unfortunately, the same 1367 patients, reported on 3 years later, had markedly different S&S values and rates of BP control:
2005 poster: S&S 83% and 70%; BP under control in 74% of high adherers vs 48% of low adherers.
Morisky et al 2008, J Clin Hypertension: S&S 93% and 53%; BP under control in 56.7% of high adherers vs 32.8% of low adherers.
I hope that Professor Krousel-Wood, as the Principal Investigator, can explain this dramatic improvement in the S&S values between the poster and the published article.
I look forward to her explanation.
I also found an ROC curve which compared the MMAS-8 with the MMAS-4, and it appears that the MMAS-4 may be more accurate than the MMAS-8 in screening for non-adherence.
My reading of the documents suggests that a 9-item MMAS was used in this study and one item was excluded from the analysis to arrive at the MMAS-8. I hope that Professor Krousel-Wood can resolve my concerns.
Note from RW: We have reason to believe this comment was sent by the same person who used the name “Xiao Li.”
Hi doctor Ortiz. I have a question for you.
Why would you reference the Moon et al 2017 meta-analysis article when it includes diabetes, osteoporosis, myocardial infarction and other diseases (64.3%) with only 34.5% on hypertension- LESS THAN HALF? It shouldn’t be used as a basis to compare hypertension patients aside from the fact that the sample reporting S&S for hypertension in the study is small. That can be considered unethical and misleading.
Thank you for your comment about ethics. I would rather let the reader decide what is ethical or not.
Your argument may have some merit if the use of the MMAS-8 was restricted to Blood Pressure Control studies in Black Americans with Hypertension.
Since Professor Morisky was the last and the most senior author, then he endorses the result as it reflects the use of the MMAS-8 with a wide range of medications.
Unfortunately, Moon et al derived almost identical S&S values to the values I found when I recalculated S&S from first principles.
The publication described a “sensitivity of 93%” with a specificity of 53% (pubS&S) and an accuracy of 80% that “indicates that the scale is good at identifying patients who have low medication adherence and have uncontrolled blood pressure”.
Using the 3 × 2 percentage data in Table 1 to calculate the number of people in each medication adherence category (page 4 of the text) and blood pressure control (Table 4 on page 13), it is possible to reconstruct the study patient numbers by category. These values: 295, 144; 486, 442, are shown in the 2 × 2 table of Table 1. These data were used to calculate the S&S (calcS&S) values of 38% [95%CI: 34–41%] and 75% [72–79%], respectively, with an accuracy of 53% [51.3–56.5%].
These values are remarkably similar to those published in a 2017 systematic review of the accuracy of the MMAS-8 (pooled S&S of 43% [33–53%] and 73% [68–78%], respectively).
If my values are incorrect then provide a ROC and the c statistic so I can correct them.
Note from RW: We have reason to believe this comment was sent by the same person who used the name “Xiao Li.”
I actually agree with Xiao Li.
Opting for a logistic regression model to analyze MMAS-8 scores while considering demographic variables, instead of collapsing a 3×2 model, offers a more sophisticated and informative approach. By employing logistic regression, we can retain the inherent richness and complexity of the data contained in the 3×2 configuration. This allows us to explore how different demographic factors interact with adherence behaviors, avoiding the oversimplification that collapsing entails.
Unlike collapsing, where data is combined into fewer categories, the logistic regression model accommodates the inherent variability within each subgroup. This means we can examine how adherence behaviors vary across different demographic groups, potentially uncovering significant interactions that might otherwise be obscured. The results can unveil subtle trends and relationships that offer nuanced insights into how different factors contribute to medication adherence.
Furthermore, using logistic regression permits the inclusion of multiple demographic variables simultaneously, thus capturing their joint effects on adherence outcomes. This approach acknowledges that demographic factors often work in tandem to influence patient behaviors, providing a more realistic representation of the complex interplay at play.
The advantages of this approach extend to predictive accuracy as well. Logistic regression models can provide probabilities of adherence based on demographic variables, offering a continuous and granular prediction rather than just classifying patients into broad categories. This nuanced prediction can guide personalized interventions and treatment plans, ensuring more targeted and effective adherence support.
In essence, the logistic regression model for MMAS-8 scores with demographic variables represents a methodological leap forward compared to collapsing. It preserves the authenticity of the data, uncovers intricate relationships, and facilitates more precise predictions and interventions—ultimately advancing our understanding of medication adherence and enhancing patient care. Considering Michael Ortiz et al defined MPR as the proportion of prescribed medication actually consumed by patients persisting with treatment in his 2008 publication, I am not confident in his assessment.
Hello Geoffroy, the reference to my study is below. It was a medication persistence study, and you chose to quote an adherence measure (MPR). It seems to me that you made a very basic mistake when you confused adherence with persistence. The MMAS-8 combines adherence and persistence but divides adherence into intentional and unintentional adherence. But you knew that already!
Simons, L, Ortiz, M & Calcino, G.
Persistence with anti-hypertensive medication: Australia-wide experience 2004-2006
M.J.A. 188 (4): 224-227. 2008 Impact factor 6.11
Morisky et al 2008. The Journal of Clinical Hypertension Volume 10, Issue 5 p. 348-354 Impact factor 2.88
Marie Krousel-Wood, Cara Joyce, Elizabeth W. Holt, Emily B. Levitan, Adriana Dornelles, Larry S. Webber, and Paul Muntner. Development and Evaluation of a Self-Report Tool to Predict Low Pharmacy Refill Adherence in Elderly Patients with Uncontrolled Hypertension. Pharmacotherapy. 2013 August; 33(8): 798–811. doi:10.1002/phar.127
Figure 1: Example of a ROC curve for MMAS-8 and MMAS-4
The MMAS 4-item tool showed comparable discrimination when compared with existing longer tools (p=0.110 for the difference in AUC of the Hill-Bone Compliance Scale vs the 4-item tool, and p=0.201 for the difference in AUC of the MMAS-8 vs the 4-item tool; Figure 1). The difference in C statistic by race was notable for the 4-item tool.
S&S values calculated from Morisky et al 2008 (Table 1) were similar to Hill-Bone
Table 1: S&S values from the 2 × 2 matrices using cut points of 6 and 8 (Morisky et al 2008)

Cut point = 8
MMAS-8       No Control   Control   Total
Low + Med         687         463    1150
High               94         123     217
Total             781         586    1367
Sensitivity 88%, specificity 21%, accuracy 59%, AUC 0.545

Cut point = 6
MMAS-8       No Control   Control   Total
Low               295         144     439
Med + High        486         442     928
Total             781         586    1367
Sensitivity 38%, specificity 75%, accuracy 54%, AUC 0.565
ROC curve generated from the Morisky et al 2008 S&S values in Table 1. Accuracy was less than 60% for both cut points and much lower than the 80% claimed in Morisky et al (2008).
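As an aside, the AUC column in Table 1 above appears to equal (sensitivity + specificity)/2, which is the area under the ROC polygon drawn through a single operating point; here is a short sketch of that check, under the assumption that this is how the table's AUC was derived:

```python
# Check: the AUC values in Table 1 above match (sensitivity + specificity) / 2,
# the area under the ROC polygon (0,0) -> (1 - spec, sens) -> (1,1) for one cut point.
def single_cutpoint_stats(pos_uncontrolled, pos_controlled, neg_uncontrolled, neg_controlled):
    # "pos" is the adherence band treated as a positive screen for uncontrolled BP.
    sens = pos_uncontrolled / (pos_uncontrolled + neg_uncontrolled)
    spec = neg_controlled / (pos_controlled + neg_controlled)
    return round(sens, 2), round(spec, 2), round((sens + spec) / 2, 3)

print(single_cutpoint_stats(687, 463, 94, 123))   # cut point 8: (0.88, 0.21, 0.545)
print(single_cutpoint_stats(295, 144, 486, 442))  # cut point 6: (0.38, 0.75, 0.566); the 0.565 above uses the rounded 0.38 and 0.75
```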
I know I am late to the celebration but I found the multiple personalities (Morisky, Li, Arriagada, Legarde) to be entertaining. It is clear that the 2008 article used the 2×2 table referred to by Ortiz: “Correct classification with blood pressure control was based on a dichotomous low versus high/medium level of adherence…Sensitivity and specificity of the 8-item scale were …,” but the statistics reported by Morisky et al. don’t fit what is reported in the rest of the 2008 article that made it possible for Ortiz to reconstruct the 2×2 cell frequencies.
Ron Hays is a Professor of Medicine and a Professor of Health Policy at the UCLA Fielding School of Public Health. Kudos to Professor Hays for informing this publication that, as Dr. Ortiz found, the statistics reported by Morisky et al. don’t fit what is reported in the rest of the 2008 article, which made it possible for Ortiz to reconstruct the 2×2 cell frequencies.
Hi, isn’t this person Steven Trubow the same one who tried to sue the Charite and lost?
https://www.robinskaplan.com/resources/publications/2022/12/lessons-from-mmas-research-about-dispositive-pitfalls-in-copyright-litigation
Hello everyone,
I just recently found out about this whole ordeal. I am deeply concerned about all the research that has been conducted based on the MMAS-8. What is going to happen to it? Is it invalid now? Should we not trust those results either? For example, the validation studies carried out in other countries and for other diseases aside from high blood pressure.
I know of someone who just finished collecting their data regarding medication adherence and used the MMAS-8, and is about to start the analysis, but they were not aware of the retraction of the 2008 paper. Should they stop the research? Will their results be invalid, or could they cite the validation study conducted for the specific country they are working in?
I would very much appreciate guidance in this matter, since it’s the first time I’ve come across a situation like this.