In July of 2017, Mohamed Rezk, of the department of obstetrics and gynecology at Menoufia University in Egypt, submitted a manuscript to the journal Anesthesia with a colleague.
The manuscript, “Analgesic and antiemetic effect of Intraperitoneal magnesium sulfate in laparoscopic salpingectomy: a randomized controlled trial,” caught the attention of John Carlisle, an editor at the journal whose name will be familiar to Retraction Watch readers as the sleuth whose statistical analyses have identified hundreds of papers with implausible clinical trial data.
The baseline data appeared unremarkable, Carlisle told us, but the same wasn’t true of the outcomes data. Of 24 values that could have been odd or even numbers, all of them were even.
The probability of that was “0.000000000000000000000 something,” Carlisle said.
Days after receiving the submission, Carlisle emailed Rezk and asked to see individual patient data from the trial. Rezk responded the same day that his co-author had the data and was out of the country and unreachable for three months. Carlisle emailed the co-author and didn’t hear back.
Three months later, in October, Carlisle followed up with Rezk and asked for the data again. Rezk replied the same day and asked to withdraw the manuscript.
After discussing the matter with Andrew Klein, Anesthesia’s editor-in-chief, Carlisle refused. He wrote to Rezk: “We will retain your manuscript for consideration, during which you may not submit it to another journal. We will contact your employers to ask for individual patient data and evidence of ethical approval.”
Klein emailed the head of Rezk’s institution a couple of days later, but never received a reply.
Carlisle and another sleuth, John Loadsman, looked at some of Rezk’s other published trials and found two with results values that were all multiples of 5 or 10, which Loadsman later posted to PubPeer.
The two Anesthesia editors seem to have kept the questionable paper out of the literature. In the meantime, retractions and expressions of concern have begun mounting for Rezk, and a systematic analysis of his work by other sleuths found “prima facie evidence of data fabrication.”
In the years that have passed, Rezk and his co-author do not appear to have published a paper with the same title as their submission to Anesthesia. Carlisle told us: “I do not know why Dr Rezk did not submit this trial to another journal. Authors of trials we think were fabricated have usually published their papers elsewhere.”
Years later, beginning in October 2021, some of Rezk’s published papers began getting flagged with expressions of concern and retracted, due to the work of data sleuths including Ben Mol, an ob-gyn researcher at Monash University in Australia.
After noticing two unusual randomized controlled trials on which Rezk was the lead author, Mol and some colleagues analyzed all of his publications describing clinical studies.
Jim Thornton, an ob-gyn researcher at the University of Nottingham, posted some observations on PubPeer, and the group described their findings in a preprint, first reported by Jezebel, that has been accepted with minor revisions at Archives of Gynecology and Obstetrics:
Dr Rezk authored 51 studies, 17 RCTs and 34 cohort studies. Two pairs of RCTs (four trials) showed extensive data copying of baseline and outcome data. Another set of four trials and two cohort studies each recruited identical patients from the same hospital over overlapping time periods. The reported recruitment rates in two of those RCTs were implausible, and there were frequent examples of identical baseline data between the same two RCTs and a third trial. In 15 of the trials, we were able to compare the number of participants allocated to each group. In two, the method of randomisation (shuffled cards) would result in exactly equal-sized groups. In eight of the other thirteen, exactly equal-sized groups were achieved, and in two further trials, differential loss to follow-up led to exactly equal-sized groups for analysis.
Nineteen of 34 cohort studies were reported to be prospective or to include a prospective component, but 11 of these were received by the journal before the last participant could have been followed up. One cohort reported a biologically implausible rate of disease and another an implausible recruitment rate. Two cohorts of women with hypertension in pregnancy reported identical summary statistics on multiple occasions in the tables displaying baseline characteristics. Two other cohorts of women with rheumatic heart disease in pregnancy, with identical recruitment criteria and overlapping recruitment periods, reported implausible differences in baseline BMI and neonatal mortality. Finally the probability of observing the excess of even numbered categorical variables reported in Dr Rezk’s papers overall is infinitesimal.
Mol and colleagues concluded that an investigation was warranted, and Rezk’s papers should be marked with expressions of concern in the meantime:
Our assessment of the work of Dr Rezk shows prima facie evidence of data fabrication. We call for an investigation of these studies, including assessment and re-analysis of the original data. Until then, the studies of Dr Rezk should neither directly nor through meta-analysis be used to inform clinical practice.
In an Aug. 31st email to editors and publishers of various journals, Mol requested that they mark every paper by Rezk and other authors at his institution with expressions of concern by the end of September, then give the authors a chance to defend their work. If, after two months, they could not prove their data were trustworthy, the papers should be retracted, Mol wrote:
Any further delay is unacceptable to us. While we appreciate the time and effort that each of you individually invests in this process, I think we all agree that the process is flawed and puts patients at risk.
This cannot continue this way; we trust you understand that.
To all of the editors and publishers, if, for whatever reason, you decide to keep these papers unretracted and without any warning out there, we expect that you explain to the academic community why you do that.
Four of Rezk’s papers have been retracted and eight have expressions of concern, with a total of nearly 100 citations for the 12 articles.
Eight of the flagged papers were published in Taylor & Francis journals, and the expressions of concern and retraction notices were nearly identical – and vague. One representative notice:
We, the Editors and Publisher of The European Journal of Contraception & Reproductive Health Care, have retracted the following article:
Mohamed Rezk, Tarek Sayyed, Alaa Masood & Ragab Dawood (2017) Risk of bacterial vaginosis, Trichomonas vaginalis and Candida albicans infection among new users of combined hormonal contraception vs LNG-IUS, The European Journal of Contraception & Reproductive Health Care, 22:5, 344–348, DOI: 10.1080/13625187.2017.1365835
Since publication, significant concerns have been raised about the integrity of the data and reported results in the article. When approached for an explanation, the authors have been unable to address the concerns raised and have not been able to provide their original data or sufficient supporting information. As verifying the validity of published work is core to the integrity of the scholarly record, we are therefore retracting the article. The corresponding author listed in this publication has been informed. The authors do not agree with the retraction.
We have been informed in our decision-making by our policy on publishing ethics and integrity and the COPE guidelines on retractions.
The retracted article will remain online to maintain the scholarly record, but it will be digitally watermarked on each page as ‘Retracted’.
The retraction notice for “Nicorandil versus nifedipine for the treatment of preterm labour: A randomized clinical trial,” which was published in the European Journal of Obstetrics & Gynecology and Reproductive Biology in 2015 and retracted on August 10, went into more specifics:
This article has been retracted: please see Elsevier Policy on Article Withdrawal (https://www.elsevier.com/about/our-business/policies/article-withdrawal).
This article has been retracted at the request of the Editor-in-Chief.
The editors were alerted to the following concerning features of this trial:
The submission date is impossible. Patients were recruited at 24 to 34 weeks (mean 31 w). 18 % of participants delivered after 37 weeks. Average recruitment 26 per month. Recruitment ended September 2014 but the paper was received by journal on 23 October 2014.
The second author, Sayyed T, is co-author of related retracted papers in BJOG.
In view of these concerns we wrote to Dr Rezk who had no satisfactory explanation and declined to share the data. We have therefore decided to retract.
Some of Rezk’s articles with allegedly fabricated data have been included in reviews that influence patient care, Jezebel reported. Mol told the website:
We’re talking about families who lose their mother, who lose their baby because of this problem.
I hate to be pedantic, but the odds of 24 numbers all being even (assuming even and odd are equally likely) are 0.00000005960464477539
Roughly.
… 5390625
Or, more humanly, about one in 16 million. That way you don’t have to spend time counting zeros beyond what you can scan at a glance.
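For anyone who would rather compute than count zeros, the figure above is just (1/2)^24 under the commenters’ equal-parity assumption. A minimal check in Python, using exact rational arithmetic:

```python
from fractions import Fraction

# Probability that all 24 values are even, assuming each value is
# independently even or odd with probability 1/2.
p = Fraction(1, 2) ** 24

print(p)         # 1/16777216
print(float(p))  # ~5.96e-08, i.e. the 0.0000000596... quoted above
print(f"about 1 in {p.denominator:,}")  # about 1 in 16,777,216
```

Using `Fraction` keeps the value exact, so the denominator 16,777,216 (2^24) falls out directly rather than being inferred from a rounded decimal.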
When you put it like that, the paper doesn’t even seem that bad. Consider that there are roughly a million additions to PubMed each year, and each paper has multiple tables (on average). A 1-in-16 million event probably occurs naturally every year or two.
Perhaps this could happen across the universe of scientific literature, but that is too much of a “coincidence” when it happens to multiple papers from a single lab.
Yes, there is plenty of evidence that these papers are not reliable. But Carlisle damages his credibility when he misreports the probability of something occurring. The simple facts would have been sufficient; when making a serious accusation of misconduct, there is no place for hyperbole.
Hi, I’m a PubMed-curating librarian and data analyst, and I’m curious: where are you getting your data that there are multiple tables per article? Not that it doesn’t sound plausible–I’m just curious, because I would like to know if there is an easy way to get those figures!
As long as we are being recreationally pedantic about the odds of the case, I think it’s only fair to point out that there are a lot of articles published every year without ANY tables, including the majority of comments, case studies, editorials, letters, etc. Also, the papers with tables containing 24 outcome values are probably a smaller subset still. So I’d be surprised if this is truly a once-every-two-years occurrence.
It was just a Fermi estimate, so I didn’t get the figure from anywhere. I just made a–as you say–plausible guess. I suppose you could take a sample of papers from PubMed and get a true estimate of tables per paper.