Just 48 hours after publishing an article by Graham Cole and Darrel Francis last week alleging that Don Poldermans' scientific misconduct led to the deaths of some 800,000 Europeans over the past five years by tainting clinical guidelines, the European Heart Journal unceremoniously pulled the paper from its website Friday.
Larry Husten at CardioBrief has been on top of the story. According to Husten:
In a highly unusual move the editor of the European Heart Journal has removed the article by Cole and Francis from the journal. No notification or explanation appears on the website, though the headline is still present. I asked Thomas Lüscher, the EHJ editor, for an explanation. Here is the response I received:
Thank you for your mail. As the editor-in-chief of the Eur. Heart J. I have to inform you that this piece, although published online in CardioPulse, contains scientific statements that do require peer review. Unfortunately, this was bypassed by the handling editor and thus I had to act appropriately and correct this mistake.
The authors have been informed about this measure and will hear from us in the near future as soon as the reviews are in. This does not in any way preclude any decision on the article.
Thank you for your understanding, the administrative mistake is on our side. Please note that the Eur. Heart J. is not a newspaper and hence has to follow the outlined rules of peer review.
In response to a follow-up question, Lüscher added this:
I do hope that you understand that the EHJ is a high-impact journal with stringent peer review. CardioPulse also publishes non-scientific features on societies, countries and the like, which are exempt from that process. In this very case, however, as in some others we had in the past, I instructed the editor in charge to discuss with me first whether this needs peer review. Unfortunately, this slipped his attention. I strongly feel that this is required here, and one of the authors has already communicated his understanding for this. Peer review has nothing to do with censorship; in fact we do this with 3500 manuscripts per year.
In this case, it also appeared necessary as in the meantime also other articles have appeared on the topic. Furthermore, we cannot discuss this issue without referring to the ongoing process within the ESC.
Francis and Cole sent us this comment:
We are delighted to be invited to comment in Retraction Watch on a disappeared article that narrated the saga of the ESC guideline on perioperative beta blockade. Our only sadness is that the article was ours. By simply multiplying published numbers, we had fallen unknowingly into the trap of making a “scientific statement” which apparently triggered the temporary retraction.
Vigorously espousing reliable approaches to clinical research for examining effect sizes or correlations, our group are all fans of Retraction Watch. We develop high-precision clinical measurement techniques and try to design studies to resist bias. When we see reports that appear in error, we assist the “self-correcting” nature of science, for example by asking for clarification, suggesting improvements, or providing better estimates.
Ironically, while our efforts to have unreliable science retracted face seemingly insuperable hurdles (of which four we can show publicly), we have nevertheless somehow unknowingly stumbled upon the hidden trigger that explosively vanishes “scientific” material, leaving no trace.
Our short-lived articles
Our first EHJ article is only a narrative of events with a timeline and a figure to give context. It used basic arithmetic on 3 values in the public domain. It is certainly not a meta-analysis; that we published last year, under very careful peer review.
This first EHJ article explains that clinical research has vast capacity for good, or for harm. The danger arises when it is converted into guidelines: the “leverage of leadership”. We explained that for every estimate used, other estimates are possible. Even if the number of excess deaths is fewer than estimated, the scale of potential harm should never be forgotten.
Our second article moves on to how we can all act to improve integrity in clinical science. We focus not on the principal researchers, but on their many unwitting accomplices: co-authors who take an untimely vow of silence, co-workers knowing a trial to be nonexistent, universities desperate to proclaim “no patient was harmed”, teachers who forget that “focus invites fraud”, journal editors who feign impotence, and crucially readers who hurry past a catastrophe looking the other way.
Looking forward
The Editor in Chief very generously became personally involved in our articles, and has written us a very kind letter. Both articles are now undergoing thorough re-evaluation, because they were discovered to contain scientific information.
We are pinning our hopes on Prof Lüscher. His unique outspoken support of reliable medical research makes him Europe’s pre-eminent guardian of clinical scientific integrity. Our millions of patients can depend on him as an unimpeachable advocate for their safety.
The biggest irony here is that the EHJ article is no longer available while hundreds and hundreds of Poldermans’ articles remain unretracted and uninvestigated. Sad.
“Please note that the Eur. Heart J. is not a newspaper and hence has to follow the outlined rules of peer review.” What the EIC is rightly saying is that the rules of publishing ethics in journalism are radically different (in some aspects) from those in science publishing. He is issuing a veiled warning not to get the two confused, and he should be applauded for overriding the bad decision by his junior editorial management.

Regrettably, Bohannon and Science still don’t quite “get it” yet. Submitting false documents (in the form of data, text, or otherwise fabricated content), false names, or false e-mail addresses should be an ethical no-no (zero-tolerance policy) and should bar Bohannon from any future scientific publication, because he violated the first step of the publishing ethos: honesty. Stings and dishonest operations have no place in science, and even if they do reveal some information, they are best left to the newspapers and to murky journalistic waters. Very unfortunately, journalists could now begin to say that the scientific publishing waters are also starting to look equally murky, given the scandals being uncovered here at RW. Yet, for now, I suggest that we keep journalistic codes of conduct and publishing ethics separate from those used in science publishing.

In fact, I am surprised that Bohannon’s paper has not been retracted yet. The signal sent by Science is totally wrong. On the one hand they embrace scientists to put food on their table; on the other they allow their “journalists” to rag science using unethical means so as to draw applause from society and science’s critics. The publishers of Science do not help restore trust in science, but the swift actions by Thomas Lüscher do.
Claiming that anyone is responsible for 800,000 deaths is a big claim and deserves strong evidence before it is asserted. I am not surprised it has caused some reservations. If true, it would point to a structural weakness in the way clinical guidelines are formulated, beyond the failings of any one individual.
Indeed. “800,000 surgical patients were impacted by possibly flawed studies guiding their care” is a far cry from “800,000 people died”. The latter claim would require rigorous evidence.
FYI, here is what appears to be an English translation of the Dutch report. I only saw links to the Dutch version in previous RW posts.
http://cardiobrief.files.wordpress.com/2012/10/integrity-report-2012-10-english-translation.pdf
Dear fellow Retraction Watch fans,
Thank you for enquiring. Please feel free to contact us directly via our Imperial College emails. The hidden rapid-retraction system we mentioned seems to have been triggered by a single simple multiplication, even though it follows the instructions published by European experts for calculating the impact of intervention on numbers of perioperative deaths [Ref 1].
First, the annual number of surgeries in Europe: according to the same authors [Ref 2] (http://eurheartjsupp.oxfordjournals.org/content/11/suppl_A/A9.full), who are also authors of the current ESC guideline, it is 40 million.
Second, from the same expert group of authors [Ref 1], using “stable” data in their own European country, the mortality rate for patients undergoing surgery is 15,200/800,000, which is 1.9%.
Third, if the hazard ratio for initiating a therapy in high-risk surgery is 1.27 [Ref 3] (http://heart.bmj.com/content/early/2013/07/30/heartjnl-2013-304262.full), the proportion of deaths which would not have occurred if beta blockers had not been given is 0.27/1.27 = 0.21.
The potential number of patients harmed annually is
– the number of annual surgeries,
– multiplied by the mortality rate,
– multiplied by the proportion of deaths that would not have occurred if the treatment had not been given.
What annual figure do readers obtain? And for 5 years?
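For readers who want to check the arithmetic, here is a minimal Python sketch of the calculation described above. It simply multiplies the three published figures quoted in this comment (Refs 1 to 3); the variable names are illustrative, and rounding the attributable fraction to 0.21 follows the text.

```python
# Minimal sketch of the arithmetic described above. All three inputs are the
# published figures quoted in this comment (Refs 1-3); nothing here is new data.

annual_surgeries = 40_000_000        # annual surgeries in Europe [Ref 2]
mortality_rate = 15_200 / 800_000    # perioperative mortality, 1.9% [Ref 1]
hazard_ratio = 1.27                  # beta-blockade in high-risk surgery [Ref 3]

# Proportion of deaths that would not have occurred had the treatment not been
# given, rounded to 0.21 as in the text (0.27 / 1.27).
attributable_fraction = round((hazard_ratio - 1) / hazard_ratio, 2)

annual_excess = annual_surgeries * mortality_rate * attributable_fraction
print(f"Annual estimate: {annual_excess:,.0f}")         # 159,600
print(f"Five-year estimate: {5 * annual_excess:,.0f}")  # 798,000, roughly 800,000
```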
By using different data, other figures are of course possible. If the figure is above the acceptable level, then perhaps lessons could usefully be learned.
What level do readers consider acceptable?
References
1. Schouten O, Poldermans D, Visser L, et al. Fluvastatin and bisoprolol for the reduction of perioperative cardiac mortality and morbidity in high-risk patients undergoing non-cardiac surgery: rationale and design of the DECREASE-IV study. Am Heart J. 2004;148:1047–1052.
2. Poldermans D, Schouten O, Bax J, Winkel TA. Reducing cardiac risk in non-cardiac surgery: evidence from the DECREASE studies. Eur Heart J Suppl. 2009;11(suppl A):A9–A14.
3. Bouri S, Shun-Shin MJ, Cole GD, Mayet J, Francis DP. Meta-analysis of secure randomised controlled trials of β-blockade to prevent perioperative death in non-cardiac surgery. Heart. 2013 Jul 31. doi:10.1136/heartjnl-2013-304262.
I am not a medical researcher, but I have a few questions that could help average Joes like me understand this case more clearly. Where exactly is the cause of death determined? In the hospital or in the autopsy room? I ask because surely there must be an official record associated with every dead person that indicates clearly the cause of death.

In my line of study, if we do a theoretical study, on modeling for example, it is usually also based on, and accompanied by, real data sets to test the veracity and the fit of the theoretical aspect. Therefore, I would assume that the reason your paper was pulled may be that, even though the number 800,000 is, as you indicate, easily calculated from three equations, who is to say that more factors are not involved? I would hazard a criticism: if you are talking about the death of chickens or fish, then 800,000 is probably a drop in the ocean. But 800,000 humans in a culturally dense part of the world?

In my area of study, if someone retracted the paper because the value just sounded so incredible, I would immediately go out and look for 800,000 official documents (death certificates?) to validate my claims. Either that, or accept the rejection gracefully (even though disgruntled that people had not appreciated your simplified three-reference support). Would my logic be appropriate here, even though the suggestion of physically going out to verify the 800,000 may be practically impossible? But imagine you did this, in your passion as a scientist to prove your critics wrong and your theoretical data right. Who knows, you may even land a paper in Nature or Science with such a hotly debated topic.

Regarding dead animals, who will take responsibility for these massive deaths (http://www.end-times-prophecy.org/animal-deaths-birds-fish-end-times.html), and what possible man-made scientific studies could have resulted in this abomination? My advice: back-pack on and a trek to Europe’s mortuaries. (I should add that I have not read your original paper or Poldermans’ original work.)
Dr. Francis,
If there are 40 million surgeries in Europe annually, 1.9% all-cause mortality, and 21% excess mortality due to beta blockers, that would be 159,600 excess deaths per year or 800,000 excess deaths over 5 years.
However, your simple equations assume that all surgeries performed in Europe have the same risk of adverse cardiac outcome as the surgeries in the beta blocker trials. The trials were conducted in people “at high risk,” “at intermediate risk,” or “having major surgery.” What proportion of those 40 million surgeries are “major” and performed in “high risk” patients? (If, for example, 20% of surgeries are major and 50% of patients are high risk, your number drops 10-fold.)
Second, the confidence interval of your meta-analysis is barely significant (1.01 to 1.60). So conceivably, the true percentage of excess deaths could be as low as 0.9% (or as high as 37%). (If 40 million surgeries are performed annually, the total sample size of the meta-analysis, 5,264, is only about 0.01% of the annual total; is that enough for a valid sample?)
Third, while I respect your meta-analysis and understand the need for meta-analyses, it’s worth pointing out that of the 9 secure studies you included, 8 had confidence intervals that spanned the 1.0 line.
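To make the sensitivity of the headline figure concrete, here is a minimal sketch that re-runs the same arithmetic across the confidence-interval bounds quoted above (1.01 to 1.60), with and without restriction to a hypothetical high-risk subset. The 20% major-surgery and 50% high-risk proportions are only the illustrative split suggested in this comment, not trial data, and the function name is invented for the example.

```python
# Sensitivity sketch for the figures discussed in this comment. The hazard-ratio
# values are the point estimate and confidence-interval bounds quoted above; the
# 20% x 50% "applicable" split is a hypothetical illustration, not trial data.

annual_surgeries = 40_000_000
mortality_rate = 15_200 / 800_000     # ~1.9% perioperative mortality
hypothetical_subset = 0.20 * 0.50     # 20% major surgery x 50% high-risk patients

def annual_excess_deaths(hazard_ratio, applicable_fraction=1.0):
    """Annual excess deaths implied by a given hazard ratio, optionally
    restricted to the fraction of surgeries the trials actually resemble."""
    attributable = (hazard_ratio - 1) / hazard_ratio
    return annual_surgeries * applicable_fraction * mortality_rate * attributable

for hr in (1.01, 1.27, 1.60):         # lower bound, point estimate, upper bound
    print(f"HR {hr:.2f}: all surgeries {annual_excess_deaths(hr):>9,.0f}, "
          f"high-risk subset {annual_excess_deaths(hr, hypothetical_subset):>8,.0f}")
```

Under these illustrative assumptions the annual figure ranges from under a thousand to a few hundred thousand, which is the point being made here.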
So, taken all together, I understand why the editor felt that a higher level of review was needed before claiming almost a million excess deaths due to Poldermans’ misconduct.
Respectfully.
Is it just me, but doesn’t that reply by Francis and Cole in the post reek of satire?