It’s time to get serious about decreasing bias in the clinical literature. Here’s one way to do that.

Tom Jefferson

Recently, we wrote in STAT about the “research integrity czars” that some journals are hiring to catch misconduct and errors. But are there other ways that journals could ensure the integrity of the scientific record? Tom Jefferson, a physician, methods researcher, and campaigner for open clinical trial data, has a suggestion, which he explores in this guest post. (Jefferson’s disclosures are here.)

Readers of Retraction Watch know that the quality control mechanisms in the publication of science, chiefly editorial peer review, are not infallible. Peer review in biomedicine, in its current form and practice, is the direct descendant of the bedside consultation. In a consultation, the person or object under scrutiny (the patient, or the journal submission) is examined by the doctor (the editor), who decides on the best course of action. If unsure, the physician/editor may call on outside specialists (hospital consultants/referees) to help reach a final decision on the therapy and fate of the patient/submission.

Such a wonderfully genteel paradigm of scrutiny and scholarly activity cannot be expected to identify problems caused by the contemporary rampant commercialization of biomedical research and its dissemination. In fact, it does not: the system as designed does little, if anything, to detect these issues.

In the 1990s, pioneering editors like Drummond Rennie and Richard Smith inspired a period of study and assessment of editorial peer review in all its current forms. Those of us who looked at its conceptual basis, methods and practice found that the method had not been seriously developed and key aspects — such as what is intended by the word “quality” — remained unaddressed and unclear.

Since then there have been attempts at improving the practice and effectiveness of peer review, propelled by breakthroughs in technology, transparency, and sources of data. These efforts have gradually improved the process, probably aided by the stark fact that the cast of alternatives to peer review is rather thin and the number of credible alternatives even thinner. Examples of improvement are the contemporaneous publication of information underlying a clinical trial (such as its protocol), which enhances transparency and understanding of the trial, and open peer review, which promotes accountability.

A renewed focus on post-publication peer review favors the exchange of knowledge and open discussion. Both are the oxygen of science. The advent of electronic publishing has greatly sped up transactions, facilitating instant exchanges that once took months in the pre-web era of letters and faxes. Who remembers the weighty Index Medicus sitting on a dusty shelf in the hospital library?

Despite these notable advances, we now know that the biggest threat to the integrity of the biomedical record, especially of clinical trials, comes from all forms of reporting bias. By that I mean the selective release of information into the public record, with the effect of presenting a distorted picture of an intervention's effects. Reporting bias affects all aspects of clinical trials, from their design to their reporting of benefits and harms, and consequently frames the way an intervention is seen and used in medicine. Reporting bias is subtle and difficult to detect unless you have access to the underlying data and you have patience. It also helps to be endowed with an obstinate streak.

All interventions, such as drugs, biologics, prostheses, and public health programs, induce harms. Yet we know that harms are rarely reported fairly. In some cases they are not reported at all. In the last decade, however, a series of landmark decisions has provided access to some of the data underlying the registration of pharmaceuticals, gradually allowing greater scrutiny of the corresponding publications of biomedical trials. The data are mainly in the form of clinical study reports and other information contained in the applications to register pharmaceuticals and in the responses of regulators. Clinical study reports are documents of standardized format which are often thousands of pages long and contain a wealth of detail that is rarely, if ever, visible. For the curious, an exhaustive glossary with visual examples is available here.

The other thing about biomedical journal articles is that they can compress into a single page up to 8,000 pages' worth of information contained in clinical study reports. Nine published trials of the antidepressant paroxetine compressed, on average, 1,021 clinical study report pages into a single article page. How can this kind of compression take place without a rigid selection of what to publish? In other words, who decides, and on what basis, which of the information contained in 1,021 pages to release into one page?

Although clinical study reports have been progressively available for a decade from forward-thinking regulators such as the European Medicines Agency and Japan's PMDA, researchers have been slow to grasp their potential applications.

I propose a new application of clinical study reports to clinical trials: using them to enhance the accuracy of peer review, a badly needed improvement to our scholarly quality assurance process. Journals would require sponsors to agree, at submission, to provide the full clinical study report on request. After the initial screening phase, the editor would decide whether the submission was suitable for the journal and establish its interest in the offer. At this stage, and prior to sending the submission for peer review, the clinical study report would be made available and the designated peer reviewers would be given access to it. A prior distance-learning exercise would have identified the referees willing and able to use the procedure.

The busy referee does not need to read the complete clinical study report, but should focus on its synopsis, a highly structured summary (median five pages, range 1-15) packed with details of the trial. The remainder of the report can be used as a source of detail and clarification of the synopsis content, if needed. The protocols, too, are to be found in the reports, along with any amendments and their dates. Next, the referee reads the submission and compares it with its detailed basis. The final referee report then draws on two or more sources for the assessment of the trial. The risks of missing significant weaknesses and distortions are minimized.

The method works, as attested by the numerous reviews comparing publications with their clinical study reports. Its feasibility within the editorial process should be tested in a pilot study.

Familiarity with clinical study reports is an acquired taste and editors reading this piece may ask themselves why they should follow such an apparently new route. The answers are simple, depending on viewpoint: defending the journal brand or safeguarding science, life and limb. Or both.


3 thoughts on “It’s time to get serious about decreasing bias in the clinical literature. Here’s one way to do that.”

  1. Unfortunately, Tom Jefferson’s proposal doesn’t get to the core issue, namely, lack of access to the anonymized primary data points and the verifiability of the research procedures by independent researchers. P-hacking, HARKing, and changing of end points would still be possible and go undetected.
    There is also the problem of the use of antiquated non-Bayesian statistics in the analysis of results. Peer reviewers are constrained to accept the results of statistical models that reflect common usage that may be out of date. Bayesian statistics for the results of an RCT would be calculated on the basis of their difference from the best estimate of known prior probabilities of effectiveness and safety. The best estimate can be discerned from verifiable prior research, NOT the opinion of potentially biased “experts” or “authority.”
    Nass and Noble address these issues in the context of the recent Cochrane melt-down. See: http://IJME.in/articles/whither-Cochrane/?galley=html

    1. Thomas Bayes (1702–1761) lived well before Karl Pearson, Ronald Fisher and other luminaries of modern statistics, so how are Bayesian methods not antiquated? 1 + 1 = 2 is really antiquated. Suggesting that an analytic technique is somehow inferior due to the time period of its development is not a valid line of reasoning.

      Randomized clinical trials are typically conducted in situations where two treatments are in equipoise. If one treatment was known to be superior to another, it would not be ethical to give the inferior treatment. In situations of equipoise, a Bayesian prior represents some opinion of potentially biased “experts” or “authority.”

      1 + 1 = 2 will remain fresh and useful for many years to come, as will current statistical analysis techniques of clinical trial data using methods that do not prejudge any of the items under assessment with endlessly debatable Bayesian priors (subjective? objective? other??).

  2. “Peer review in biomedicine in its current form and practice is the direct descendant of the bedside consultation.”
    Seriously?
    That is not the origin of peer review.
    If you start with a flawed premise, especially a strained analogy, then subsequent arguments are suspect.
    This being said, getting rid of reporting bias is a good thing – providing more information to reviewers might help.
