To guard against fraud, medical research should be a profession: A book excerpt

Warwick Anderson

We are pleased to present an excerpt from Trust in Medical Research, a freely available new book by Warwick P. Anderson, emeritus professor of physiology and biomedical sciences at Monash University in Victoria, Australia. 

It has always been difficult for me to admit that we have a genuine and substantial problem of fraud and rubbish science in medical research. I suspect this is true for most scientists. We want to think of science as being free from half-truths and fake news. We hope that the high moral purpose of medical research will guard against wrongdoing, that it will weigh on our minds so heavily that we all take care to work and publish honestly and competently.

We know that scientists sometimes make unintentional mistakes due to ignorance, but we also know in our hearts that some people are so ambitious that they push the envelope, stretch the truth and take shortcuts. We know, too, that a few others go further and get carried away by the prospects of scientific and financial rewards and so cheat, commit fraud and lie in publications. This is what some humans do in all walks of life.

We know all this, but it is fair to say that we generally do not want to face up to it. Jennifer Byrne at the University of Sydney put it well when she wrote that we tend to overlook the research fraud issue “because the scientific community has been unwilling to have frank and open discussions about it”:

Fraud departs from community norms, so scientists do not want to think about it, let alone talk about it … This becomes a vicious cycle: because fraud is not discussed, people don’t learn about it, so they don’t consider it, or they think it’s so rare that it’s unlikely to affect them, and so papers are less likely to come under scrutiny. Thinking and talking about systematic fraud is essential to solving this problem.

When challenged about the incidence of fraud in medical research, many scientists tend to take defensive positions. We might contend that the usual self-correcting methods of science, replication of experiments and peer review will solve the problem. But we know that peer review is not honed to detect fraud reliably (though it can do), that replication of experiments is something that most of us are not very interested in doing (and it rarely gets supported by funders anyhow) and that a negative result from replication research will struggle to get published. All medical researchers should talk more about research misconduct because it is we who have most at stake: our reputations, the reputation of medical research itself, and our time and resources when we spend months or years on a project based on what turns out to have been fraudulent research.

So, what should we do as scientists to better own the problem and guard medical research? One way, I contend, is for medical research to become a true profession.

Medical research is not a profession, even though it demands a high level of professionalism. Anyone can call themselves a medical researcher. There are no processes that affirm an individual has reached some agreed level of expertise, proficiency and reliability. There is no specific training and no accreditation program. There is no requirement to learn a designated set of skills and knowledge, such as the proper use of statistics, what good research practices are or what the ethical obligations are. There is no equivalent of the Australian Health Practitioner Regulation Agency and the national- and state-based medical boards. There is no need for registration and demonstration of training. There is not even a set of stated principles that we expect every medical researcher to share.

Other groups of people who train to be expert, whose jobs involve individual responsibilities and can be hazardous to others, have professional bodies that manage accreditation or are accredited by government. Such fields have training and competency entry requirements, and they usually require ongoing training and education. They have formal ways to withdraw recognition and accreditation when their members act in ways that harm their customers or patients and tarnish the reputation of the profession itself. Why should medical researchers be different from doctors, dentists, physiotherapists and vets? Why don’t we have a professional certification system in medical research that requires achievement and maintenance of competency and ethical standards and could remove accreditation when misconduct is proven?

I have written this chapter with some trepidation. Some of my colleagues fear that any internal criticism of the methods and procedures of medical research will be seized upon by critics, especially politicians, to attack scientists and medical research itself and potentially even take control of funding.

I understand this concern, but the bigger risk in the medium to long term is to not address the problems ourselves. After all, if scientific training teaches us anything, it is how to critically examine everything – methodology, results, applications for funding, proposed publications, PhD theses, seminars and conference presentations – and then to find solutions.

The danger signs are already flashing. Richard Smith, a previous editor of The BMJ, wrote recently about clinical trials: “We have now reached a point where those doing systematic reviews must start by assuming that a study is fraudulent until they can have some evidence to the contrary.” When someone as experienced as Smith makes such a statement, it is past time for us to put our house in order.

Warwick Anderson is emeritus professor at Monash University. He was chief executive officer of the National Health and Medical Research Council of Australia (2006-2015) and secretary general of the International Human Frontier Science Program Organization (2015-2021). He chairs the Global Biodata Coalition.


8 thoughts on “To guard against fraud, medical research should be a profession: A book excerpt”

  1. Whether or not medical research becomes a profession separate from simply being a medical doctor, my observation based on many articles is that researchers take many shortcuts around being theoretically correct and careful in their use of statistics, especially regression analysis. They omit key independent and theoretically important variables from their statistical analyses, sometimes just because they forgot to get the data for those variables, and sometimes because they did not think they would be important. But without running tests, one cannot know. Secondly, almost all researchers using regression analysis use ordinary least squares based methodologies, which cannot be scientifically justified. They should be using least absolute deviations based methodologies, which are now available as part of software packages. Squaring differences cannot be justified even though almost everyone does it. In addition, statistical significance at the 95% level should not be considered the correct measure for cause and effect relationships, which might exist at much lower levels of statistical significance. The flip side of this issue is that statistically significant results at the 95% level or higher may just be coincidence for some variables in an equation. Finally, almost all medical researchers using statistical methodologies neglect to analyze the extent to which multicollinearity exists in their data, and thus they do not attempt to correct for it, even though that might be very difficult to do. Because of these issues and problems with most medical research, results are not nearly as certain and clear cut as the authors report.

    1. Interesting thoughts. As a biostats person, I agree with some. Multicollinearity is often a problem. However, it acts to lower the evidence for a relationship. Therefore, to “correct for it” requires that fewer variables be included, or that variables be combined in some sensible manner. The existence of multicollinearity thus acts to conceal relationships. It can be determined by running multiple combinations of variables (a small illustrative sketch of these checks appears below).
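
For readers who want to see what these checks look like in code, here is a minimal sketch, assuming Python with numpy and statsmodels; the simulated data and variable names are invented for illustration and are not taken from the excerpt or the comments. It fits the same model by ordinary least squares and by least absolute deviations (median regression), and prints variance inflation factors as one common multicollinearity diagnostic.

```python
# Illustrative sketch only: made-up data, hypothetical variable names.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)            # deliberately collinear with x1
y = 2.0 + 1.5 * x1 + rng.standard_t(df=2, size=n)   # heavy-tailed noise

X = sm.add_constant(np.column_stack([x1, x2]))

ols = sm.OLS(y, X).fit()            # minimizes squared residuals
lad = sm.QuantReg(y, X).fit(q=0.5)  # median regression = least absolute deviations

print("OLS coefficients:", np.round(ols.params, 2))
print("LAD coefficients:", np.round(lad.params, 2))

# Variance inflation factors flag collinearity (values well above ~5-10 are a warning).
for name, i in [("x1", 1), ("x2", 2)]:
    print(name, "VIF =", round(variance_inflation_factor(X, i), 1))
```

With heavy-tailed noise the two fits can differ noticeably, and the inflated VIFs for x1 and x2 flag the deliberate collinearity; refitting with different combinations of variables, as the reply suggests, is a complementary check.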

  2. I’d say the issue is the poor statistical knowledge and procedures that people use ritualistically, which exist in so many fields. Medicine just so happens to affect people’s lives more directly. Frequentist methods have had these shortfalls, and statisticians have known about them since the methods were invented.

    Statistics and science have needed to move away from these methods for quite a while and instead examine things as probabilities, making probabilistic statements based on the data collected, the model used and the other assumptions we make, then evaluating that model in the real world to see whether it is useful (a small sketch of such a probabilistic statement follows below). More inductive reasoning, and less trying to pigeonhole inductive questions into deductive ones with an alpha-level cutoff. People are too in love with their models and not enough with the actual outcomes of those models.
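
As a concrete illustration of the kind of probabilistic statement described above, here is a minimal sketch, assuming Python with numpy and scipy; the two-arm trial counts are made up, and the flat Beta(1, 1) priors are an assumption of the sketch rather than anything proposed in the excerpt or the comment. Instead of asking whether a p-value crosses an alpha cutoff, it reports the posterior probability that the treatment response rate exceeds the control rate, given the data and the model.

```python
# Illustrative sketch only: hypothetical trial counts, flat priors assumed.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Hypothetical two-arm trial: responders / total in each arm.
treat_success, treat_n = 48, 100
ctrl_success, ctrl_n = 37, 100

# Posterior response rates under flat Beta(1, 1) priors (beta-binomial conjugacy).
post_treat = beta(1 + treat_success, 1 + treat_n - treat_success)
post_ctrl = beta(1 + ctrl_success, 1 + ctrl_n - ctrl_success)

# Monte Carlo estimate of P(treatment rate > control rate | data, model).
draws = 100_000
p_better = np.mean(post_treat.rvs(draws, random_state=rng) >
                   post_ctrl.rvs(draws, random_state=rng))
print(f"P(treatment response rate > control | data, model) = {p_better:.3f}")
```

Whether that probability should cross 0.9, 0.95 or some other threshold before anyone acts on it then becomes an explicit decision about the model and its real-world usefulness, rather than a ritual.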

  3. We already have professional researchers – statisticians. Unfortunately, too many studies don’t use these trained researchers nor get reviewed by them.
    As a biostatistician working in clinical trials, I would suggest that what would be most helpful would be to have statistician involvement in more studies, especially at the design and analysis stages, and for journals to ensure that statisticians are involved in the peer review of all papers they publish. Taking a couple of stats/methodology courses does not provide the depth or breadth of experience and knowledge required to create well-designed studies and choose the proper analytic methods, nor to evaluate the work of other researchers.

  4. I appreciate the idea that professionalizing medical research more formally could help address some of the problems we see with medical research. In addition, should we consider incentives that reward professionals for reliable results that hold up over time and accrue additional evidence of the veracity of their claims, rather than for exciting new discoveries that may turn out to be less promising on further inspection? Or worse, harmful?

  5. Medical research is a hobby for bored physicians who want to be scientists. Too often I meet such individuals who have poor science training but are privileged because they are physicians and are therefore assumed to be competent in medical research. We already have trained researchers. How would you like it if I, a biologist, started seeing patients and prescribing medicine?

    1. I largely agree with this. I worked for several MDs, and all of them were too busy with clinical responsibilities to do much of anything meaningful. They depend on PhD postdocs to do all the work, some of the grant writing and much of the thinking. I think we need to scale back on the NIH funding to medical school departments and give it to natural science departments instead. Also, I think most of the fraud in science occurs in medical school departments, where angry, bitter PhD postdocs desperate for a job fake data to get a better life away from their lazy MD tyrant boss.

  6. Is it possible to use a mixture of experts trained on types of fraud, such as image manipulation, to help reduce the load on reviewers?
