Who reports more misconduct: Scientists in industry or academia?

Simon Godecharle

Who will admit to keeping poor records, gifting authorship, or even more obvious forms of misconduct such as plagiarism? Simon Godecharle at the University of Leuven and his colleagues asked 2,000 scientists from academia and industry in Belgium, and reported their findings in a recent paper for Science and Engineering Ethics. We spoke to Godecharle about the fact that most respondents admitted to engaging in at least one of the 22 items designated as misconduct — and why he thinks people in academia were more likely to ‘fess up than industry scientists.

Retraction Watch: You didn’t limit misconduct to fraud and plagiarism, and instead included problematic behaviors such as cutting corners to save time, gift authorship, and poor record-keeping. Still, you showed that 71% of respondents from academia and 61% of respondents from industry admitted to engaging in at least one of the 22 forms of misconduct. Did those numbers surprise you?

Simon Godecharle: We adapted an existing US survey based on the findings of our previous research, namely a review of European guidance documents and our own qualitative work; from this we generated a list of 22 items. We were interested in a broader scope, because previous research indicated that research misconduct and questionable research practices in daily practice go far beyond the ‘traditional’ fabrication, falsification, and plagiarism.

The level of reported research misconduct we found is similar to previous studies. We were surprised, though, to find that research misconduct also occurs to a substantial degree in industry, despite the fact that biomedical research within industry must comply with strict rules and regulations. Plagiarism, one of the three ‘capital sins’, was even admitted to and observed more often in industry than in universities.

RW: You found that people working in industry were less likely to report misconduct than people working in academia. Do you think that the rate of misconduct is lower in industry, and if so, what factors might be at play? Or, are people in industry simply less forthcoming about misconduct?

SG: Our research indeed does not explain why research misconduct was generally reported less frequently within industry. As stated in our paper, we cannot exclude a selection or a reporting bias. It is possible that research misconduct is actually less common in industry. However, it is worth underlining that for some items the differences in reporting are small.

Further research might reveal whether a strict setting, with rigorous rules and frequent audits, can indeed diminish or even prevent research misconduct. As an ethicist, I’m inclined to believe that the research culture and the role of the supervisors are vital. If, for example, you have a policy which requires everyone to follow research integrity training, but the supervisor of the lab encourages questionable research practices or even research misconduct, the training will have little effect. It also raises the question of whether the respondents themselves consider the 22 items to be research misconduct. Maybe they consider them questionable research practices instead, or even neutral behavior.

RW: Some of the questions are somewhat subjective – such as “Inappropriate or careless review of papers or proposals” or “Inadequate record keeping or data management related to research projects.” Some researchers have higher standards regarding record-keeping than others; the same lab notebook might be rated as sloppy by one but adequate by another. Could that influence your findings?

SG: A self-reporting survey is vulnerable to subjective interpretation. We were interested in the perspectives of the biomedical researchers and research managers themselves, which necessarily includes their subjective interpretation. Concerning the admitted behavior, one might even conclude that this approach provides a rather conservative estimate, since people are not strongly inclined to consider their own behavior inadequate or unacceptable. Nonetheless, several actions were admitted to rather frequently, such as ‘gift authorship’ (42% within universities, 25% within industry; p = 0.009).
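For readers curious how a between-group difference like the one above is tested, here is a minimal sketch of a chi-square test on a 2×2 contingency table. The respondent counts below are invented for illustration; the paper reports only the percentages and p = 0.009, so this output will not reproduce the published value exactly.

```python
# Hypothetical two-group comparison of the kind reported in the paper.
# Only the percentages (42% vs. 25%) and p = 0.009 appear in the study;
# the counts below are invented to make the example runnable.
from scipy.stats import chi2_contingency

# Rows: universities, industry; columns: admitted, did not admit.
table = [
    [84, 116],  # 84 of 200 university respondents = 42% (hypothetical)
    [25, 75],   # 25 of 100 industry respondents = 25% (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```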

RW: What do your findings say about the prevalence and impact of training in responsible conduct of research?

SG: We need to be prudent about making claims concerning the effect of training based upon our research. We did find a relation between research integrity training and the reporting of research misconduct. When respondents indicated they had received informal research integrity training (for example ‘discussion with instructors, mentors or colleagues’), they reported more research misconduct. Respondents who had received formal research integrity training (‘section on research integrity within other courses’) reported less research misconduct. However, our population was rather small, and since we observed only associations, we cannot conclude causal links. In addition, other studies indicate that it remains uncertain whether research integrity training might effectively reduce research misconduct.

RW: The questions rely on responses from scientists based in Belgium who choose to answer questions, and admit to misconduct — do you have any concerns that they aren’t representative of the scientific community as a whole? Why or why not?

SG: We see no reasons why the situation in Belgium would differ from other industrialized Western countries.


7 thoughts on “Who reports more misconduct: Scientists in industry or academia?”

  1. Simon Godecharle makes an important contribution to our understanding of the dark side of the research enterprise. Not knowing the sampling and other details of his research methodology and constraints, I’m disappointed that the 22 items relating to research misconduct did not receive a “relevance,” “importance,” or “severity” rating as part of the same survey.

    We must now await a separate survey of the same or a different population of researchers to rate the 22 items in this regard – unless, of course, this has already been done as part of the earlier qualitative research he mentions.

    That said, Godecharle’s research adds to our knowledge and is consistent with earlier research by Martinson, Anderson and de Vries (2005, 2006): Martinson BC, Anderson MS, de Vries R. “Scientists Behaving Badly.” Nature 2005, 435: 737–738; and de Vries R, Anderson MS, Martinson BC. “Normal Misbehavior: Scientists Talk about the Ethics of Research.” Journal of Empirical Research on Human Research Ethics 2006, 1(1): 43–50.

  2. Studies in industry usually have to meet Good Laboratory Practice (GLP) requirements (and, for studies with humans, Good Clinical Practice). For both, data integrity is essential. Under GLP, you have to validate that all of your instruments are working, ensure all have been calibrated, keep complete records, etc. If you want to change a data point, you need to retain the old data and note the reason for the change (see the sketch after this comment thread). While not perfect, GLP prevents much of the fraud that is seen in non-GLP labs.

    Another part of GLP is organizational charts. This seemingly mundane requirement matters because the QA person shouldn’t report to the person(s) conducting the studies; QA needs to be independent.

    In addition, creating fraudulent data can really hurt a company, which means there is less incentive to do it. Whereas a fraudulent researcher at a university mostly hurts themselves and their coauthors*, fraudulent data at a company could lead it to waste money on worthless targets or compounds. The company itself strongly wants accurate data (and publications are usually of minimal importance).

    (*E.g., when a researcher at an Ivy League school commits fraud, there generally seems to be little impact on the school itself.)

    On top of this, in industry, if you create fraudulent data and are found out, you’d likely lose your job immediately. A company won’t say to an employee, “You can keep your job, but we won’t allow you to do research for 3 years.”

    Hence, while fraudulent activity can happen in industry labs, there are many gates that try to stop it. As a result, it’s much, much harder to commit fraud there than in academia.

    1. Poor equipment maintenance and calibration was one of the many reasons I left graduate school and returned to industry. How could I possibly trust the quality of the research if I couldn’t even verify the quality of the measurement? All too common in academia.
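The GLP correction rule described in the comment above (never overwrite a value; retain the old data, the new value, and the reason for the change) maps naturally onto an append-only change log. Here is a minimal sketch in Python; the class and field names are hypothetical and not taken from any actual GLP system.

```python
# Minimal sketch of a GLP-style audit trail: corrections never overwrite
# the original value; each change is appended to an immutable history.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedValue:
    value: float
    history: list = field(default_factory=list)  # append-only change log

    def correct(self, new_value: float, reason: str, author: str) -> None:
        """Record a correction while preserving the superseded value."""
        self.history.append({
            "old": self.value,
            "new": new_value,
            "reason": reason,
            "author": author,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        self.value = new_value

# Usage: the original reading remains recoverable from the log.
reading = AuditedValue(12.7)
reading.correct(12.9, reason="transcription error vs. instrument printout",
                author="QA reviewer")
print(reading.value)    # 12.9
print(reading.history)  # shows the superseded 12.7 and the reason
```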

  3. In response to the question, “What do your findings say about the prevalence and impact of training in responsible conduct of research?”, Godecharle is quoted:
    …It remains uncertain whether research integrity training might effectively reduce research misconduct.

    While he reports that his study found a relationship between research integrity training and the reporting of research misconduct, multiple studies have established that such training has no impact on the prevalence of research misconduct per se.

    DS Kornfeld, M.D.

  4. As an academic philosopher of science, I much appreciate receiving Retraction Watch. But given this article’s academia-vs.-business theme, I am reminded of my longtime concern over the seeming absence here of any discussion of ideological bias in the human sciences, which I strongly suspect is much stronger in academia–and which, for the same ideological reasons, rarely gets retracted. The problem is provably a very serious one for society, let alone for science. I would be glad to illustrate, should someone here be curious about instances.

  5. I admit that I am not up-to-date on the Responsible Conduct of Research (RCR) training literature (though I should be!), so please correct me if I am wrong, but it seems to me that whether RCR training has any effect on misconduct should depend in large measure on how misconduct is being defined. If we simply define misconduct as falsification, fabrication, and plagiarism (especially plagiarism of data and ideas), then, sure, RCR training should have little to no effect on these behaviors, for in most cases these malpractices are, or should be, universally known to be wrong. But if we include a wide range of questionable research practices (QRPs) as part of the definition of misconduct, especially the types of unintentional practices that are largely known to stem from bad habits that can be modified (e.g., maintaining sloppy research records), or are due to lack of time and/or ignorance (e.g., inadequate supervision of research staff, inappropriate authorship), or to a combination of related factors (e.g., using inappropriate statistical analyses, optional stopping, even p-hacking [I can recall the latter two being semi-standard practice: torture data until they confess!]), shouldn’t we expect RCR training to curb some of these practices? And if, as some suspect, QRPs have a much greater negative impact on the integrity of the scientific record, isn’t RCR training that addresses these and other QRPs worth pursuing even if the end results are, at best, modest?

  6. Having reviewed what my fellow commenters have had to say about the effects of training on subsequent research behaviors, I have a suggestion. Perhaps we should be investigating the relationship between researcher behavior and cognitive load on the frontal cortex, an increase in which is suggested to make subjects “less prosocial–less charitable or helpful, more likely to lie.” Robert Sapolsky (2017). Behave: The Biology of Humans at Our Best and Worst. New York: Penguin Press. As evidence he cites, among others, N. Mead et al., “Too Tired to Tell the Truth: Self-Control Resource Depletion and Dishonesty,” JESP 45 (2009): 594; and M. Hagger et al., “Ego Depletion and the Strength Model of Self-Control: A Meta-analysis,” Psych Bull 136 (2010): 495.

    What is my thinking here? The behavior of researchers largely involves the frontal cortex; hence, is subject to overload.

    In some of my own writings I’ve moralistically characterized the misbehavior of researchers as that of “knaves” and not “fools” because they are intelligent and mostly well-educated; hence, know with malice aforethought how to collect and manipulate data using statistics to produce biased results. Noble, JH Jr (2007). Detecting bias in biomedical research: looking at study design and published findings is not enough. Monash Bioethics Review 26(1-2): 24-45.

    Moralistic thinking may erroneously direct reform efforts to increasing the dosage of ethical training to no avail. Maybe instead efforts should be directed to understanding the neurobiology of researcher behavior for possibly more effective ways to proceed.
