Retraction Watch

Tracking retractions as a window into the scientific process

Do interventions to reduce misconduct actually work? Maybe not, says new report

with 14 comments

Elizabeth Wager and Ana Marusic

Can we teach good behavior in the lab? That’s the premise behind a number of interventions aimed at improving research integrity, which universities around the world and even private companies have invested in. Trouble is, a new review from the Cochrane Library finds little good evidence that these interventions work. We spoke with authors Elizabeth Wager (on the board of directors of our parent organization) and Ana Marusic, at the University of Split School of Medicine in Croatia.

Retraction Watch: Let’s start by talking about what you found – looking at 31 studies (including 15 randomized controlled trials) that included more than 9500 participants, you saw there was some evidence that training in research integrity had some effects on participants’ attitudes, but “minimal (or short-lived) effects on their knowledge.” Can you talk more about that, including why the interventions had little impact on knowledge?

Elizabeth Wager and Ana Marusic: Because studies use different measures of success, we grouped them into those looking at effects on behavior, attitude or knowledge. Obviously, it’s easier to measure researchers’ attitudes towards misconduct than to measure actual misconduct. You would expect training to increase knowledge, but many studies showed little effect. This might be because most researchers already know about types of misconduct and it isn’t lack of knowledge that causes them to go astray. A few studies that measured effects immediately after training, and then again later, found these effects had worn off after a few months.

RW: You also said that evidence was very “low quality” – what does that mean? And does it mean we should not place much emphasis on the findings of this review, since it’s based on somewhat unreliable evidence?

EW and AM: Many of the published studies weren’t well designed to reduce bias and we excluded many that didn’t have proper control groups. Of those we did include, many didn’t randomize participants or blind raters to the intervention groups. We also found many studies used unvalidated scoring systems, especially those trying to measure attitudes or knowledge.

We were also struck by how little detail was given in the descriptions of the interventions. For example, some of the studies on the effects of running courses on research integrity simply stated the number of sessions and how long they lasted and gave no information about the curriculum, how it was taught, or how teaching quality was checked. So even if the studies had shown a clear effect, it would have been impossible to repeat the experiment or to adopt the teaching methods in other centers.

RW: The one reasonably bright spot appeared to be with plagiarism, where interventions had some effect. Can you talk more about what appeared to work?

EW and AM: Yes, training on plagiarism did seem to be more effective than general training on research integrity although not all the studies showed positive effects. We found several studies that used text-matching software (such as Turnitin) on students’ work, or included practical exercises in paraphrasing text, and some of these showed effects both in students’ understanding of plagiarism and in reducing plagiarism in coursework. We can think of a couple of reasons why plagiarism might respond better to training than other types of misconduct. The first is that definitions of plagiarism are quite technical, so students and researchers may be unaware of them. (In our training, we often come across students who don’t realise that citing a reference doesn’t mean you can copy large chunks from it, so this clearly isn’t instinctive and needs to be learned.) The other reason may be that text-matching tools make plagiarism easy to quantify, so it’s much easier to measure a small change in such behavior than, say, a change in attitude towards guest authorship, which is more nebulous.
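The point about text-matching tools making plagiarism easy to quantify can be illustrated with a minimal sketch. This is not how Turnitin or any commercial tool actually works; it is a hypothetical word n-gram overlap score, shown only to demonstrate why copied text is straightforward to detect and measure numerically while a paraphrase is not.

```python
# Minimal, illustrative sketch of quantifying text overlap.
# NOT the algorithm of Turnitin or any real tool -- a simple
# word-trigram overlap score, for demonstration only.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 3) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    src = ngrams(source, n)
    if not sub:
        return 0.0
    return len(sub & src) / len(sub)

original = "citing a reference does not mean you can copy large chunks from it"
copied = "remember that citing a reference does not mean you can copy large chunks"
paraphrased = "a citation alone does not license verbatim reuse of long passages"

# Verbatim copying scores high; a paraphrase of the same idea scores near zero.
print(f"copied:      {overlap_score(copied, original):.2f}")
print(f"paraphrased: {overlap_score(paraphrased, original):.2f}")
```

Because the score is a simple number, even a small reduction in copying across a cohort of students is measurable — which is much harder to achieve for attitudes towards, say, guest authorship.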

RW: Were you disappointed at all by the conclusions of your review?

EW and AM: Yes, I guess we were! It would have been great to find some simple interventions and be able to say to institutions “If you just do this, you will reduce research misconduct.” However, if it really were that easy to reduce misconduct, then probably it would have happened already. Also, we realize that research integrity is a complex phenomenon, likely to be affected by subtle factors in the research environment and incentive systems. So, although we were a bit disappointed, we weren’t all that surprised by our findings.

RW: One issue you note is that the types of interventions varied widely – from online lectures to practical exercises – and the studies that tested them varied, as well, considering different outcome measures. How can we better standardize this field?

EW and AM: That’s a good question. We purposely used a very broad search and didn’t confine ourselves to one discipline (such as medicine) because we hoped that techniques developed in one field might be applicable in others. We were hoping that researchers in psychology, education, social science, or even criminology might have applied their expertise to this topic and come up with creative solutions. Given that none of the techniques seemed particularly effective, I think we need more diversity rather than standardization.

But when it comes to measuring outcomes then we do feel more standardization is needed. There are very few validated scales (the only one we saw being used a few times was the Plagiarism Knowledge Survey). As we said before, measuring effects on the actual incidence of misconduct is tough, so we need to be realistic about outcome measures, but also ensure they are meaningful. Asking students if they enjoyed a course tells you almost nothing about its real effects, but sadly that’s the most common form of feedback for a lot of training.

RW: You note that the studies were plagued by bias – even in the randomized controlled trials, which are designed to avoid such pitfalls. What kinds of biases did you identify, and how could we avoid them in future trials?

EW and AM: The problem we had with evaluating randomized controlled trials was that the articles did not report important methodological issues, such as the randomization process, whether and how allocation to trial groups was concealed from the participants, or how participants and researchers were blinded. That’s why our assessment of biases was “unclear” in nearly all cases. Maybe the trials were conducted adequately, but they were not reported clearly enough so that the methodological quality could be assessed. The articles were often not from the biomedical field and were published long ago, so it was not possible to contact the authors. How to avoid such problems in future? By adequately reporting the studies, for example following relevant reporting guidelines, such as CONSORT for randomized controlled trials.

RW: There’s a lot of money being invested in these interventions – such as courses being offered by private companies and universities. Do you fear some (or all) of that money is being wasted on interventions that don’t work?

EW and AM: Frankly, yes. Some training on Responsible Conduct of Research (or RCR) seems to have become a bit of a “tick box” exercise and, when such courses become compulsory, there’s an obvious market for providers to step in. Possibly, commercial providers have unpublished evidence that their systems are effective (which they don’t publish for commercial reasons), but we’d encourage much more transparency and discussion about this.

We’d also like to see investigation of approaches other than training. For example, one of the studies we included looked at the effects of a journal using different forms to collect authorship information. The design of the form had an impact on the truthfulness of the authorship declaration. So there may be simple, low-cost changes at all stages in the research process which might, together, have more effect on reducing misconduct than compulsory training.

RW: Besides these interventions, what else can institutions do to help reduce misconduct?

EW and AM: One reason we know so little about effects of interventions designed to reduce misconduct is that we don’t have reliable information about the frequency of misconduct. Sadly, in most countries, information about investigations at institutions is kept secret, so we have no way of telling what’s happening. We have anecdotal evidence that institutions don’t always investigate suspected misconduct properly, but, even when they do, the results are all too often buried. As well as investment into further research on factors contributing to research integrity, we’d like to see greater transparency.

Lastly, our results indicate that perhaps the research community doesn’t adequately address responsible conduct of research and research misconduct. Maybe we need more qualitative studies to understand what responsible research really is, and how it can be fostered, and what leads to research misconduct. When we understand these phenomena, maybe we will then be more successful in developing and testing interventions to promote good behavior and prevent bad behavior in science!


Written by Alison McCook

April 12th, 2016 at 2:00 pm

  • Chauncey M. DePree, Jr., DBA April 12, 2016 at 4:05 pm

    At the end of the discussion, EW and AM hit on, in my view, the inevitable problems of answering, at this time, “what is good behavior in the lab?” “…[W]e realize that research integrity is a complex phenomenon, likely to be affected by subtle factors in the research environment and incentive systems.” Doing interventions “designed to reduce misconduct” or conducting or assessing randomized controlled trials seems premature. Systematic, detailed, case studies of research integrity and dishonesty may provide the knowledge to begin making recommendations and studying their effectiveness.

  • Ken Pimple April 12, 2016 at 4:44 pm

I am so tired of hearing (or reading) that teaching research ethics/RCR is a failure because there’s still misconduct. Look at the NIH requirements and compare them to any degree program – a handful of hours on the one hand and hundreds on the other. Which has the greater influence? If a biologist, chemist, what-have-you commits misconduct – fabrication or falsification of data, or plagiarism – whose teaching failed? To me (at least), it’s clear that there are many risk factors for misconduct, including lab and university culture. I’m also confident that the problem could be reduced (along with other problems, like harassment) if all of the moving parts were at least in accord on the acceptable and unacceptable behaviors in science and academe. When deans and chairs and PIs are bullies, what can you expect? When university policies and practices create huge increases in tuition that beggar graduates, what do we say about human decency and respect?

    I could rant on, but if you have any sense you stopped reading about halfway through. If you didn’t – your problem, not mine. 🙂

    • Miguel Roig April 12, 2016 at 6:06 pm

It probably does not help that in many science programs research ethics and research integrity are often taught as separate courses or adjunct course modules. Moreover, my bet is that this type of instruction is often perceived by students (and worse, by some instructors) as pesky requirements that must be completed in order to fulfill some regulatory mandate. No, these fundamental aspects of the conduct of scientific research must be recognized and presented as critical components of the various aspects of the research process to which they are applicable.

      • ELF April 13, 2016 at 7:04 am

Recently I completed a mandatory, online, self-paced research integrity course as part of my PhD enrolment at an Australian university. It was one of the most frustrating, inaccurate, inconsistent, and poorly developed online courses I’ve ever had the misfortune of completing. The course was developed by a US-based consortium, and some information was not consistent with Australian law or National Statements on ethical conduct of research. The exam contained errors, and in many cases penalised me for providing answers that were correct in terms of Australian / university policies.

For example, one question asked: “What is generally accepted as defining a person as a ‘human participant / subject’ in research?”

        I selected these two possible answers (among others):
        – They are included in a database that contains identifiable public information, ie information collected in a setting where an individual would not expect privacy.
        – They provided blood or tissue during a clinical visit that is later used in a research study.

The automated marking system declared that these responses were incorrect, and awarded me zero marks for this question. Yet under Australia’s National Statement on Medical Research, federal and state privacy laws, health service policies, and university policies, research involving such data would absolutely be viewed as research involving human subjects. The research protocol would therefore require some sort of ethical review by a human research ethics committee before the study commenced. I believe most biomedical / health journals would take the same view, and require evidence of ethical oversight / review before the work could be published.

  • Donald S Kornfeld April 12, 2016 at 5:45 pm

Research misconduct, defined as fabrication, falsification, and plagiarism, is science-speak for lying, cheating, and stealing. The ethical standards proscribing such behavior are established long before one becomes a graduate student in science. As Wager and Marusic reported, RCR courses do not (and cannot) influence such behavior.
These young people, for the most part, are motivated by the fear that failure to publish will destroy their opportunity for a career in academia. The most effective remedy would be to provide them with true mentors, as defined by the National Academy of Sciences: Advisor, Teacher, Role Model, and Friend. Such a relationship would provide an opportunity to address these concerns constructively.

    Don Kornfeld

    • ELF April 13, 2016 at 5:37 am

You’ve highlighted (perhaps unwittingly?) one of the problems: society’s ethics standards regarding “lying, cheating, and stealing” can also be ambiguous and ill-defined. For example, debates and legal cases over online access to copyrighted content (e.g. Napster, Sci-Hub) show there is little consensus on what is stealing and what is not. I know many people who say they never lie or cheat, but will quite happily admit to giving inaccurate or incomplete information on their tax returns (and perhaps even consider it a virtue to do so).

    • Dean April 13, 2016 at 9:46 am

      Agree – lying, cheating, and stealing as acceptable behavior are ingrained earlier in life, either by the parents or by the culture. We cannot fix that by forcing people to take workshops or online tutorials. Harsh consequences may be a deterrent, but won’t work as long as students keep seeing universities give a pass to tenured professors who commit ethical atrocities.

  • Michael E. Marotta April 12, 2016 at 7:22 pm

    We know in criminology that of the three prongs of punishment as a deterrent – swift, certain, and severe – only “certain” really keeps perpetrators away. When researchers learn that plagiarism software catches copyists, they are deterred. Other protections fail largely because these perpetrators think that they are smarter than everyone else. Prisons are full of such people. It might be argued that some researchers are ignorant of the rules, but that is not at all borne out by any qualified study. An infant in a shopping cart who grabs a candy bar at the checkout is an innocent perpetrator. A student in college is not. This is especially underscored by results from Europe. Here in the USA anyone can go to college; not so in Europe: they are pre-selected. Thus, as planfully competent perpetrators, they are culpable for their transgressions.

    • Chauncey M. DePree, Jr., DBA April 22, 2016 at 11:11 am

      The three prongs of punishment–swift, certain, and severe–seem like principles a dictator would apply to gain and maintain control of unruly, factious, diverse populations. For example, in our time, Saddam Hussein internally applied the three prongs of control (punishment) successfully. In a culture like ours, they seem extreme.

  • John Hilton April 14, 2016 at 9:11 pm

    The Cochrane Review is here:
    The link in the introduction goes to the protocol for the review, not the review itself.

    • Ivan Oransky April 14, 2016 at 9:45 pm

      Fixed — thanks.

  • Nancy Owens April 20, 2016 at 7:33 am

    Please join us for a Twitter chat to discuss this review with Liz Wager – Wednesday, 20 April, 2pm BST – by following the #cochraneauthor hashtag

  • The bottom line is that integrity cannot really be taught; it must come from within. With the pressure to obtain authored publications and grants to survive in an academic career, the culture incentivizes individuals to participate in misconduct, especially if their research is mediocre. Unless the community polices itself and unless there are real consequences for those participating in misconduct, the culture will not change. The ORI is weak, and its mandates under 42 CFR do not allow for any real consequences for those committing these crimes. Teaching integrity is insufficient to solve this exploding problem.

    • Chauncey M. DePree, Jr., DBA April 22, 2016 at 10:00 am

      Where does “within” come from?
