How can institutions prevent scientific misconduct?

There has been plenty of interest in scientific fraud and misconduct lately — and not just on Retraction Watch — from major news outlets and government agencies, among other parties. The rate of retractions is increasing, and some fraudsters are even setting new records. That has focused attention on how institutions can prevent misconduct — not something anyone thinks is easy to do.

To try to figure it out, Columbia University’s Donald Kornfeld decided to review 146 U.S. Office of Research Integrity (ORI) cases from 1992 to 2003, “based on 50 years of clinical experience in psychiatry and 19 years as the chairman of two institutional review boards.” (Of note, these only represent cases in which ORI concluded there was misconduct, as the agency doesn’t report on negative cases.) Here’s what he found, reported last month in Academic Medicine:

Approximately one-third of the respondents (the accused) were support staff, one-third were postdoctoral fellows and graduate students, and one-third were faculty. Accusations of fabrication represented 45% of the offenses, falsification 66%, and plagiarism 12%. The first two offenses frequently occurred together. Approximately three-quarters of the respondents admitted their guilt or did not provide a defense. None claimed that the offense of which they were accused should not be considered research misconduct. They frequently attributed their behavior to extenuating circumstances.

Those extenuating circumstances? An example:

A technician admitted that the times of day he recorded for blood samples were not the actual times that the samples were collected. He said that he could not follow protocol schedules and also provide as many samples as were required. The ORI investigating committee concluded that he had been assigned responsibility for more protocols than he could reasonably have been expected to perform. The technician also stated that he was not made aware of the significance of the timing of the blood sampling to the research objectives.

Two more examples:

One respondent acknowledged that he had falsified data “to make it fit the hypothesis.” He had recently been notified that he was to be terminated and believed that he needed additional publishable research to get another appointment.

Another respondent acknowledged that she had fabricated data in an article which had been accepted for publication. She stated that she had been under pressure from a superior to generate data and felt that her action was justifiable because she had observed a senior scientist in her laboratory “clean up” data to make them more acceptable for publication.

Kornfeld’s analysis — which he acknowledges isn’t surprising — doesn’t exclusively blame “bad apples,” nor does it only blame “a system.” Instead, it’s a combination:

These acts of research misconduct seemed to be the result of the interaction of psychological traits and/or states and the circumstances in which these individuals found themselves.

That means federally mandated Responsible Conduct of Research (RCR) training may not be doing very much, Kornfeld concludes, but such efforts still have a place:

RCR instruction cannot be expected to establish basic ethical standards in a classroom of young adult graduate students. However, variations on such a course might be effective for the nonprofessional staff, for whom such training is not now required. Members of this group might be less likely to fabricate or falsify data if they have a better understanding of the goals of the research in which they are involved. They should know how their findings could contribute to advances in science and/or improved medical care and the serious consequences of publishing fraudulent data.

(We should note that a recent paper in the Journal B.U.O.N., the official journal of the Balkan Union of Oncology, found RCR effective.)

Kornfeld continues:

However, establishing remedies for the psychological characteristics and the life circumstances of potential respondents poses a much more difficult problem. Grandiosity, perfectionism, and sociopathy cannot be eradicated from the scientific community, or any other, and little can be done to reduce the reality of the need to publish or perish.

So he makes two recommendations:

  1. Improvement in the quality of mentoring in training programs, and
  2. A policy that acknowledges the important contributions of whistleblowers and establishes truly effective means of protecting them from retaliation.

We can’t argue with those recommendations, and we are particularly happy to see the second, one that the FDA, which has apparently been spying on its own whistleblowers, might want to consider.

We also wanted to know what Ferric Fang, the editor of Infection and Immunity who has been very outspoken about misconduct in science, thought of the paper:

This is an interesting contribution to the literature on research misconduct, although attempts to obtain a psychological profile of scientists who have committed fraud are not new (ref. 27, for example), nor are the author’s findings, as he acknowledges, particularly surprising.  His recommendations to improve mentoring and protect whistleblowers are certainly reasonable and might help to deter or identify some instances of misconduct, but these are also not new.

For Fang, preventing misconduct will require better funding:

Like many others, the author simply accepts the stresses of the current research environment as a given: “little can be done to reduce the reality of the need to publish or perish.”  Here, to a certain extent, I disagree.  As the author himself acknowledges, science today is inadequately supported, resulting in a “heightened competition for. . . limited dollars.”  This has not always been the case, and I don’t think this situation should be accepted as inevitable in the future.  Adequate resources to support the scientific enterprise would not only reduce incentives for misconduct but improve the lives of all scientists and allow them to spend more of their time searching for answers to research questions instead of funds.  This is not going to be easy, but probably more realistic than trying to eradicate “grandiosity, perfectionism and sociopathy. . . from the scientific community!”

41 thoughts on “How can institutions prevent scientific misconduct?”

  1. Fang has the right of it. Contingencies *matter*. Scientists know they aren’t supposed to cheat, making “more training” a naive and useless gesture.

    1. Training has an advantage in that it creates an implicit contract that the institution will take cases of scientific misconduct seriously. It doesn’t mean that it will honour that contract.

      It should probably be seen as training the institution rather than training the scientists.

      1. Indeed. When reporting that a violation has occurred, the whistleblower can point to the institution’s own training program and emphasize that administrators CLAIM this sort of behavior is not acceptable.

    1. I don’t think fraud can ever be justified, but the expectations put on the shoulders of young and not-so-young scientists are very unreasonable: success rates for funding applications hover at or below 10%, and there is constant pressure to publish in the very best journals.

      Perfectly valid papers or grant applications can be (and frequently are) rejected on a whim by reviewers, too often for the wrong reasons; most of us have had to face unfair or idiotic comments at one point or another, frequently without the chance to argue a point or to appeal. These rejections have unfair and disastrous consequences for the careers of young scientists.

      Taking the above into account, is it really so surprising and unexpected that so many people choose to cheat (I wonder how many more cases remain undetected) in order to stay in a career in which they have already invested many years of study and hard work (at low pay as well)?

      The academic system has unreasonable expectations, and THESE are the main factors breeding unreasonable behaviour. Until the working conditions of scientists are improved at least to the level of comparable professions (such as clinicians, for example), it is unlikely that academic and regulatory measures will be significantly effective in preventing fraud.

  2. Revisiting the aphorism “Society prepares the crime, the criminal commits it” (Henry Thomas Buckle) may explain some of the causes of cheating in science.

  3. As part of the paper discussions we do in an advanced undergrad class, we include one that has been retracted due to photoshopping of gels. It gets a good discussion going!

  4. Science is some sort of genetic algorithm to accumulate knowledge. It is implemented by the scientific community as a whole. The scientific community as a whole should be supported by society, not only the “winners” who happen to luck out on certain findings. Dead ends are part of how the algorithm works, and they are the most common outcome. It has to be that way because empirical science is not a deductive process, it relies on exploring unknown territory where the known rules do not apply.

  5. I think the official route of reporting research misconduct to university administrators is often hopeless and painful for whistleblowers. This is certainly the case in Canada.
    Expose the cheats and their institutions. Give them a chance to publicly defend themselves.

  6. Perhaps it’s just been a bad week, but this is literally the fourth time in as many days that I’ve read someone making the same frivolous argument: “give me more money/power/perks and I’d have less incentive to cheat.” It doesn’t sound any more attractive coming from scientists than it does from DC politicians, students, or mortgage bankers (hard to believe, but true; the last from a history of the WaMu disaster). I’ve never been a fan of the “culture of entitlement” doom-mongers, but it’s beginning to look like a more realistic description all the time.

    Go. The rant is ended. I apologize for it, but please think about how the various comments to this article might sound to an outsider.

    1. As a scientist who gladly would like more funding, I agree that the proposed solution by Fang (give more money!) is a non-solution. It may even make it worse, as we get even more young scientists trying to fight their way into the system.

      Nor do I agree with Kornfeld’s notion that little can be done about the publish-or-perish principle. In fact, it is exactly that principle that causes more problems than just scientific misconduct. For example, I have found myself contemplating at times whether I could split a publication in two, so I get two publications rather than one. We really need to rethink this!

      Perhaps we need fewer scientists?

    2. The epidemic of retractions is coincident with historically low grant application success rates and bleak job prospects for young scientists. It does not seem frivolous to question whether these events might be related. This is not to say that measures to discourage, detect and punish misconduct are unimportant, nor is it an attempt to rationalize misconduct. However, I feel it would be a mistake to ignore the issue of incentives. I don’t see a fundamental difference between increasing resources for science and reducing the number of scientists competing for resources, other than that the latter means fewer opportunities for scientists and less science done with fewer resources.

      1. Is there really a connection between success rates and retraction rates? That is, is there really a significant increase in e.g. the US?

      2. Urgh, let me rephrase that: is there really a decrease in the success rate versus an increase in the retraction rate for e.g. the US?

      3. Yes. Over the past decade, the NIH success rate for research project grants fell by 44% while the number of retracted papers from U.S. laboratories rose by more than 500%. I realize that correlation does not equal causation, but I do not believe this is entirely coincidental.

      4. Without agreeing or disagreeing about the correlation/causation, a useful question to ask is about a possible confounding factor: online access. Some kinds of misconduct (plagiarism, duplication) are far easier to find now, simply because of the huge expansion of online access.

        1) Much scientific research is supported by public funds.

        2) I’d suggest that in any given area, the value as a function of $ spent does not look linear or exponential, but roughly logistic (an S-curve; a rough functional form is sketched at the end of this point):

        a) Below some amount of $, there isn’t enough critical mass to be useful. That $ varies wildly by discipline and area; some work can be done by individuals. At the other extreme, big physics is expensive. A half-built CERN LHC is really not very useful.

        b) Then, as you add $ and people, you get an inflection where useful stuff is going on.

        c) Then, you reach a 2nd inflection, where adding a lot of people and money probably isn’t helping much.

        This is like Silicon Valley startups. People often build (small) teams consisting of A-players, but as companies grow, it is impossible to keep that level forever.
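
        A minimal sketch of that S-curve, assuming a standard logistic form (the symbols V_max, k and x_0 are illustrative placeholders, not values taken from the comment):

        $$ v(x) \;\approx\; \frac{V_{\max}}{1 + e^{-k\,(x - x_0)}} $$

        where x is the $ spent, V_max the saturation value, x_0 the midpoint, and k the steepness; the two “inflections” described above correspond to the lower and upper knees of this curve.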

        3) I of course support funding for science … but neither individual researchers nor schools should expect blank checks from the taxpayers.

        a) Science still has one of the best reputations among the public for getting to the truth. For all the complaints, it is certainly more self-correcting than, say, comments by politicians, some thinktanks and many newspapers/magazines.

        b) That reputation needs to be guarded.

        c) Universities, journals and individuals need to take misconduct seriously.

        Having been involved in (or knowing nonpublic info about) a bunch of misconduct complaints over the last few years:
        – I know offhand of at least 4 schools that have reacted swiftly and (I think) well. One has not.

        – I know of one publisher that handled a complaint well, and one that has not.
        Editors have varied, including some who have simply ignored complaints.

        By the nature of all this, it is nontrivial to find public examples.
        For instance, suppose a junior person is found to have committed misconduct, under circumstances in which one might think they were badly trained. They might have to resign, or their contract might not be renewed, but should a university publicize that? Depending on the circumstances, maybe, maybe not. Likewise, given inquiry/investigation/adjudication, the last step may well have room for negotiation, possibly involving lawyers. In some cases, “You are out of here, but quietly” may not be unreasonable.
        Of course, for Federally-funded work, extra rules kick in.

        The NSF cases usually do not identify those involved, but they at least describe them.
        The ORI cases are probably the best source in one place of detailed examples.

        Efforts like RetractionWatch are of course quite useful.

        4) I would suggest that consistent training is not a bad idea. I have seen some truly astonishing interpretations of the standard misconduct rules, even by professors or department heads who one might think would know better.

        5) It would probably have a salutary effect if, in cases where ORI or NSF found that a university did its best to cover up real misconduct, they considered debarring the institution for a while, not just the researchers. To my knowledge, that has never been done, but speaking as a taxpayer, I’m not very keen on schools getting substantial Federal funds if they prove themselves unable/unwilling to follow the rules. Good professors could take their grants elsewhere, but even a year’s visible institutional debar on new grants would make the point, as per Candide.

  7. You get what you measure, whatever the reward system to which you apply the measure. The problem is that the measures used to judge good science are not, strictly speaking, measures of good science. They might have been good measures when the people being measured were unaware of the methods, but once people are aware of the way they are being judged, it is easier and more effective to focus on raising the scores rather than to focus on delivering good science and counting on the scores to rise as a result.

    When a PI’s judgment of his or her grad students’ abilities is based on how closely their graphs fit his or her preconceived notions of what they will look like, then the PI will see graphs that look like what was expected. When a researcher gets tenure based on how many journal articles s/he gets published in the appropriate journals, s/he will get more articles published in those journals. When the value of publication in a journal is determined by how many times its publications are cited, they will be cited more in the same journals. Once the measure is in place, the significance of the measure can become lost in the race for high numbers.

    1. Yup. The ironic thing is that cheats like Marc Hauser and Stapel set the bar for everyone else, and the bar is not lowered even when it is clear that their output was not real!

  8. I totally agree that we need fewer scientists. Their growing numbers have only generated more crappy papers without any real benefit. Every fraud costs a lot and lasts too long, doing too much damage. Science is NOT for everyone, since telling the truth is hard, both technically and because it takes guts.

    Also, whistleblowers have to become more aggressive and more numerous. We need more watchdogs and to eliminate the scum. The following kind of action should be praised and incentivized:

    http://www.science-fraud.org/?page_id=6

    1. Clearly, science is not for everyone. But who decides who will stay in science and who will not? Unfortunately, the best scientists are not the ones who are selected by the system. Those who are selected are often the charlatans and the manipulators. The same could be said of politicians.

  9. This caught my eye and held it like a vise. “RCR [Responsible Conduct of Research] instruction cannot be expected to establish basic ethical standards in a classroom of young adult graduate students. However, variations on such a course might be effective for the nonprofessional staff, for whom such training is not now required. Members of this group might be less likely to fabricate or falsify data if they have a better understanding of the goals of the research in which they are involved. They should know how their findings could contribute to advances in science and/or improved medical care and the serious consequences of publishing fraudulent data.”

    Say what? Graduate students are immune to instruction in ethics? But the “nonprofessional staff” can be taught ethics?

    And who are these “nonprofessional staff”? Are they the holders of master’s degrees and Ph.D. degrees who translate the vague promises of the PI into workable, do-able projects? Who do the bulk of the work and stand ready to fill in for every graduate student who slacks on his/her research? Who are asked to come up with fund-able ideas for faculty members to propose, to review and correct manuscripts before they are sent out to journals, to edit posters and slides before the graduate students head off to conferences? When there is a research goal beyond “bring in money”, members of this group have a firm understanding of what it is.

    In my opinion, staffers know only too well that the real objective is to find something publishable and to generate papers so the PI can get tenure/promotion and future grants. There would be fewer manuscripts submitted if the goal were science. But as “mike” pointed out, you get what you measure, whether or not the thing you measure is a good proxy for what you want to measure.

  10. How can institutions prevent scientific misconduct?

    Step 1: Want to prevent scientific misconduct.

    I have yet to come across an institution that really took it seriously, except in the very unusual case that bad publicity might ensue if it didn’t. At present, if institutions come across an example of misconduct and they think they can get away with it, they will cover it up. Or, and this is a very important exception, if they catch a student plagiarising from Wikipedia, then the indignation of academics knows no bounds.

    Every other suggestion is pointless until the culture of institutions changes, and since institutions are really just the scientists that make them up, until the culture of scientists changes.

    As it stands this is a little like a guild of foxes asking themselves how they can protect the hen-houses better.

    1. I agree that institutions must want to prevent scientific misconduct before this goal will be achieved. Institutions are just one more level of the system in which counting has taken the place of evaluating. Number of degrees awarded, number of papers published, number of dollars received in grant money, number of patents — the money comes in and the products go out.

      But I disagree with your idea that the university is really just the scientists who work there.

  11. I think that bringing in more money might work in the short term, but it probably isn’t a long-term solution to the problem. If tomorrow grants got, say, 50% bigger, were given e.g. 7-10 years to complete, and the success rate went up to 25-30%, at first it would certainly relieve some of the pressures discussed above, no doubt. But relief is usually experienced as a break from pressure, right? So after some time you would get used to it, no relief would be felt, and competition (for grants, tenure etc.) would still be in place. You would consider increasing the pace of your work to get an edge over the others and, since you have more money, you would hire more people and buy more equipment. Which brings us back to square one, doesn’t it?
    In order for this kind of thing to work you would need to keep the number of people working at about a constant level, with little competition, more or less comparable salaries and guaranteed funding. Bleh… it sounds more and more like a centrally planned economy, and this approach does not really stimulate growth and innovation.
    The thing is that if more money is fed into a system/industry (through whatever channel: government, NGOs, private initiatives, whatever) it is only natural that it would just grow bigger, right?

  12. There is nothing new on this matter. Research misconduct will be reduced if those who have the tools use them. Scientists who commit research misconduct must know that time and delay are not their “friends” and that action will be taken against them very fast. Otherwise, the longer an investigation of misconduct takes, the better for them and the worse for the whistleblowers. Obviously, education in ethical principles matters, but you must also know that something serious could happen if you do not comply with them.

  13. As the author of the paper being discussed, I am pleased that it has elicited these thoughtful comments.

    The Watch summary of my paper notes that I believe improved mentoring would serve as one way to address the problem. The paper itself contains two recommendations to achieve that:

    1. The quality of mentoring should be made a factor in the evaluation of training grants for funding. One measure of quality would be the ratio of trainees to mentor, which might vary within specific disciplines.

    2. Mentors should be made to share responsibility for any fraudulent work published by a trainee.

    I would be very interested in comments.

    Don Kornfeld

    1. I appreciate your suggestions to improve mentoring, but I am afraid that mentoring has gone the way of all things in today’s scientific environment. Mentoring is all the rage now, so much so that practically everything is now classified as mentoring as PIs struggle to inject a “mentoring component” into their grant proposals.

      A first step might be to require that all mentors be qualified. A person who does not know how to do a procedure should not be teaching others how to do it, since this usually results in the trainees being taught incorrectly or being given unhelpful and contradictory instructions. I have some sympathy for trainees who have been inadequately taught and yet are expected to produce valid results. Yes, it is wrong of them to cheat, but I can see that more people besides the cheating trainee are to blame for the situation.

      The ratio of trainees to mentor is important, but so is the actual time spent by the mentor with each trainee. Yes, an unrealistic ratio is a tip-off that no true mentoring is being done, but a reasonable ratio does not guarantee that the mentor will be actually engaging in a reasonable amount of mentoring. I was incredibly lucky to have the world’s best and most engaged mentor, and I saw the difference between what I got and what other students got. But how are you going to distinguish between true mentors and PIs who just want the money and will lie to get it? So many of them talk the talk but don’t walk the walk.

      Holding mentors partially to blame for the bad outcomes of their students could be a good idea if there is enough money to allow the mentor to review in detail the output of the students. Many PIs avoid responsibility for results by refusing to look at raw data. They want to see numbers in a table or output from a computer program. They don’t go to the research location. They don’t observe whether the procedures are being done correctly. They don’t record a few data points themselves so they can compare their data with the data of the students. This is both a convenient way to evade responsibility and a necessity when PIs are stretched too thin by the unrealistic demands of their institutions. High throughput and high productivity and so-called efficiency are antithetical to a high-quality education. It takes lots of time and money to produce excellent results. Time and money are what we don’t want to invest these days.

      In addition, I wonder how you will handle the case of a truly bad student. I have never taken credit for the students who get super high A grades in my classes, and I have never accepted responsibility for the students who get Fs. It’s not my doing. Super-good students come in that way and they do well. Students who flunk are poorly prepared (bad at math, bad at science) or don’t attend lecture or are preoccupied with other things during the semester. I have encountered graduate students who cannot be taught no matter how much time is invested in them. They exhaust the patience of mentor after mentor and still they are careless and lazy and their research results are crap. Should the PI dismiss such students when it means that valuable tuition from a foreign country will be cut off, that the program will get a bad reputation with the foreign country and future foreign students (and their dollars) will be directed elsewhere, that a project will be delayed while a new graduate student is recruited? PIs are under pressure to forge ahead with the students on hand in order to meet their research goals and the financial goals of the university. As always, true education and real science are the lowest priorities, while money and deadlines and artificial measures of production take precedence. Should the PI pay for the sins of the student now and forever when the student is unsalvageable and yet the PI is under pressure to make the best of the situation? When universities will accept responsibility for a person who demonstrates on his/her first job after college that s/he is spectacularly unqualified despite having been awarded a degree, when universities will accept responsibility for their students who go on to graduate school elsewhere, then PIs can be asked to take the blame for the moral failings of their graduate students.

      Just my opinion. Yup, I’m cynical.

    2. Donald, what do you mean by “Mentors should be made to share responsibility for any fraudulent work published by a trainee”? Do you mean to suggest that the mentor should be found guilty of research misconduct when a trainee is found guilty? Or that the mentor should be punished by the institution when a trainee falsifies data for a publication? Certainly ORI has never done this to a mentor. While mentors and other coauthors are damaged by having to retract or correct papers with falsified data by trainees, many have been praised by the community for making such fraudulent work public when it was found and for retracting the papers immediately (e.g., Leroy Hood at Caltech and Francis Collins at Michigan and NIH).
      Alan Price

      1. Alan,

        The implementation of my recommendation that mentors share responsibility does require clarification.
        I don’t believe all mentors need necessarily bear the same degree of culpability as their trainees.
        There are situations where it is the mentor who identifies the trainee’s misconduct, but why not prior to publication?
        Therefore, the basic question to ask is: What could this mentor have known (prior to publication), and why didn’t he/she know it?
        Would closer oversight have prevented it? If so, was the mentor irresponsible, or assigned an unrealistic number of trainees? I am not familiar with the two cases you cited where Hood and Collins were praised for identifying published fraudulent work. What were their roles prior to publication? Could you send me info or references?

        To the best of my knowledge, ORI has never sanctioned a mentor but it should have the authority to do so.
        The mentor’s institution can also invoke an appropriate penalty.

        The institution should also consider to what degree its own policies have contributed to the problem.
        For example, is there inadequate training of mentors? Are there too many trainees per mentor?
        Are there unrealistic expectations of laboratory productivity?
        If so, the institution should be required to address those issues.

        Don

  14. Interesting discussion. One of the efforts Elsevier has made lately is an educational program aimed at students and early career researchers, see http://www.ethics.elsevier.com (We have also been offering free workshops on publishing & research ethics for a few years now). The hope is that no one can claim they didn’t know better….DrugMonkey’s views on training notwithstanding!

    1. Nice try elsevier, but your “Quiz” fails on a number of levels. For example…

      Question 6. Let’s say Cell accepts your paper. Is it okay to submit a version of that paper in a language other than English to a journal in a different country. Does that count as a duplicate submission?

      This is a yes/no answer, but the question is actually two questions. The answer to the first one (is it okay?) is no. The answer to the second one (is it duplicate publication?) is yes. How can you expect people to answer such a double question correctly when it is written so badly? Also, FYI, the first question is missing the correct punctuation.

      Question 7 You have worked long and hard on a research project. You feel your research is applicable to a variety of disciplines and you can envision the paper appealing to a range of audiences. Is it ok to ‘slice up’ the core data to create several unique papers, based on your core research, that can be submitted to a variety of journals in different fields?

      No, it’s not ok. While it sounds like a good way to maximize your research and potential for getting it published, slicing up research into several papers for different publications is considered a manipulation of the research and publishing process. This should be avoided at all costs.

      Several people would disagree with this. There is nothing wrong per se with “salami slicing”, provided you never ever try to publish the same slice of salami in 2 different journals. If a story can be broken into components and put out in least-publishable units, with each standing alone as an academic contribution, why should the authors be forced to write up the whole story in one big paper? An example would be a clinical trial: one paper for the results based on planned end-points, but then during analysis they discover some other interesting effect which was not a planned end-point. What’s wrong with spinning another paper out of that additional data? Why should they be forced to include it in the main paper, especially if it dilutes the flow of the paper and detracts from the point of interest? If you take Elsevier’s viewpoint here, people would only ever publish complete/whole/integral stories, which is not how science works.

  15. Let me step in and try something a little less polite and scholarly.

    As most observers of human nature (outside of the lab, anyway) know, if there are no or low consequences for bad behavior (and positive rewards), you’ll get more of that behavior. Basically, fabrication, falsification and plagiarism (FF&P) are part of an overall risk-management strategy: “Can I get away with this?” and “If I get caught, what’s the worst that can happen?” Since there are usually low chances of being caught, and even then nothing REALLY bad happens to the culprit, the behavior increases. Throwing money at the problem (as in more funding for more scientists) will just boost the N of bad players. If you want to minimize FF&P, in addition to properly training students and staff to “do science right” there needs to be a real cost to malfeasance. Otherwise, cheating becomes a gaming tactic and all other approaches to correct for it become various wastes of time and energy with, as we have seen, little payoff. The problem just gets bigger each year (and has for a LONG time). NIH “punishments” for cheating are a 3-year stint in the no-funding zone… how about “you’re fired!” from any career in science, plus paying back any taxpayer funds you were paid (such as personal research salary)? If there were a true cost to cheating, it might not be so attractive. Sure, there will always be some gamers in the system, but their numbers would be lowered enough to manage in other ways. The challenge is to get the problem shifted from “runaway” to “under control”.
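
    A minimal way to write down that risk calculation, assuming a simple expected-value framing (the symbols B, p and C are illustrative placeholders, not taken from the comment): cheating looks like a winning gamble whenever

    $$ B \;>\; p \cdot C, $$

    where B is the expected benefit of the fraudulent result, p the probability of being caught, and C the penalty actually imposed. Raising p (better detection) or C (real consequences) is what flips the inequality, which is the point being made here.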

    But, hey, that’s just my opinion. And as Dennis used to say, I could be wrong.

    1. HomieG: I think you are making an implicit assumption that no other penalties accompany, for example, a 3-year NIH debar. That says nothing about any academic misconduct penalties imposed by institutions.

      Here is a good experiment to try:

      1. Go to the ORI case summaries. Open all.

      2. Sample those, especially where people have less common names.
      Picking established researchers is also helpful.

      3. Check their institution and see if they are still there.

      4. Via Google Scholar (or other tools), check their publications and see if anything happened after the debar.

      [I’ve looked at the case summaries, have not done the whole experiment … but I’d guess this didn’t help some people’s careers. But these are like court cases: I’d guess people take into account circumstances. For instance, if serious problems are found in work by professor+student, and it is clear both knew about it, I’d guess the professor would be in more trouble than the student. Likewise, I think undergrads may get treated more leniently than PhD students. I used to fail students for copying final (programming) projects, but I don’t recall ever finding that from grad students. If I had, I suspect the consequences would have been more serious than just an F in the course. For what it’s worth, many (honest) students told me over the years that they were very happy to see this. They worked really hard for their grades and knew they meant something.]

      Really, all this isn’t just to punish cheaters, or to remove junk from the research record, but to encourage honest people to think there is fairness at least some of the time.

      1. Dear John, I wish I could say the same for Brazil. However here it is quite on the contrary, cheaters are encouraged, and the bigger they are the less they suffer when caught…

  16. Are there any practical measures institutions could take to mitigate the risk of misconduct in the first place (scanning and signing off lab books, etc.)? Or would any such measure take up too much time?

    1. Time expended by mentors should not be an issue. They are receiving federal funds to train scientists. They are therefore obligated to do so. A review of ORI findings by Wright and Titus found significant evidence of inadequate supervision of trainees found guilty of misconduct.
      I believe that were the NIH agencies which provided the training funds made aware of these deficiencies, the institutions involved would take whatever action was needed to remedy the problem before their next training grant application was submitted.

      Don

  17. It has occurred to me that another suggestion I made in my paper has not been discussed. I recommended that investigators be made aware that the submission of false data in a grant request submitted to a governmental agency is a felony punishable by imprisonment and/or a fine. In 2006 an investigator was sentenced to a year and a day for that offense.

    Would greater awareness of this risk serve as a deterrent? Would a well-publicized prosecution have that chilling effect?

    Don Kornfeld
