Are you liable for misconduct by scientific collaborators? What a recent court decision could mean for scientists

Richard Goldstein

Retraction Watch readers may have followed our coverage of the case of Christian Kreipke, a former Wayne State researcher who was recently barred from U.S. Federal funding for five years. That punishment followed years of allegations and court cases, along with half a dozen retractions. The case has been complicated, to say the least, and led to a 126-page decision by a judge last month. Here, Boston-based attorney Richard Goldstein, who represented the scientist in Bois v. HHS, the first case to overturn a funding ban by the U.S. Office of Research Integrity (ORI), tries to explain what it could all mean.

Can you commit research misconduct if you fail to detect false data from another scientist?

The answer is yes and here’s how it can happen.

You work in a well-regarded laboratory that receives government funding. You are frequently a principal investigator (PI) and a lead author. The lab suffered from some disorganization, so when you took over, you demanded quality work and hired a new lab administrator.

Things are generally good but life in the laboratory is demanding.  The size of the lab makes it impossible for you to validate every piece of data.  So, you often have to trust that a colleague’s work is reliable and truthful, including from collaborators at other facilities.  Funding, as always, is a problem, which means you can’t buy enough equipment and data security software; tracking who did what is difficult.  Some lab employees (inherited from your predecessor) have professional or ‘personnel’ issues and you suspect some will leave the laboratory. And of course, there is growing pressure to publish, attend conferences, make new findings, and to keep the funding stream going.  There is never enough time.

All of that probably sounds familiar, but here’s where our story takes a turn for the worse.

One day, you are summoned to a meeting with the institution’s research integrity officer (RIO) and told that some data in a paper have been falsified and that a misconduct investigation has begun. You are shocked. You know the data but didn’t validate them personally before publication, and you don’t know precisely who did the work.

One always worries about errors, but deliberate falsification and misconduct? You are angry but soon become uneasy as you wonder why you weren’t the first to hear about the problems with the data. Your shock and unease turn to dread when the RIO looks you square in the eye, hands you a sheaf of documents, and says you are being charged with misconduct, being placed on leave, and must immediately turn over all of your files, data, and laptops.

As the weeks go by, you learn you are the only person in the lab under investigation. You maintain your innocence and no evidence ever emerges that you falsified the images or knew they were false.  You have your suspicions as to who falsified the data, but the dean doesn’t seem interested. No one else is charged. It is starting to feel like you’re being railroaded.

After hiring a lawyer, you protest this “selective prosecution.”  You also alert the administration to the waste of research funds and financial mismanagement. Your institution finds you – and only you – committed misconduct. You are terminated.

Years pass; lawsuits and hearings pile up. You win a victory when your employer is found to have retaliated against you — but the misconduct finding stands. The U.S. Office of Research Integrity (ORI), adopting the institution’s report, brings formal misconduct charges and bars you from receiving Federal funding for 10 years. How is this possible if you had no knowledge the data were false? You believe it is wrong and appeal, hoping a neutral judge will rectify this injustice.

The administrative law judge (ALJ) agrees to hold a hearing, the first such ALJ hearing in over a dozen years, and after months of waiting and more briefs and legal fees, he issues a 126-page decision.

Your debarment is upheld, although the judge reduces it to five years.

Not just a hypothetical

If you’ve been a regular reader of Retraction Watch, you will recognize this as the case of Christian Kreipke, a researcher formerly at Wayne State University in Detroit.  Although the long-running Kreipke case has been discussed previously, the ALJ’s May 2018 decision has something important for everyone. I suspect that – for many years to come – it will be referred to by ALJs, by ORI, by university administrators, by RIOs, by investigation panels, by science journalists, and by lawyers.

However, there is something especially important in the decision for senior scientists, especially PIs and lead authors.

As a lawyer who represents scientists in misconduct cases, I find one of the most difficult issues to be the responsibility of lead authors and PIs for work done under their supervision. Although the Kreipke decision is important for many reasons, this part of the decision could affect lab practices everywhere.

Perhaps the most significant fact in the case is that Kreipke did not falsify or fabricate any data. Nor did he know any data were false. The first time he learned of false or fabricated data was when he was told by the RIO after the commencement of the investigation. Kreipke’s problem was that he didn’t detect the falsification done by another scientist.

Nevertheless, the ALJ concluded, without much difficulty, that Kreipke committed misconduct. Nor did it matter to the ALJ that Kreipke was the only one charged with misconduct.

How is this possible?

What it means to be “reckless”

Followers of research misconduct cases will know that a scientist can commit misconduct merely by “recklessly” allowing the inclusion of false or fabricated data. But what does it mean to be “reckless?” The Kreipke decision is important because it puts some “meat on the bones” of this legal term. The ALJ said that including false or fabricated data without validating its accuracy is reckless if one “used materials without exercising proper care or caution and disregarded or was indifferent to the risk that the material were false, fabricated, or plagiarized.” It is not a defense if you assumed others performed research reliably and truthfully reported those results to you. As far as the ALJ was concerned, Dr. Kreipke didn’t do enough to validate the data.

After Kreipke, can one rely on the work of others? One of the witnesses said “[We’ve] known each other for 25 or 30 years. And because of that association, I would have accepted what came out of that laboratory without question unless I saw something.” Even though this kind of trust was a common practice, the ALJ decided it was not a defense because it may result in false or fabricated data. A PI or corresponding author cannot just accept “on faith” that the data were correctly labeled and were accurate representations, even if they came from a longstanding collaborator or a trusted scientist in one’s own lab.

The ALJ said that an author, editor, or other contributor is not presumptively liable for false material just because their name is on a grant application or article. However, if one is a PI or first author, that person is presumably responsible for the content of the work.

Higher validation standards?

What does this mean? More importantly, what do PIs and lead authors now have to do to protect themselves from allegations of misconduct? Can any data be trusted, or does the PI or lead author have to verify everything personally?

Some would say the Kreipke decision sets an unreasonably high standard for validating research on a collaborative project and upsets long established norms.  Others might say the ALJ got it right and Kreipke, as a lead author and PI, had a personal responsibility to police the work of others and to ensure it was accurate and verifiable.  

What is now clear is that senior researchers, lab supervisors, PIs, and lead authors can no longer accept the work of collaborators at other labs or, for that matter, the work of scientists in their own lab, without insisting on (or conducting) some level of validation.  More importantly, even if there has not been a hint anyone’s work is suspect, the senior scientist, PI, or lead author must demand the work be validated. In a misconduct case, the lab’s procedures for verifying data will now come under close scrutiny. The Kreipke case establishes that a PI, lead author, or lab head who fails to validate data or employ adequate validation procedures may be personally liable for misconduct, even if the scientist had no knowledge of falsification or fabrication.

Accuracy in reported research is a core value. Scientists entrusted with government funds should take all necessary steps to ensure research issued under their name is correct. The possibility of a research misconduct finding creates a powerful incentive for scientists in a supervisory capacity to demand valid and accurate data. The Kreipke case shows that a scientist is at risk for a research misconduct allegation if he or she does not validate data, even if it comes from a longtime collaborator or a trusted colleague.


26 thoughts on “Are you liable for misconduct by scientific collaborators? What a recent court decision could mean for scientists”

  1. I’m not keen on Goldstein’s narrative at the start. Specifically this bit…

    You work in a well-regarded laboratory that receives government funding. You are frequently a principal investigator (PI) and a lead author. The lab suffered from some disorganization so when you took over…

    Either you’re the PI of the lab, or you’re not. You can’t “frequently be a PI”, there’s no such thing in academia! There’s senior authorship, if that’s what Goldstein is getting at, but that’s not the same as being a PI (the PI).

    Is Kreipke claiming that he inherited all the bad stuff from his predecessor when he “took over” the lab? Again, this is not really how things work in academia – labs don’t just get given to someone with all the (salaried) people included when a senior scientist leaves. If Kreipke was really put in charge, then surely he would have had the ability to make hiring/firing decisions, and keep only those people he trusted? Instead, he chose to keep them and (it seems) put his own name as PI and senior author on their data when he published it.

    Then there’s the glaring issue of authorships – many of the papers retracted by Kreipke are with him as first author, not senior (last) author, and not as PI. Generally in academia, the first author is the person who generates the data. This does not fit with the notion of coming into a lab and taking over as PI.

    Missing from the narrative is how the old PI came to leave, the succession of power from the old PI to Kreipke, and who the real bad actors were. The main PI associated with Kreipke appears to be Jose Rafols, but he’s disappeared from Wayne State’s website.

    Perhaps someone from there can chime in on how Rafols came to leave? It seems from Goldstein’s narrative that Kreipke is trying to blame his predecessor for all this, but maybe he can’t say it directly because that might open up defamation liability.

    Once again, lawyers get involved and everything gets confusing!

    1. “Once again, lawyers get involved and everything gets confusing!”

      Exactly! Best Comment of the Year!

    2. Sometimes that is how things work. It’s certainly not the most common scenario, but I’ve seen it happen multiple times. And unfortunately, hiring/firing decisions aren’t always that straightforward.

      What you think is “generally” true in academia isn’t entirely accurate. Consensus about authorship order and responsibility varies widely by discipline (and sometimes by location), and there is limited formal guidance on the topic generally. ICMJE to my knowledge gives no specific guidance on author order beyond defining the responsibilities of corresponding author, for example.

      It’s so easy and tempting to apply our own experiences to situations to justify our own proclamations of what “should” be the ideal case, but we should avoid doing so. Even your PI comment isn’t accurate — yes, in many cases the lab’s PI is the PI for any given project being done in that lab, but again, this is not always the case. I can pull any number of IACUC or IRB filings at my institution where the named PI is not the same name on the lab door.

      I will also point out that one cannot simultaneously advocate for severe, career-ending consequences for misconduct and bemoan the increased role of attorneys in these processes. The latter is to a significant degree a consequence of the former.

      1. “Sometimes that is how things work. It’s certainly not the most common scenario, but I’ve seen it happen multiple times.”

        Could you give some examples? It sounds like you must be near the top of the management tree to have “seen it happen multiple times”.

    3. Of course you can “frequently be a PI”. Whether or not you are the PI is determined on a study-by-study basis… It doesn’t depend on “do you run a lab or not”.

      Because most of my research is collaborative, I am not the PI on most of the research that I do. But I am frequently a PI.

        1. No, Boboramus is not. The principal investigator is the head of a grant (the “study-by-study basis”). The PI may or may not be the lead author on any papers produced through the research funded by the grant. They may or may not be the senior author on papers produced through research funded by the grant. They may or may not have their own lab – there are a number of post-doc PIs out there, because to be a PI you again just have to be the head person in charge of a grant.

    4. Mr. Brookes: My knowledge of the case comes from the ALJ’s 126-page decision, which I had to summarize given space limitations. I urge people to read it but, unfortunately, despite its length, there are some ‘facts’ it doesn’t discuss in much detail, such as the state of the lab when Dr. Kreipke took over. My posting also had to gloss over the fact that there was a lot of conflict between lab members, including that one scientist obtained a ‘protective order.’ In any case, I am not defending or exonerating anyone; my objective was to point out that ‘senior scientists’ (be they PIs, lead authors, or whatever) are at risk if they don’t validate the work of others. Judging from the comments, that is a sentiment many seem to share.

    1. Implement practices from industry:
      1. Institute and document training programs for any protocols that are standard throughout your lab.
      2. Institute a calibration and preventive maintenance program for all equipment from the basics (e.g. pipettes, scales) to the complex (e.g. dynamic material analyzers, flow cytometers).
      3. Review and sign lab notebooks.
      4. Look at raw data with your trainees and ensure the appropriate statistical analysis was done.
      5. Set foot in the lab on more than one day per year.
      6. Consider setting up an audit system. Industry labs and manufacturing facilities are audited on a regular basis – at least annually if not more. No reason we can’t audit research labs other than no one wanting to foot the bill.

      The number of times that I witnessed poor lab practices in grad school at a research university left me rather jaded. It’s difficult for me to trust findings from a good number of academic labs because of this. Grad students training other grad students is like the blind leading the blind. Unfortunately, due to the nature of scientific academia, this can mean that the errors of grad students are passed down and even PIs don’t know how to properly conduct their research.

  2. So from which lab did the bad data originate? What is to prevent yet another PI from trusting data from this same source and falling victim?

  3. ‘What is now clear is that senior researchers, lab supervisors, PIs, and lead authors can no longer accept the work of collaborators at other labs or, for that matter, the work of scientists in their own lab, without insisting on (or conducting) some level of validation.’

    Excuse me, isn’t this what everybody does when coauthoring a paper? Or are we talking about ‘Do this and put my name on the paper. Add X and Y because they need papers and Z because I owe one’?

  4. I too would like to read more articles on how to validate data, or at the very least on what the core principles would be to ensure that this story does not repeat itself with other innocents.

  5. “After hiring a lawyer, you protest this “selective prosecution.” You also alert the administration to the waste of research funds and financial mismanagement. ”

    I take that to mean that Kreipke alerted the administration to the waste of research funds and financial mismanagement after he hired a lawyer and after the misconduct investigation started.

    Could attorney Richard Goldstein clarify that point?
    If Kreipke alerted the administration to waste and mismanagement after the misconduct investigation started it sounds like clumsy virtue signalling and distraction, verging on retaliation.

    1. Mr. Pessoa: The facts of the Kreipke case are detailed and had to be condensed for the posting. If you read prior posts on the case, you will see he had a dual appointment (at the VA and at WSU) and the retaliation occurred at the VA, not at WSU, which is where he was found to have committed misconduct. The selective prosecution issue is different from retaliation and, as the ALJ noted, troubling. Ultimately, neither concerns about selective prosecution nor proof of retaliation prevented the ALJ from affirming the ORI debarment. I’d be happy to discuss this in greater detail if you wish.

  6. Disgruntled is absolutely right. In industry, auditing is done both by internal auditing groups and by regulatory agencies as well. And in addition to the auditing, lab notebooks & other data sources have 2nd person sign-off. There is a very high level of accountability.

    When outside labs are used (i.e., outside the company), it’s common practice to audit them as well.

    Good Laboratory Practices (GLP) are common & often required in industry. This includes validating that all instruments are functioning as required. While fraud can still happen in industry, with all of these checks in place, it’s very hard to do so (particularly at large companies that have these many controls in-place).

    Our lab in grad school had nothing like this (virtually no checks & balances).

    It’s ironic that the public trusts academic labs more so than industry labs.

  7. The problem might go away if the latest IT capabilities were to be employed to keep everybody honest from the get-go. Blockchain smart contracts might do the job. See: Nugent T, Upton D, Cimpoesu M. Improving data transparency in clinical trials using blockchain smart contracts [version 1; referees: 3 approved]. F1000Research 2016; 5:2541 (doi: 10.12688/f1000research.9756.1). https://f1000research.com/articles/5-2541/v1.

    All of the traditional approaches that depend on assumptions about the trustworthiness of people haven’t worked, so why not take a chance on trying something new? Moral hazard puts everybody at risk of dishonesty. It is part and parcel of the human condition. Rationality argues for a change in behavior. To resist is insane by one definition of “insanity”: “Repeatedly doing the same thing expecting different results.”

  8. @DTX

    I have worked in industry and in collaboration with academia, and I’m afraid all the checks and balances you discuss are thrown out of the window when data “analysis” occurs. This includes clinical trial data, where professors/medical PIs, grant PIs, and an “external” auditor all perform checks, and it even includes the final report sent to the MHRA. As a 30+ year researcher I don’t believe 99% of published research (this is the medical field, UK), preclinical and clinical, and I have done both.

  9. I think maybe the dude could pull off “I dunno where this data came from I just published it” for *one* paper. I might have some sympathy for that; miscommunication is a possibility.

    This is like… 6 papers? 7? If your entire career is built on data you can’t even source and try to disavow when it turns out to be fraudulent… come on. This article doesn’t even pass the most basic sniff tests for plausibility. I think that if every paper I’d published turned out to be full of data I couldn’t explain, people might rightly conclude that I’d committed career-long fraud.

  10. “The ALJ said that an author, editor, other contributor is not presumptively liable for false material just because their name is on a grant application or article. However, if one is a PI or first author, that person is presumably responsible for the content of the work.”

    I’d go farther.

    Call me old-fashioned, but I still believe that if you put your name on a paper as an author, you are responsible for the entire paper, even if you are buried in the middle.

    Yes, the first authors, senior authors and – I would argue – seasoned authors have prime responsibility for inspecting the data and assuring that it is valid.

    That’s why arguments such as – those weren’t “my” papers (even though I was an author) that were retracted – did not ring true to me.

    http://retractionwatch.com/2018/03/05/probe-into-carlo-croce-reached-defensible-and-reasonable-decisions-says-external-review/

    If you are an author, you are responsible. Especially if data in the paper is questioned (through legitimate channels), it’s your responsibility to ensure that the data is correct or that it is corrected. If the data was falsified or fabricated or seriously in error, it’s your responsibility to see that the scientific literature is corrected. Doesn’t matter where you are on the author list.

  11. Come on, people! Much of this discussion is entirely irrelevant. Misconduct is unrelated to position–PI, first author, last author, corresponding author, etc. In research misconduct, as with other misdeeds, the person(s) who did it, did it. There may be other people who were unwitting participants at one level or another and these or others may have some accountability. That doesn’t mean, however, they did the deed.

    If a bank teller embezzles funds from the bank, did the branch manager do it? No, the teller did. Maybe the manager didn’t have effective oversight or compliance checks in place, but the TELLER did the crime, not the manager. And maybe the manager will face some sanctions for lax oversight. But that does not mean the manager did the embezzling.

    It appears crystal clear that the issue in this case/decision is the fact that ORI’s “scientist-investigator” was not qualified (in the legal sense, not the common usage) by the attorney. Kreipke uses this in his statement indicating that the judge said he was unqualified. Again, that’s a legal determination; in common terms, I’d be willing to bet that the “scientist-investigator” is very formidable and unexcelled in his qualifications!

    The outcome of this trial is exactly and entirely because the judge excluded ORI’s investigative results. As stated previously, it’s difficult to imagine the leap of faith required to discount essentially everything a person says, but then accept that their misuse of data is accidental.

    It seems to me that this case has the potential to rewrite the whole story of research misconduct, moving it into the realm of quantum-like phenomena: no more local reality.

    1. Alex Runko was one of the good ones. The failure to adequately establish his qualifications and credentials for the purposes of this case falls wholly on ORI. He no longer works there, but he was excellent at his job and was very highly regarded by the RIO community.

  12. “Come on, people! Much of this discussion is entirely irrelevant.”

    Not really.

    It should be obvious to readers of RW that scientists – more than bankers – have not only a duty, but an obligation, to self-correct.

    IOWS we’re all responsible for self-policing data integrity. Ask any grad student who has taken a [good] research ethics course.

    It’s explicitly why whistleblower “gas-lighting” laws were created.

    What ORI has done – correctly IMO – is to validate this basic tenet, at least as it applies to first and senior authors. It should apply to all authors, and science would be better off if it did.

  13. What does this finding do to cross-discipline collaborations? The whole point of undertaking such a collaboration is that I don’t have the skill or capacity to test this, but I know someone who does. As I am not an expert, I can’t fully verify the data. Maybe I can notice a badly Photoshopped figure, but beyond that I have to trust that my fellow researcher is properly representing the data.

    1. Excellent observation. It has been my experience that a scientist on a project is often required to rely on the expertise of another. In my practice, many cases involve a ‘false’ image. Once the problem is identified, the other scientists on the project are often in the position of trying to explain why they didn’t catch it. As a lawyer, I can only say that there is no one rule that will apply in all cases; each situation is different. However, the Kreipke case makes it clear that a scientist must exercise ‘due care’ for the accuracy of work contributed by others. In practical terms, this means that, as a general rule, one must implement procedures and safeguards designed to guard against errors and fraud. What procedures and safeguards are adequate will differ in each situation.
