As Retraction Watch readers know, criminal sanctions for research fraud are extremely rare. There have been just a handful of cases, including those of Dong-Pyou Han, Eric Poehlman, and Scott Reuben, that have led to prison sentences.
According to a new study, however, the rarity of such cases is out of sync with the wishes of the U.S. population:
[T]he public overwhelmingly judges both data fraud and selective reporting as morally wrong, and supports a range of serious sanctions for these behaviors. Most notably, the vast majority of Americans support criminalizing data fraud, and many also believe the offense deserves a sentence of incarceration.
The new study was written by Justin Pickett and Sean Patrick Roche, both of the University at Albany (SUNY). The paper, “Public Attitudes Toward Data Fraud and Selective Reporting in Science,” was posted to the SSRN preprint server and has been submitted to a peer-reviewed journal, Pickett told Retraction Watch. He added that he and Roche are happy to share or post the data for anyone who wants to analyze it.
Pickett and Roche conducted two surveys: one of more than 800 people in what is known as a convenience sample, meaning respondents who are easy to reach (in this case, recruited through Amazon’s Mechanical Turk), and another of a representative sample of more than 950 people in the U.S. The first survey showed that
There is an extraordinary consensus among participants that both falsifying or fabricating data and selective reporting are morally unacceptable.
Respondents viewed falsification, fabrication, and selective reporting somewhat differently, but not dramatically so:
Over 90% of participants believe that scientists caught falsifying or fabricating data should be fired and banned from receiving government funding. However, most participants also believe that selective reporting deserves these same sanctions. The majority of participants believe that data fraud should be a criminal offense. Well over a third of participants hold the same view of selective reporting.
To wit:
[P]articipants absolutely do not view selective reporting as a “questionable research practice”; rather, the vast majority of laypersons in our sample believe this common behavior is morally reprehensible.
In the second survey — the representative sample — respondents felt even more strongly:
In the general population sample, most respondents who support criminalization prefer a sentence of incarceration, rather than a fine and/or probation. The results indicate that slightly over half of all Americans would prefer both to criminalize data fraud and to sentence fraudsters to a period of incarceration.
In other words:
The American public thus appears to be very punitive toward fraudulent scientific behavior, supporting a much harsher punishment approach than is currently in use.
(Incidentally, our recent informal poll produced similar results — 8 out of 10 respondents said scientific fraud should be prosecuted as a crime.)
Pickett answered a few questions about the study for us.
RW: Are there any limitations in the methodology you used that would affect the results or make them less generalizable?
Pickett: The main limitations are as follows. First, the sample in Study #1 is a convenience sample (from MTurk), which means there is no basis (from probability theory) for assuming generalization. However, the sample in Study #2 is a national probability sample, which does not have this limitation. That study, however, has a low response rate (29% or 2%, depending on how you define it). At the same time, meta-analytic research shows that response rates are not predictive of nonresponse bias (Groves, 2006; Groves and Peytcheva, 2008). A recent report to the National Science Foundation emphasizes this point: “nonresponse bias is rarely notably related to [the] nonresponse rate” (Krosnick et al., 2015).
Another limitation is that we may have obtained different results if we had provided respondents with different examples of fraud and selective reporting. For example, we didn’t provide any information about funding in the examples. We tried to use realistic examples, and we made sure that the examples pertained to different scientific disciplines. Nonetheless, I am sure that if we had focused on cancer studies and told them the studies are often government-funded, we would have gotten even higher levels of support for criminalization.
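A quick aside on the “29% or 2%” point: for online panel surveys such as the GfK panel used in Study #2, a response rate can be reported either as the completion rate for the specific survey or as a cumulative rate that multiplies together every recruitment stage (panel recruitment, profile completion, and the survey itself). The sketch below uses entirely hypothetical stage rates, not figures from the study, just to show how the two definitions diverge.

```python
# Hypothetical illustration of two common response-rate definitions for
# online panel surveys. The stage rates below are invented for the example;
# they are NOT the actual rates from Pickett and Roche's study.

panel_recruitment_rate = 0.10   # share of contacted households that join the panel
profile_completion_rate = 0.65  # share of panelists who complete the profile survey
survey_completion_rate = 0.29   # share of invited panelists who finish this survey

# Definition 1: survey-specific completion rate
print(f"Survey completion rate: {survey_completion_rate:.0%}")        # 29%

# Definition 2: cumulative response rate across all recruitment stages
cumulative = (panel_recruitment_rate
              * profile_completion_rate
              * survey_completion_rate)
print(f"Cumulative response rate: {cumulative:.1%}")                  # ~1.9%
```

Either way of counting is defensible, and, as Pickett notes, the meta-analytic evidence is that the response rate by itself is a poor predictor of nonresponse bias.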
RW: One of the arguments we often hear against criminalizing scientific fraud is that it would make whistleblowers think twice about bringing their concerns forward. Do these findings bear on that potential concern?
Pickett: …I’m not sure if the findings directly bear on the concern about whistleblowers. Is there actually any evidence that criminalizing fraud would have this effect, or is it just speculation? I would actually think criminalization would have the opposite effect. Many expressive theories of law suggest that criminalization and punishment send an important signal about the moral wrongfulness of behavior and can actually increase the strength of prohibitive norms. I would think that stronger norms against these behaviors would lead to greater reporting, not less.
RW: Others have pointed out that, when it wants to, the Department of Justice can prosecute once the Office of Research Integrity has referred a case. But that happens extremely rarely. What do these results suggest about the frequency with which this occurs?
Pickett: [T]he results suggest the public would probably want the cases to be referred and prosecuted much more frequently.
Count me with the vast majority: grant fraud is a double assault, stealing from the taxpayer and misleading them with junk information under the guise of science. It doesn’t get much worse than that!
I too am sometimes disturbed by apparently lenient treatment of egregious offenders. But we don’t want to make life even more hazardous for decent scientists tripped up by dishonest staff – a really good cheat in the lab could fool even the most vigilant of PIs.
Large portions of the US public will answer lock-em-up to all sorts of questions if you ask them right. The same people have no problem supporting Trump no matter what comes out of his mouth.
Imagine yourself as a patient of a physician who based her treatment on bogus research data.
Because it’s your life, no punishment is overkill.
What are we talking about here, the death penalty for fabricating clinical research data?
There has to be something between “slap-on-the-wrist” and “throw them all in jail”.
“don’t want to make life even more hazardous for decent scientists tripped up by dishonest staff”. That’s how it is at present. You are advocating more of the same.
“a really good cheat in the lab could fool even the most vigilant of PIs” goes without saying.
Very clever people can be caught out under cross-examination by less clever people.
The point of efforts to expose and stamp out scientific misconduct per se (i.e., misconduct unrelated to acts that are already recognized as crimes) is to keep those efforts out of the courts.
I run a lab. And I am an honest, scrupulous scientist. Fraud disgusts and horrifies me. There are many of us. But your answers scare me. I look at most of the gels my troops run (as raw data, not just final figures). I discuss the results regularly. I help process much of the data we collect. I discuss ethics at group meeting. I like to think I would know a weasel in my group if I had one, but I’m neither omniscient nor omnipresent, so my worry is one of honesty and humility, not one of shirking responsibility.
I’ve also learned the hard way in dealing with cheating undergrads that people only look and act guilty when they feel guilty: the really disturbing ones feel no guilt and thus don’t set off alarm bells.
Nobody is advocating criminal penalties for PIs whose graduate students do something unethical. Look for example at ORI investigations. They sanction the person who committed the fraud, not that person’s supervisor.
At a more senior level, I think the difference between honest mistakes and patterns of misconduct is pretty clear. Honest mistakes will happen with a single paper, maybe two. Some people who commit fraud once could pass it off as an honest mistake, but it would be very difficult for someone who commits fraud repeatedly to make a convincing argument for inadvertent errors.
“a really good cheat in the lab could fool even the most vigilant of PIs.”
Why the emphasis on non-PI cheats?
Just my own personal fears bubbling up. I’m all in favor of equal treatment of fraudsters regardless of rank! Horrible consequences for innocent bystanders in both cases – I don’t want to see PIs going to jail for things they couldn’t have known a devious trainee was up to, and I don’t want to see a trainee used as a scapegoat for things the PI did without his or her knowledge or pushed the trainee into.
I also think we incarcerate too many people for non-violent offenses in this country. Yes, there should be serious sanctions for proven offenses, and yes, I’m shocked that certain people still have jobs in science, but this knee-jerk lock-em-up mentality costs the taxpayers a ton of money.
I worry that this blog is becoming accidentally anti-science and anti-intellectual. We need good, honest people going into science for the right reasons, and frightening them the way this initial post did does not help. Dedicating one’s life to science for the public good is already becoming a miserable, stressed-out, life-destroying choice. Fraud needs to be exposed and stopped, but most scientists, despite what you read, are not criminals. The vast majority of scientists I actually know are hard-working individuals with good intentions.
As I read these comments, I cannot help but think of David Baltimore and what happened to him when people thought he was “hiding” something. In the end, he was vindicated, but it was a pretty nerve-wracking time for all those involved.
I sure hope that RW does not inadvertently result in the type of detrimental effects that you worry about. That said, I really appreciate this and your earlier posts.
Your innuendo hurts. The personal perspective I happen to have, and thus to contribute, is that of an honest PI.
Of course cheating at any rank should have serious consequences.
Supporting consequences for people who are stealing from the government does not make someone a Trump fan. Supervisors who say they cannot catch a dishonest scientist may be shirking responsibility. It is unpleasant to call someone out on bad behavior, so the easy excuse becomes: I don’t like confrontation, so let’s just say I didn’t know about it.
Saying “I didn’t see it” seems like poor oversight or poor quality control. Saying “we outsourced the lab work and they produced incorrect or out-and-out false results” seems like the same thing, especially if those labs are used repeatedly. If you can’t spot deceit and cheating until it’s too late, maybe an Investigations 101 course is in order.
When the PI is guilty of fraud in the lab, whether by falsification, fabrication, or plagiarism, and the graduate student reports the misconduct to the university, the graduate student becomes the target, suffers backlash, and may even be punished for blowing the whistle. When it is the other way around, the grad student is thrown under the bus. If the universities cannot properly police themselves, protect the whistleblower, and punish the guilty PI, how can an example be set for upholding scientific integrity? Case after case that ends up at the ORI ends with a slap on the wrist for the guilty scientist and no more: three years’ probation in applying for grants, at the most. The ORI is weak in forcing universities to impose sanctions on the guilty, so there is no example set that is strong enough to thwart misconduct in the future. As long as survival in science, promotion, and tenure depend on grants, and as long as the ORI and the universities remain weak in enforcing sanctions against the perps, the incentive to commit fraud will continue unchecked in most cases, because it is so easy to get away with. Whistleblower protection also remains weak in the private sector. We have much homework to do to correct this.
Agreed. Proven, purposeful, egregious fraud should at the least result in loss of tenure; it is especially appalling when trainees’ careers are damaged, and whistleblowers should be protected.
But I do object to the headline, which is based on two iffy surveys (one non-random and one with a low response rate) that may be borderline bad science in themselves, although not criminally bad. Surveys are notorious for getting different results depending on how the questions are asked.
Also, rather than incarceration (which costs more $$), a much better deal for the taxpayers would be to demand return of all the grant money used to “produce” egregiously fraudulent results, and let the perp pay it off while serving fries for the rest of his/her life.
Dear Beware of Overkill,
What I strongly object to is calling a study “bad science,” just because the opinions of the nearly 1,800 Americans surveyed happen to disagree with yours.
As noted in the article, meta-analytic research finds that response rates cannot be interpreted as indicators of data quality. And both surveys—convenience (study 1) and probability (study 2)—show essentially the same thing—namely, most Americans believe data fraud should be criminalized, even though they disagree on how it should be punished.
Note too that the second survey was conducted by GfK Knowledge Networks. According to Allcott (2011), “Knowledge Networks … maintains perhaps the highest-quality publically available survey platform.” Chang and Krosnick (2009: 641) show that Knowledge Networks surveys manifest “the optimal combination of sample composition accuracy and self-report accuracy.” Specifically, Knowledge Networks surveys yield higher quality data—with less random measurement error, satisficing, and social desirability bias—than random telephone surveys, without sacrificing representativeness.
Not least, we made every effort in designing the survey questions to avoid leading respondents toward any response.
Nobody is saying that the findings alone should determine policy, but they shouldn’t be dismissed either.
References
Allcott, H. (2011). Consumers’ perceptions and misperceptions of energy costs. American Economic Review, 101, 98-104.
Chang, L., and Krosnick, J. A. (2009). National surveys via RDD telephone interviewing versus the internet: Comparing sample representativeness and response quality. Public Opinion Quarterly, 72, 641-78.
Graduate students who turn in their PI deserve special accolades. There have been cases of students several years into research projects who have had to start their PhDs all over again after blowing the whistle on the PI. Making the right ethical choice in a situation like that must be tremendously difficult.
Two things: 1) Is there a difference in how basic research and clinical research fraudsters would be judged? Arguably, the latter is potentially worse for patients than the former, but they are fundamentally the same group of criminals. 2) Here’s a thought: before we, the American people, commit more tax dollars to prisons to hold (and, as usual, not rehabilitate) fraudsters, how about we spend some on research (i.e., NIH funding) to reduce competition and remove a large chunk of the motivation to cheat? While we’re at it, we can proactively constrict the supply of new PhDs and the flow of outside researchers to make it compatible with funding rates and market demands, and thereby reduce competition further and spend tax money more efficiently. I don’t think “let’s throw ’em all in dat dere jail!” is really an appropriate response for us to have to problems that we have created and could fix relatively easily. It would only begin the feedback cycle of incarceration and recidivism that we already see so clearly with other varieties of felon.
I’m happy to see public opinion sides with mine about what penalties should be imposed on fraudsters and data manipulators. A couple of years ago I made my views known in the BMJ with reference to an earlier NEJM blog. See: Should research fraud be001.pdf
Public opinion PLUS professional consensus and support for stiff penalties are needed, IMHO. The integrity of science is at stake. The commercially-driven opposition will defend and justify its practices, willy nilly.