Yesterday, Cornell University told a group of researchers that it would not release a report of its investigation into alleged misconduct by Brian Wansink, the food marketing researcher who recently resigned his post there. The researchers had petitioned the university to release the report. As BuzzFeed reports, the university is now conducting a “Phase II” investigation into Wansink’s work. (It’s unclear what a “Phase II” investigation entails; we’ve asked the university to clarify.)
Unfortunately, Cornell’s lack of transparency about the case puts it in the majority. Here’s a piece by our two co-founders, Ivan Oransky and Adam Marcus, about why this veil of secrecy needs to be lifted.
For more than a decade, Cornell University’s Brian Wansink was a king in the world of nutrition. He published his findings — on everything from why small plates make us eat less to the behavior of obese people at all-you-can-eat Chinese buffets — in top-tier journals and garnered media coverage in prestigious newspapers. His work even formed the basis of U.S. dietary guidelines.
But Wansink’s fortune cookie has crumbled. In September, he resigned in disgrace from Cornell. He has now lost 15 papers to retraction — one, twice — and the university found him guilty of committing research misconduct.
Cornell has acknowledged that its one-time superstar was producing unreliable results, but the institution hasn’t said how, exactly, Wansink cheated. The lack of clarity is far from unusual. Universities are woefully opaque when it comes to the misdeeds of their faculty members. They guard their investigations with a ferocity worthy of Cerberus, hiding critical information from the public under the pretense of legal constraints.
Sometimes the stakes are low, as in the recent case of H. Gilbert Welch, an expert in health policy who resigned his faculty job at Dartmouth College after the school concluded that he’d plagiarized data from a junior colleague. No one has questioned the veracity of that work, and Welch insists it was an authorship dispute, anyway.
Other times, however, the stakes couldn’t be higher. A few years ago, a child participating in a clinical trial run by Mani Pavuluri, a psychiatrist at the University of Illinois at Chicago, was hospitalized after displaying a worrisome increase in aggression. Out of concern, UIC shut down three of Pavuluri’s studies and launched an investigation into her work. One year later, the school sent a letter to hundreds of children and families who had participated in her research, explaining that the studies might have put participants at greater risk than they had realized. Her research has now been shut down indefinitely, and as ProPublica reported, UIC had to return more than $3 million in grant funding — an extremely rare event.
Although Pavuluri can no longer conduct research at UIC, her medical license is still active in the state of Illinois. That means she’s free to treat children. And yet we have no idea what, exactly, the investigation into her work showed, nor whether UIC has submitted any information about that investigation to the Illinois medical board. That’s because the university has refused to make the investigation public.
UIC has said in recent years that it can’t release a report on the inquiry because the case is ongoing. It gets worse, though: Even if the investigation were complete, the university says it wouldn’t release the report because of a law that exempts reports on a “health care practitioner’s professional competence” from public disclosure.
That’s right: A doctor whose studies put children at more risk than they were led to believe is more deserving of protection than vulnerable children she might treat.
Although two Federal agencies — the U.S. Office of Research Integrity and the National Science Foundation’s Office of the Inspector General — do have oversight of university investigations into scientific misconduct, the public has no way of knowing whether such investigations were carried out properly. As we and a colleague wrote in the Journal of the American Medical Association earlier this year, when universities investigate their own — which, by Federal statute, they are obligated to do — they have serious conflicts of interest, and the resulting investigations are often deeply flawed.
Sometimes, we do obtain the details of what went wrong because universities release reports of their investigations — such as the 75-page document recently released by The Ohio State University, which described a wide-ranging case of misconduct by a cancer researcher and also prompted the university to temporarily shutter a clinical trial. That’s unusual. Most often, we obtain these reports by filing a public records request. In the Pavuluri case, however, our requests have been denied by UIC, a decision upheld by the Illinois Office of the Attorney General.
And even if all of our public records requests were granted, private universities — where yes, misconduct happens, too — would be immune from such entreaties. Ways around that barrier exist, but it’s a war of attrition, with wealthy, powerful institutions well aware that a release delayed is a release denied, to paraphrase a legal maxim. Attention to a report about a ten-year-old case is likely to be scant, the players scattered to the winds and university donors shrugging their shoulders.
As Mark Peplow recently wrote in Chemical & Engineering News, “Many argue that a lack of transparency risks undermining public trust in research and may also hamper science itself.” Failing to release such reports allows bad actors to slip through the cracks, and even earn positions elsewhere.
It’s time for a new rule: If research involves human subjects, or is Federally funded, universities should release the reports of their investigations. Redact the names of whistleblowers, of patients, and of anyone else who might be vulnerable. Redact the names of investigative committee members, if need be — although those accused know who they are, anyway. But release the report.
Tax dollars, and the public health, are at stake.
You wrote: “If research … is Federally funded, universities should release the reports of their investigations.”
However, the Cornell response (https://www.documentcloud.org/documents/5028990-BrownandFellowSignatories-11-05-18.html) to our open letter seems to suggest that at least part of the reason why Cornell won’t — indeed, according to them, can’t — release the text of the inquiry is precisely because of confidentiality standards associated with Federal agencies. If that’s the case, I have some sympathy with Cornell’s position. Maybe the Federal regulations need to be changed, but Provost Kotlikoff’s letter suggests that, in the meantime, the university’s hands may be tied.
We’ve covered this reading of Federal statutes — we would argue intentional misreading by universities — before:
Are all of these universities violating Federal statutes? If Cornell’s argument is that they can’t release the report until it’s “final,” then we await the release of that final report. But in our experience, many universities leave investigations “unfinished” forever so they don’t have to release their reports.
Interesting. I would expect a fairly conservative reading of “people with a need to know”. Anyone know a lawyer with some free time? :-))
The “need to know” explicitly relates to assuring a “thorough, competent, objective and fair research misconduct proceeding.” It has nothing to do with transparency, and doesn’t even address notification to journals (who unarguably have more of a need to know than RW does). This subpart indeed allows institutions to determine “need to know,” but for the purpose of protecting confidentiality during the proceeding itself. To think that it should in any way cover disclosure of the final investigation report to bloggers is a severe misreading.
In many instances, I feel that interested observers don’t fully understand ORI’s process, its typical outcomes, and what those mean for institutions. Institutions feel severely limited in the information they can release while a case is in ORI’s hands. Attorneys often argue (wrongly) on behalf of their clients that institutions shouldn’t even initiate retractions, corrections, or employment actions until after ORI has made a ruling. This misunderstands ORI’s oversight function: relatively few cases are formally determined to require HHS administrative actions, and in the large majority of cases the institution eventually receives a notification from ORI that amounts to “this all looks okay, but we’re not going to do anything.”

The period between an institution submitting its report and case materials to ORI and hearing back on a determination is excessively long, probably due to ORI understaffing and a large case load. I feel that many of the complaints regarding transparency boil down to an institution’s hands being tied during the long ORI slog, and its fear of being sued. During this period, it may appear that the investigation is “unfinished,” and those assuming bad faith might take this as a deliberate attempt to avoid transparency. Given the timeline requirements in the same statute, that assumption is a little ridiculous: ORI must be notified of any investigation when it is initiated, and it follows up diligently on progress and requires institutions to justify requests for deadline extensions. It’s simply not possible for an institution to leave an investigation “unfinished” forever (at least when the research in question is under federal purview).
If ORI does post findings, then the institution has some cover. Perhaps more reports would be made public if there were some assurance and precedent for protecting institutions that choose to release them in the absence of a formal ORI finding.
Well, we agree on one thing: The way many universities interpret Federal statutes has nothing to do with transparency.
This and other comments you’ve left as “Dr. Jondoe” suggest that we are coming at this from very different perspectives, which is fine. But do keep in mind that it’s not about whether universities should release their reports to “bloggers.” It’s about whether they should be transparent with the public. If you don’t think the public deserves that kind of transparency, that’s also fine, but we’ll have to agree to disagree quite vehemently.
We would be much more sympathetic to “there’s a process, let it work” arguments if transparency seemed to inform, let alone guide, how universities handled these cases. But that’s quite rare. Instead, we see every possible argument against the release of documents. It’s difficult to believe that public universities, for example, really want to be transparent when they argue for exemptions to public records laws so they can avoid releasing reports.
It’s difficult to tell whether you agree that there is a large number of reports that ORI considers sound but declines to pursue, and that should be released. Let’s start there. What exactly would “some assurance and precedent” look like, and why is the lack of such assurance and precedent keeping universities from doing the right thing?
I am concerned that the Dartmouth situation involving Gil Welch is described as “low stakes”; the potential loss to the country and to the health research community of his voice and continued contributions is immense. And I don’t believe that the description of his alleged crime as “plagiarism of data” is accurate. According to his detailed online accounting, he was accused of “theft of an idea” but was never told what this idea was, although it appears to be a concept that has been standard in screening for at least 30 years. He was cleared after investigations by both NEJM and ORI; this is not mentioned. It was they, not Welch, who said this was an authorship dispute, not plagiarism, and ORI is rather experienced at recognizing plagiarism.
While no retractions were involved, this was anything but low stakes. In keeping with the theme of this column, the failure of Dartmouth to make the evidence and justification for its actions clear, much less public, is a scandal in itself. Retraction Watch should check its statements against the facts of this case and amend accordingly.
Thanks for the comment. We’re always happy to check our facts, but this comment seems to rely heavily on Welch’s version of events, which is incomplete.
First, our description of “low stakes” was not about Welch’s contributions, past or future, to health care research. It was about the fact that no one has challenged the results of this paper, nor was any study participant harmed in the research. If the scientific community reviews this episode and concludes that Welch should be given a new position — note that he was not fired from Dartmouth — and platform, there is nothing stopping them.
Do you have evidence for the statement that the ORI cleared Welch? By policy, the ORI does not “clear” anyone. They may decide not to pursue a case, but that is not the same as clearing a researcher. Welch declined to make the ORI letter available when we asked to see it, and did not include it in the detailed accounting to which you refer. From our original story on this:
What we do have is this:
Welch has not disputed any of those facts. And Dartmouth did make a finding of plagiarism.
Finally, we of course agree that Dartmouth should release its report. We have asked them to do that, and that is why they appear in this post alongside other institutions that failed to do so.
I think there needs to be a new set of rules for scientists. The committees responsible for guarding these rules need to be independent: no more university committees, because they are too closely involved (conflicts of interest are evident). Scrutiny should be transparent, with low or no penalties for small “faults.” The process should be close to a scientific debate, in which, for example, overly tentative conclusions (based on, say, too-small datasets) can be corrected. The goal is better papers. Such a process can fill the gap when peer review is poor or absent. It does not limit scientific freedom, but it does limit speculation and sloppy research.

Scientific misconduct as we now see it is just the tip of an iceberg; most of the problems lie underneath and never surface. The scientific community should also think about the way results of studies (and underlying data) are shared. Are there other ways? The millions of papers published every year (and the time and money spent on them) do not seem to be a very efficient way to move science forward.
I could imagine a private non-profit certifying that institutions are meeting high standards of research transparency, much the same way that AAALAC accredits animal research facilities by auditing them periodically.