New York Times pushes for more focus, funding on research misconduct

The New York Times has an editorial today with which we wholeheartedly agree: The newspaper is calling on scientists — and even the government — to pay more attention to misconduct in research. (It also doesn’t hurt that the paper mentions us.)

The proximate cause of the editorial, titled “Scientists Who Cheat,” is the retraction by Science of the gay marriage study by Michael LaCour, which we — and the Times, among others — have covered extensively.

As the editorial rightly notes, the pressures to publish are pushing some researchers to make up data. (Monday’s paper also carries a page 1 article about the dangers of splashy science that’s worth reading.)

In theory, a journal’s peer reviewers are supposed to detect errors, but they often do not have the critical data needed to check the findings, nor the time to do so, particularly since they are seldom paid. Sometimes the cases only come to light when a whistle-blower, perhaps a student or researcher in the lab where the cheating occurs, points the finger. The scientific community clearly needs to build a better safety net.

It can start by ensuring that scientists, especially peer reviewers, are allowed to see the underlying data of a paper, which researchers are typically reluctant to share. The federal Office of Research Integrity should be given ample funds and sufficient independence to investigate all major cases that come to its attention. Another answer to the problem of fraudulent research, though, might be more research. The federal government could sponsor studies to determine how much cheating goes on, how much harm it causes and how best to combat it.

We couldn’t agree more with this sentiment. These are points that researchers and editors frequently make in publications that are targeted to their own communities, but it’s refreshing to see a paper that focuses on global issues prioritize improving the process of science.

Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post.


22 thoughts on “New York Times pushes for more focus, funding on research misconduct”

  1. I’m amused by the line “especially since they are seldom paid”. So, the question is, where are these paying review gigs? I get paid to review for NIH, although the amount is ridiculous considering the workload. I have never been paid for a review for a journal. Yet it can take 5-6 hours to review an article.

  2. I’m not clear what the fuss is about. The great majority of articles are never cited so, right or wrong, presumably have no influence whatever. If, on the other hand, an article makes riveting claims, it gets everyone’s attention and its claims are carefully scrutinized as the most recent case demonstrates. I think that it needs to be shown that scientific fraud causes harm, not in isolated cases, as in the vaccination case, but across the board. Otherwise, the remedy will be worse than the disease, and everyone’s time will be wasted to no purpose. Peer review cannot detect fraud, a purpose for which it was not designed.

    1. What is your point? That the majority of scientific publications are irrelevant, hence we should not enforce basic standards of accountability? Sorry, but lies told do matter. They taint the scientific record and waste the time of those who do not fabricate results. For those who fabricate, their egos outweigh their talents, and they need to be weeded out of the system as quickly as possible.

      1. Moreover, each fabricated publication represents the training of subsequent generations of NIH-funded, award-winning frauds by a generation of hacks who think they know how to game the system… until the system catches up with them.

        Frauds don’t come up with brilliant ideas for skirting detection by themselves. This is folklore passed down through generations by oral tradition…

        1. Do you really think so? I know of more than one case where the student turned in the professor, or vice versa. I would be shocked if teaching students to lie was prevalent, at least in my field.

  3. This problem filters down from the scientific journals to the popular press, including popular science magazines and newspapers. When I was teaching (quite some time ago) I was always bringing in news items by “science editors” that needed correction. Most often they were about demography, radiation, sicklaemia…. But then, I am an anthropologist. If I had been a chemist or astronomer I would surely have found others. Thanks for this great site!

  4. Alan, “Peer review cannot detect fraud, a purpose for which it was not designed.” Therein lies the problem. If smart fraudsters know that peer review can be cheated in A-Z ways, then they will work the system to gain advantage through cheating. Extremely smart frauds will work the system well enough to gain great funding, too. This is why the impact factor needs to be cut: to a certain extent it serves as the pseudo-academic carrot driving some of the fraud, pushing academics with a lack of scruples to commit it. You need only browse PubPeer daily to observe that this is likely the case, in which “unprepared” peer review, or a peer review system that was, paraphrasing your words, not designed to detect fraud, has become the circus ring for academic and scientific fraudsters. And the only ones dancing in the ring are the editors and the other scientists who are doing their best to do things by the book. Well, until the fraudsters get caught, that is… Peer review has got to change.

  5. Research fraud has been a special interest of mine for the last 27 years. I’m as concerned about this issue as the next guy, in my roles as researcher, author, reviewer and editor. However, to call for regulators and governments to get involved is a seriously bad idea. Progress in science and technology has been slowing since the 1970s, and we are in the process of throttling it further due to over-regulation and cumbersome, incompetent and excessively meddlesome research governance structures.

    Research fraud has to be dealt with by the scientific community, and that is happening more effectively now than at any other time in the history of human endeavour. Your web site proves the point.

    Shit happens. To stop research and publication fraud entirely one would have to shut down research.

    1. Wait, I’ve seen this movie, with a big computer and maybe robots. “You have asked us to prevent research fraud. The best solution is to prevent research. We will save you from yourselves.”

    2. However, one of the big areas of growth in research is the development of institutional officials who monitor regulatory compliance. Scientific research misconduct is one area of compliance.

      In other words, it’s too late to put that genie back in its bottle. This has been building for 20 years.

      1. So, how well is this system working if fraud isn’t caught until AFTER publication? What’s happening at the institutional level to train, monitor, prevent, catch, and sanction… way before the research paper needs retraction?

  6. 1) We are clearly past the point where there is any doubt that scientific data fraud is rampant; way more so than many have been willing to admit before now.
    2) Peer review is pretty much FUBAR.
    3) Beyond individual choices, systemic blame is firmly to be placed on the pressure for academics to perform/compete, dictated by a broken higher education economy.
    4) The greed of multi $bn publishing corporations is also to blame.
    5) Lax oversight and enforcement by ORI, COPE and others completes the picture.

    So my question is quite simple… Given the above relatively established facts (feel free to debate the minutiae), is it really any surprise that this shitstorm we’ve hauled our collective selves into is finally “leaking” into the mainstream media?

    As a community, we can address items 2-4 above, or sit by and watch the public trust in our field evaporate, along with support for our government funded jobs.

  7. All this talk about the failure of peer reviewers, when the focus should be on the editors of glam science mags. We have seen recent examples (i.e. STAP) where peer reviewers highlighted flaws in studies, only to be overruled by jumped-up editors looking for the next big headline. Stop blaming reviewers.

  8. “Often a young researcher, driven by the academic imperative to “publish or perish,” fudges the data. In many cases, a senior scientist who is supposed to be monitoring the research pays little attention, content to be listed as one of the authors.”

    That’s a very simplistic view, to say the least. No young researcher is actually “driven” to fudge the data; this is a conscious personal choice. But yes, it is the senior scientist, the PI, who will punish honest employees for being “unproductive” and reward the ones with fewer moral inhibitions for delivering the “correct” results. Why? Because this is how the PI has likely made it to being a PI him/herself. So the next generation of data fudgers is promoted, and the honest ones are weeded out.

    1. Agree. Selection for cutting corners, for learning “how it is supposed to be” rather than relying on clear data. People are either too shy or brow-beaten into keeping quiet rather than saying that something isn’t right.

  9. It’s interesting to me that this discussion is sparked by a case where the system worked decently. Shitty paper gets published, someone catches it, shitty paper gets retracted. Seems like a pretty good outcome. Sure, it took a while, but not longer than it takes me to get an article accepted, so…
    I do, however, think there’s a place for some regulation and government involvement, but I don’t think it should be in the form of researching why people cheat. Especially not after this case, which to me screams irrationality. It should be in the form of funding for reproducing research. Maybe give it out to labs, maybe set up an “agency” that does it. If you know nobody is going to read or check your research other than some busy reviewer, then you’ll think that once you’ve tricked them you’re done. And in most cases, you are. It’s out, it’s “true”.
    More eyes are better than fewer.

  10. A lot of the discussion around scientific misconduct seems to center on punishing wrongdoers. Personally, I think we’d make a lot more progress on the problem by eliminating incentives to cheat. That would include dialing down the pressure to publish in high-profile journals, eliminating impact factors, finding a way to fund more investigators (e.g., cut down on multimillion-dollar grants, cut back on indirects), and generally creating an environment that doesn’t encourage manipulation.

    But that would require an entirely new mindset at our universities and federal agencies, and I doubt it will happen anytime soon. This tells me that our decision makers aren’t serious about fixing the problem.

    I swear, research is turning into the X Games. The next paper has to be bigger, Flashier, and MORE INCREDIBLE than what came before it. Some people have always cheated and they always will, but the atmosphere today seems to be particularly conducive to it.

  11. I’m always a bit confused by the “pressure to publish” leads to faking data thing, which is usually stated in the context of “if only us scientists had no pressure to publish, then none of us would cheat! don’t pressure us, just give us money!!! we’ll be productive because we’re honest and just want to do this for the knowledge.”

    Has anyone heard of Lance Armstrong? Bernie Madoff? UNC football trying to do school? THE FREAKING PATRIOTS!??! Bob Dylan? (yes, a bunch of his songs are ripoffs) Why do scientists act as if they are a special case?

    Can’t we just conclude that where there is capitalism there is cheating and where there is communism there is slacking? As long as you have to publish to prove your worth, there will be cheating and if people don’t have to publish to keep jobs and get promoted in science there will be even more of a demand for the jobs and even more laziness.

  12. Any researcher who cannot or will not share his data with anyone who asks should not be published in a peer-reviewed journal. He is no longer practicing science. Advancement of science is based on the ability to replicate another’s results, and in science there is no place for ‘trust me, I wouldn’t lie to you.’
