Pressure to publish not to blame for misconduct, says new study

A new study suggests that much of what we think about misconduct — including the idea that it is linked to the unrelenting pressure on scientists to publish high-profile papers — is incorrect.

In a new paper out today in PLOS ONE [see update at end of post], Daniele Fanelli, Rodrigo Costas, and Vincent Larivière performed a retrospective analysis of retractions and corrections, looking at the influence of supposed risk factors, such as the “publish or perish” paradigm. The findings appeared to debunk the influence of that paradigm, among others:

The hypothesis that males might be prone to scientific misconduct was not supported, and the widespread belief that pressures to publish are a major driver of misconduct was largely contradicted: high-impact and productive researchers, and those working in countries in which pressures to publish are believed to be higher, are less-likely to produce retracted papers, and more likely to correct them. Efforts to reduce and prevent misconduct, therefore, might be most effective if focused on promoting research integrity policies, improving mentoring and training, and encouraging transparent communication amongst researchers.

Some factors were associated with a higher rate of misconduct, of course — a lack of research integrity policy, and cash rewards for individual publication performance, for instance. Scientists just starting their careers, and those in environments where “mutual criticism is hampered,” were also more likely to commit misconduct.

There are policy implications to the beliefs in what drives retractions, the authors add: Some institutions have been adjusting the way they evaluate scientists based on concerns that pressure to publish threatens integrity, and in the U.S. (and elsewhere), universities must implement training in research integrity for early-career scientists, out of the belief they are most prone to misconduct.

We were interested to note that the researchers included corrections in their analysis of scientific integrity — which, as they say, have been “surprisingly overlooked by scholars”:

Unlike retractions, corrections carry no stigma and do not affect the publication record, so they have no direct consequence on a scientist’s career. Unlike retractions, which are often accompanied by litigations and lengthy investigations, corrections are typically a friendly process, often solicited spontaneously by the authors of the erroneous paper.

Since scientists volunteer to correct their papers as a way of righting the record, corrections “may be considered manifestations of scientific integrity,” the authors explain:

It follows that any sociological or psychological factor that increases the risk of scientific misconduct, and therefore the likelihood of retractions, should have, at a minimum, a smaller (null) effect on corrections, and possibly even an opposite effect.

Fanelli, who has published a number of studies of retractions and scientific misconduct, and colleagues finish their paper with the “multiple theoretical and practical implications” of their findings:

In conclusion, our results suggest that policies to reduce pressures to publish might be, as currently conceived, ineffective, whereas establishing policies and structures to handle allegations of scientific misconduct, promoting transparency and mutual criticism between colleagues, and bolstering training and mentoring of young researchers might best protect the integrity of future science.

Although the findings appear to contradict an earlier paper suggesting that men were more likely to commit misconduct, Fanelli and his team noted some hints at a link between gender, misconduct, and career status.

We asked Ferric Fang, some of whose work on retractions is contradicted by the new study, for comments on the paper; he reviewed it with Arturo Casadevall, with whom he’s written many of those papers (such as the one suggesting men commit more misconduct). Fang, who is a member of the board of directors of The Center For Scientific Integrity, Retraction Watch’s parent organization, told us that he and Casadevall had “serious reservations about this paper’s methodology and conclusions”:

The authors express the surprising and somewhat contrarian viewpoints that publication pressure is not a major driver of research misconduct and that males are not more prone to misconduct.  However for a number of reasons we find the authors’ arguments to be unpersuasive.

For one, Fang notes, retracted papers are not homogeneous, and treating them that way could lead to a type II error — failing to detect a true existing effect. Similarly, using authors’ first names to study the relationship between gender and misconduct, without first determining whether misconduct had occurred, or by whom, “favors the erroneous acceptance of a null hypothesis,” wrote Fang.

He also noted that corrections are not “typically a friendly process,” as the authors of the study suggest:

As a journal editor-in-chief, I have seen many corrections, and I can assure you that they arise for a host of reasons.  Some are certainly innocent but others are suspicious yet may lack sufficient evidence of fraudulent intent to warrant a retraction.  In addition Arturo and I have encountered a number of corrections in the literature that appear to be fraud masquerading as honest error.

On the most surprising finding of the paper, Fang wrote:

The authors are dismissive of a large body of evidence in which scientists have admitted to fraud or other questionable research practices and have explicitly linked their actions to career pressures.  A recent study not cited by the authors found a strong correlation between publication pressure and scientific misconduct.  There are also well documented case studies in which individuals found to have committed research misconduct have directly ascribed their actions to pressures to obtain publications, jobs, or funding.

Fang added that the paper completely misrepresented his, Casadevall’s and colleague Grant Steen’s views on a key question: Since retractions are now more common, is scientific misconduct on the rise, as well? Specifically, the paper cites Fang, Casadevall, and Steen’s work when making this assertion:

Analyses of retraction notices recorded in Medline have led researchers to suggest that scientific misconduct is growing and is particularly common in high-impact journals…

Fang took issue with that characterization of their work:

Steen wrote that ‘one possible interpretation of these results is that the incidence of research fraud truly is increasing… another possibility is that the incidence of fraud has not increased appreciably but journals are making a far more aggressive effort to self-police’.  Arturo and I were similarly cautious, stating that ‘overall manuscript retraction appears to be occurring more frequently, although it is uncertain whether this is a result of increasing misconduct or simply increasing detection due to enhanced vigilance’.  Scientific misconduct may in fact be growing, but we have not made that claim and are not presently aware of definitive evidence for or against this possibility.

In case you crave more science publishing news, a co-author of the paper, Vincent Larivière, has another article in PLOS ONE today, showing that “five publishing companies control more than half of academic publishing.”

Update, 8:30 p.m. Eastern, 6/10/15: There was a miscommunication about the embargo for this paper. We had been told it was today at 2 p.m. Eastern, but it has been changed to 12:00 a.m. Eastern on 6/17/15 because of the need to add a correction to the manuscript. The link above to the paper will become active at that time.


24 thoughts on “Pressure to publish not to blame for misconduct, says new study”

  1. I never believed misconduct was “linked to the unrelenting pressure on scientists to publish high-profile papers”. Some people I know, incl. myself, were pressured a lot to do certain things as scientists, to no effect. Others I knew needed no pressuring at all to bend data. It is a question of personal ethics and integrity.
    My view is, misconduct is linked to a system which rewards a certain degree of data manipulation, which in turn allows to produce flashy papers which in turn pulls big funding. Everyone involved is happy, but science suffers. The honest scientists in this race are too slow and their findings are less “sexy”, so eventually they lose out on papers, funding and jobs.

    1. This is a fair response. Yes, I do agree that pressure to publish may not force people into misconduct. I have seen people who publish in high-profile journals get rewarded. Rewards make them do things!

  2. “Scientists just starting their careers, and those in environments where “mutual criticism is hampered,” — could there be a more precise statement on this? Boy, had I seen such papers while still a student: would have avoided much nuisance.

    “My view is, misconduct is linked to a system which rewards a certain degree of data manipulation, which in turn allows to produce flashy papers which in turn pulls big funding.” (from Leonid, above) — also very much true!.. Up to now I have met researchers who manipulate data completely, those who try to make it more sexy by various means, and those who expose ambiguities as they are. Unfortunately the latter tend to suffer the most.

    Mendel is a popular example of how manipulated data can be mixed with important discoveries; yet he never made it to high-profile journals. God knows what pressures were upon him at that time. Had he exposed 100% real data, would his paper have been accepted and/or taken so seriously nowadays? This is one very real example few today like to discuss.

  3. The DOI link doesn’t work, and I can’t find the original PLOS ONE article in my RSS feed or on the website.

  4. This is all very interesting. I haven’t read the new paper, so my comments are based only on this post.

    1. To me, the pressure-to-publish explanation for research misconduct explains nothing. All productive scientists are under pressure, and many of them do not commit misconduct. If pressure is important, one might want to know what personal/psychological characteristics are aligned with a tendency to misconduct – but I doubt that any strong correlation will be found. I think a more productive focus would be on the situation/environment, which is likely to be easier to study because situations (such as lab structure, policies, etc.) are more homogeneous and change less easily or quickly than personal/psychological traits. People have mood swings; situations don’t.

    2. Ferric Fang offers as support for the pressure-to-publish explanation the fact that many people found to have committed research misconduct blamed their behavior on pressure. As we know, “anecdote” is not the singular of “data.” And I suggest that a retrospective study with an n of 1 in which the researcher is also the subject is unlikely to be unbiased or accurate.


  5. I suspect that Prof. Larivière’s paper’s* statistics will be quite different now in 2015, especially after the Springer and Macmillan merger, finalized on May 6, 2015, and now called Springer Nature:

    The existence of the publishing oligarchy is not only frightening, it is a wake-up call. It indicates that we have the responsibility of holding these publishers to the highest standards possible, and of examining their published literature meticulously in post-publication peer review. I think, using an African savannah analogy, we could easily refer to these oligarchs as the “Big 5”. The question is, who is the elephant in the room?

    As for Daniele Fanelli’s study, the link above led to a “DOI could not be found” statement, while a search at PLOS ONE revealed no June 10 paper by Fanelli, only some of his older papers from April 2015…

    * Larivière V, Haustein S, Mongeon P (2015) The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE 10(6): e0127502. doi:10.1371/journal.pone.0127502

  6. Yes, the incidence of scientific fraud is increasing, but the number of retractions is a pale representation of the very numerous posts on Retraction Watch and PubPeer where unmistakable data falsification gets highlighted. Why is that? Many institutions do not have an anti-fraud panel dedicated to the investigation of research fraud. As with Institutional Animal Care and Use Committees, corresponding authors should always state the name and composition (including names and emails) of their institutional anti-fraud committee. Institutions without an anti-fraud committee should be banned from grants and from publishing. An anti-fraud policy, including an anti-fraud committee, is the best deterrent, and it signals that the organisation is safe, trustworthy, and committed to discovering and deterring fraud. Investigating all instances of suspected fraud and, in positive cases, enforcing disciplinary punishment, informing grant agencies and publishers, and eventually initiating the required legal actions should therefore be mandatory. Retractions are a very good indication that the institution is trustworthy.

  7. We can immediately dismiss their conclusion that career pressures don’t lead to ethics infractions by simply citing one of the findings mentioned here: early career scientists were more likely to commit misconduct. I would argue that established high-profile PIs have far less pressure to produce than young people. I don’t find the results surprising, but I haven’t read the manuscript in detail.

    1. Is misconduct occurring more often by early career scientists really a higher RATE of misconduct by early career scientists, or is that an erroneous use of language? Do they know the actual number of early career scientists, so as to calculate such a rate?

  8. Does anyone have a link to the paper or a pdf of it? The embedded link is not working. Thanks for any help.

  9. Mitch, it may be that early career scientists perceive themselves as (or are) more vulnerable and are therefore more likely to succumb to that same pressure or, as psychologists like to say, to ‘the power of the situation’.

    1. I appreciate the distinction, but I think it’s almost hair splitting. The obvious question is “why does that perception exist?”

      1. Mitch, my thought, which was in complete agreement with your position, was this: Consider all members of lab X who have just learned that a competing lab is making rapid progress on the same problem that lab X has been working on for a while (i.e., pressure to get results). By virtue of their junior status, the specific tasks that they do, and whatever other tenuous personal/situational factors operating on them at the time (e.g., their relative position in the lab pecking order, prior success in the lab), these are the individuals who will perceive the greatest amount of pressure to get results. Add to that mix a weak personal commitment to RCR and chances are that you will start to see the scientific misdemeanors. Should pressure be perceived to grow stronger, there will be a greater likelihood that more serious misconduct will occur.

        That’s how I see these types of situations.

  10. BTW, the link to the paper does not work for me and I cannot find it at PLOS ONE when I search for Fanelli

  11. Re: Fang’s claim (“The authors are dismissive of a large body of evidence in which scientists have admitted to fraud or other questionable research practices and have explicitly linked their actions to career pressures”, etc).
    What else should they say when caught red-handed, to try to save themselves? Can’t always blame it on an earthquake.

  12. Classic correlation/causation flaw. What they find is a lack of association (correlation) between pressure to publish and retractions. That does not necessarily mean that a non-pressuring environment is the CAUSE of the low number of retractions.

    Of course, confounding the analysis is another problem – the things that might endow one with a non-pressured publishing environment (e.g., being a well funded and connected BSD), might co-segregate with factors that allow one to correct papers instead of retract (e.g., being a well funded and connected BSD).

    As others have mentioned, the DOI link is broken, and a full search of the PLoS site (all journals) reveals nothing. There’s also nothing on PubMed either.

    1. Interesting observation. We all are aware, no doubt, that scientists are only able to look where they happen to be shining their flashlight.

  13. I don’t see a historical perspective here, though some of the comments hint at it. Three or four generations ago, most research was done in academia, which, while recognizing its many shortcomings, did imbue young scientists with a loyalty to codes of behavior. (The bigger problem was the advisor wanting to sign onto the young scientist’s more significant paper as senior author!). With research expanding into industry and independent labs, a new environment exists which lacks the aura of commitment to an ideal. Research has become much like the rest of society, alas.
    P.S. Please spell homoGENEous correctly. thanks. Jeffrey

  14. As a psychiatrist who has studied individual ORI Reports in an effort to answer the question of motivation, I cannot imagine how it can be answered utilizing the data available from a review of retracted papers. We’ll find out on 17 June when the paper becomes available.

    Don Kornfeld

  15. I’m really looking forward to reading this paper, as the conclusion is quite inconsistent with my own experience in science. I don’t know how the authors characterized the “pressure to publish”, but in my opinion the pressure does not apply to all kinds of papers equally but selectively to the positive/catchy/high-impact ones. There’s for sure a “selective pressure to publish”, and this will likely favour scientific misconduct. Were scientists free to publish positive or negative findings with comparable rewards, we would see less misconduct, in my opinion. I’m also not sure if significant conclusions can be drawn analysing retractions/corrections only.

  16. Dear RW readers, the paper is now officially published. I apologize for the confusion about embargo dates, and I hope to be reading and discussing more of your thoughts on the study and on the issue in general.
    Daniele Fanelli

    1. Not sure how PLOS metrics works, but I just downloaded the paper, and the stats read:
      0 saves
      0 citations
      0 views (should at least be 1)
      So, I assume that the counters are not activated automatically, but are accumulated and updated every so often?

  17. Responsibility for fraud must be fixed on the researcher who did the experimental work. The involvement of institute directors with a vested interest in blaming and fixing responsibility on innocent authors must be strongly investigated, and action taken against such heads!
