How often do economists commit misconduct?

We haven’t covered that many retractions in economics, and a 2012 paper found very few such retractions. Now, a new study based on a survey of economists tries to get a handle on how often economists commit scientific misconduct.

Here’s the abstract of “Scientific misbehavior in economics,” which appeared in Research Policy:

This study reports the results of a survey of professional, mostly academic economists about their research norms and scientific misbehavior. Behavior such as data fabrication or plagiarism are (almost) unanimously rejected and admitted by less than 4% of participants. Research practices that are often considered “questionable,” e.g., strategic behavior while analyzing results or in the publication process, are rejected by at least 60%. Despite their low justifiability, these behaviors are widespread. Ninety-four percent report having engaged in at least one unaccepted research practice. Surveyed economists perceive strong pressure to publish. The level of justifiability assigned to different misdemeanors does not increase with the perception of pressure. However, perceived pressure is found to be positively related to the admission of being involved in several unaccepted research practices. Although the results cannot prove causality, they are consistent with the notion that the “publish or perish” culture motivates researchers to violate research norms.

Some examples: More than half of the economists who responded to the survey said they had “refrained from checking the contents of the works cited,” while about 20% admitted to salami slicing — i.e., having “maximized the number of publications by dividing the work to the smallest publishable unit, meaning several individual articles covering similar topics and differing from each other only slightly” — and about a third said they had cherry-picked, or “presented empirical findings selectively so that they confirm one’s argument.”

We asked Daniele Fanelli, who published a highly cited paper on misconduct in 2009, for his take:

The study is methodologically very thorough and accurate and certainly improves existing evidence available in economics. As far as general prevalence of misconduct and QRPs go, survey data are unlikely to surprise us any longer, at least when conducted in western countries. The results of this study fall mostly within the ranges suggested by previous studies.

Other results, however, are remarkably informative. For example, the fact that respondents find “copying work from others without citing” less justifiable than fabricating data or “using tricks” to increase t-values. Such a scale of values might tell us a lot about the priorities that the contemporary scientific system is instilling into researchers, in economics and possibly in other disciplines.

A side note: A line in the acknowledgements caught our eye:

The author is grateful to the editors and three anonymous referees for excellent comments and suggestions. The author is indebted to Lars P. Feld and Bruno S. Frey. Without their support, the project would have been impossible.

Frey, of course, was involved in a duplication — aka self-plagiarism — kerfuffle, for which he has apologized.

The author of the paper, Sarah Necker of the University of Freiburg, tells Retraction Watch:

The project is part of my dissertation; Bruno Frey provided support with regard to the development of the questionnaire and the distribution of the survey. He commented several times on the manuscript. The research project is unrelated to the self-plagiarism problems; the survey was conducted long before any criticism came up.

8 thoughts on “How often do economists commit misconduct?”

  1. I am sure that Max Keiser at RT.com would probably have something to say about this. Keiser is a firm critic of corruption in the economics sector, having once been on the inside at a trading firm on Wall Street. He would probably identify dozens of contradictions by top-level economists. For more: http://rt.com/shows/keiser-report/

  2. “the “publish or perish” culture motivates researchers to violate research norms”

    Or rather, given how widespread these practices are, “the “publish or perish” culture motivates researchers to generate *new* research norms”. It’s difficult to say that 94% of people are violating what is ‘normal’ – at that frequency it surely *is* normal!

    And that, really, is the problem.

  3. Salami slicing should certainly not be lumped in with cherry picking. The former is ugly, but it does not lead to bad data or theories in the field. The latter is a real problem as it can lead, and likely has led, to bad policy based on faulty data.
    I agree strongly with David: least publishable units (LPUs) are simply the new normal in academic publishing. The emergence of the h-index instead of raw publication counts as a metric has damped this a bit, but it has also led to new shenanigans in self-citation and in groups inappropriately cross-citing each other. Again, though, none of this is a problem outside the field except for behaviors that create false or skewed data.

    1. Scott, I respectfully disagree with the position that salami publication is more benign than cherry picking. It depends; salami publication comes in many forms. In many cases, different salami slices share some of the same data from, say, control groups, and if there is only ambiguous or nonexistent cross-referencing, these cases can distort the scientific record if readers believe that the data from each study are new. But even when discrete salami slices contain completely different data and are published covertly, without clear cross-referencing between the slices, the result can be highly problematic.

      I’ll give you a simple scenario from the social sciences. Suppose I give subjects a packet of questionnaires, including 4 personality inventories plus some attitude and behavioral measures. I then publish the correlations between a couple of the personality measures and the behavioral measures in one journal, and the correlations between the other two personality measures and the remaining behavior and attitude measures in another article. No single datum is shared between the two articles, but there is no cross-referencing between them either. Anyone who has done questionnaire research should be familiar with the many contextual effects that arise when questionnaires are given together as part of the same package. What are these effects? Very often subjects will adjust their responses to one questionnaire based on their responses to the preceding questionnaire (“if I answered one personality inventory as an extrovert, I had better answer this behavioral measure in a way that is consistent with the type of personality I just reported!”). Admittedly, this adjustment is typically unconscious and very subtle, but it can be significant enough to produce completely different study outcomes.

      Evidence suggests that these context effects extend to how the study is presented and even to the individual investigator administering it. That is, subjects will respond one way if the questionnaires are framed in a certain way (e.g., as a study of personality) and another way under a different framing (e.g., as a study of possible correlates of serial killers). They will also respond differently depending on the researcher administering the study. The bottom line is that not knowing the actual relationship between the studies can lead readers to erroneous interpretations of the results.

      For a summary of context effects in questionnaire research, please see the work of James Council.

      Reference

      Council, J. (1993). Context effects in personality research. Current Directions in Psychological Science, 2(2), 31–34.

  4. Referees often do not weigh negative results or failures favorably, but weigh positive results or successes highly favorably. Finding an alternative route to solve a problem may often be an addition to knowledge (knowledge regarding the route, not regarding the destination), but it is often discouraged by referees.
    Then cherry-picking emerges, increasing t-values through dubious practices emerges, and so on. Referees are often considered more knowledgeable than the author, but anyone who knows how referees are chosen also knows that publishing one or two papers in a journal, or being cited, makes an author worthy of being a referee, too. Unless these loopholes are plugged, misconduct will continue.

  5. Some saucy bits from the paper: “Having accepted or offered gifts in exchange for (co-)authorship, access to data, or promotion is admitted by 3%. Acceptance or offering of sex or money is reported by 1-2%.”

  6. Not checking the contents of work cited is lazy, to be sure. But of all the things in a paper that you could fail to check, it’s pretty much the least important, far less important than checking the correctness of your results, for example. Most obviously, it’s less important than failing (from laziness) to find and cite a relevant paper at all. The inclusion of this item under the category of misconduct makes sense if you are talking about undergraduate essays, where the content is presumed to be unoriginal, but not as regards academic research.
