
Retraction Watch

Tracking retractions as a window into the scientific process

How often do economists commit misconduct?

with 9 comments

We haven’t covered many retractions in economics, and a 2012 paper found very few such retractions. Now, a new study based on a survey of economists tries to get a handle on how often economists commit scientific misconduct.

Here’s the abstract of “Scientific misbehavior in economics,” which appeared in Research Policy:

This study reports the results of a survey of professional, mostly academic economists about their research norms and scientific misbehavior. Behavior such as data fabrication or plagiarism are (almost) unanimously rejected and admitted by less than 4% of participants. Research practices that are often considered “questionable,” e.g., strategic behavior while analyzing results or in the publication process, are rejected by at least 60%. Despite their low justifiability, these behaviors are widespread. Ninety-four percent report having engaged in at least one unaccepted research practice. Surveyed economists perceive strong pressure to publish. The level of justifiability assigned to different misdemeanors does not increase with the perception of pressure. However, perceived pressure is found to be positively related to the admission of being involved in several unaccepted research practices. Although the results cannot prove causality, they are consistent with the notion that the “publish or perish” culture motivates researchers to violate research norms.

Some examples: More than half of the economists who responded to the survey said they had “refrained from checking the contents of the works cited,” while about 20% admitted to salami slicing, i.e., “Maximized the number of publications by dividing the work to the smallest publishable unit, meaning several individual articles covering similar topics and differing from each other only slightly,” and about a third said they had cherry-picked, or “presented empirical findings selectively so that they confirm one’s argument.”
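A quick simulation (a hypothetical sketch, not from the paper) shows why cherry-picking is corrosive rather than merely lazy: if a researcher measures many noise-only outcomes and reports whichever one clears the significance threshold, “significant” findings turn up far more often than the nominal 5% rate.

```python
import random
import statistics

# Hypothetical illustration: each "study" measures 20 pure-noise
# outcomes and is reported as significant if ANY outcome clears
# the threshold -- the cherry-picker's rule.

random.seed(1)

def t_statistic(sample, mu=0.0):
    """One-sample t statistic against a true mean of mu."""
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    return (mean - mu) / (sd / n ** 0.5)

trials = 1000          # independent "studies", each on pure noise
tests_per_trial = 20   # outcomes measured per study
critical = 2.09        # approx. two-sided 5% cutoff for n=20 (df=19)

false_positives = 0
for _ in range(trials):
    if any(abs(t_statistic([random.gauss(0, 1) for _ in range(20)])) > critical
           for _ in range(tests_per_trial)):
        false_positives += 1

# Honest single tests would flag ~5% of noise-only studies;
# best-of-20 reporting flags well over half of them.
print(f"Cherry-picked false-positive rate: {false_positives / trials:.0%}")
```

With 20 shots at the threshold, the expected rate is roughly 1 − 0.95²⁰ ≈ 64%, which is why selective reporting can fill a literature with findings that were never there.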

We asked Daniele Fanelli, who published a highly cited paper on misconduct in 2009, for his take:

The study is methodologically very thorough and accurate and certainly improves existing evidence available in economics. As far as general prevalence of misconduct and QRPs go, survey data are unlikely to surprise us any longer, at least when conducted in western countries. The results of this study fall mostly within the ranges suggested by previous studies.

Other results, however, are remarkably informative. For example, the fact that respondents find “copying work from others without citing” less justifiable than fabricating data or “using tricks” to increase t-values. Such a scale of values might tell us a lot about the priorities that the contemporary scientific system is instilling into researchers, in economics and possibly in other disciplines.

A side note: A line in the acknowledgements caught our eye:

The author is grateful to the editors and three anonymous referees for excellent comments and suggestions. The author is indebted to Lars P. Feld and Bruno S. Frey. Without their support, the project would have been impossible.

Frey, of course, was involved in a duplication (aka self-plagiarism) kerfuffle, for which he has apologized.

The author of the paper, Sarah Necker of the University of Freiburg, tells Retraction Watch:

The project is part of my dissertation; Bruno Frey provided support with regard to the development of the questionnaire and the distribution of the survey. He commented several times on the manuscript. The research project is unrelated to the self-plagiarism problems; the survey was conducted long before any criticism came up.


Written by Ivan Oransky

June 30, 2014 at 9:30 am

9 Responses

Subscribe to comments with RSS.

  1. I am sure that Max Keiser at RT.com would have something to say about this. Keiser is a firm critic of corruption in the economics sector, having once been on the inside at a trading firm on Wall Street. He would probably identify dozens of contradictions by top-level economists. For more: http://rt.com/shows/keiser-report/

    JATdS

    June 30, 2014 at 10:59 am

  2. “the “publish or perish” culture motivates researchers to violate research norms”

    Or rather, given how widespread these practices are, “the “publish or perish” culture motivates researchers to generate *new* research norms”. It’s difficult to say that 94% of people are violating what is ‘normal’ – at that frequency it surely *is* normal!

    And that, really, is the problem.

    David

    June 30, 2014 at 11:02 am

  3. Salami slicing should certainly not be lumped in with cherry picking. The former is ugly, but it does not lead to bad data or theories in the field. The latter is a real problem as it can lead, and likely has led, to bad policy based on faulty data.
    I agree strongly with David, least publishable units (LPUs) are simply the new normal in academic publishing. The emergence of the h-index instead of the raw number of publications as a metric has damped this a bit, but also led to new shenanigans in self-citation and groups cross-citing each other inappropriately. Again though, none of this is a problem outside of the field except behaviors that create false or skewed data.

    Scott

    July 1, 2014 at 3:25 am

    • Scott, I respectfully disagree with the position that salami publication is more benign than cherry picking. It depends. Salami publication comes in many forms. In many cases, different salami slices share some of the same data from, say, control groups, and if there is only ambiguous or nonexistent cross-referencing, these cases can distort the scientific record if readers believe that the data from each study are new. But even when discrete salami slices contain completely different data and are published covertly, without clear cross-referencing between the slices, such scenarios can be highly problematic.

      I’ll give you a simple scenario from the social sciences. Suppose I give subjects a packet of questionnaires, including four personality inventories and some attitude and behavioral measures. I then decide to publish the outcomes of correlations between a couple of the personality measures and the behavioral measures in one journal, and then I publish the results of correlations between the other two personality measures and the other behavior and attitude measures in another article. Again, no single datum is shared between the journal articles, but there is no cross-referencing between them.

      Anyone who has done questionnaire research should be familiar with the many contextual effects that arise when questionnaires are given together as part of the same package. What are these effects? Very often subjects will adjust their responses to one questionnaire based on their responses to the preceding questionnaire (“if I answered one personality inventory as an extrovert, I better answer this behavioral measure in a way that is consistent with the type of personality I just reported!”). Admittedly, this is typically done unconsciously and in a very subtle way, but it can be significant enough to produce completely different study outcomes.

      Evidence suggests that these context effects extend to how the study is presented and even to the individual investigator administering the study. That is, subjects will respond one way if the questionnaires are presented in a certain way (e.g., as a study of personality) versus a different way (e.g., as a study of possible correlates of serial killers). They will also respond differently depending on the individual researcher administering the study. The bottom line is that not knowing the actual relationship between the studies can lead readers to erroneous interpretations of the results.

      For a summary of context effects in questionnaire research, please see the work of James Council.

      Reference

      Council, J. (1993). Context effects in personality research. Current Directions in Psychological Science, 2, 31–34.

      Miguel Roig

      July 1, 2014 at 8:07 am

  4. Referees often do not weigh negative results or failures favorably, but weigh positive results or success highly favorably. Finding an alternative route to solve a problem may often be an addition to knowledge (knowledge regarding the route, not regarding the destination), but it is often discouraged by the referees.
    Then cherry-picking emerges, increasing t-values through dubious practices emerges, and so on. Referees are often considered more knowledgeable than the author, but anyone who knows how referees are chosen also knows that publication of one or two papers in a journal, or citation by an author, makes one worthy of being a referee, too. Unless these loopholes are plugged, misconduct will continue.

    SK Mishra

    July 1, 2014 at 8:07 am

  5. Some saucy bits from the paper: “Having accepted or offered gifts in exchange for (co-)authorship, access to data, or promotion is admitted by 3%. Acceptance or offering of sex or money is reported by 1-2%.”

    Rolf Degen

    July 1, 2014 at 10:10 am

  6. http://www.rofea.org/index.php?journal=journal&page=article&op=view&path%5B%5D=96

    http://www.rofea.org/index.php?journal=journal&page=article&op=view&path%5B%5D=41

    This is part of a larger story involving two of these authors with accusations of plagiarism, self-plagiarism, data-fraud, nepotism, and anonymous accusers.

    twistor

    July 2, 2014 at 10:44 am

  7. Not checking the contents of work cited is lazy, to be sure. But of all the things in a paper that you could fail to check, it’s pretty much the least important, far less important than checking the correctness of your results, for example. Most obviously, it’s less important than failing (from laziness) to find and cite a relevant paper at all. The inclusion of this item under the category of misconduct makes sense if you are talking about undergraduate essays, where the content is presumed to be unoriginal, but not as regards academic research.

    John Quiggin

    July 6, 2014 at 5:57 am

  8. I wish to disagree with Miguel Roig to a certain extent, using a clear example in which salami slicing is induced by the editor and/or publisher. I have had ample occasions in my career where a publisher, or journal editor, has requested a large data set to be slimmed down (or chopped into pieces) because of page limits, or because the data set is too large. In other words, almost under threat of a rejection, the author is left with absolutely no choice but to split the data set to create two (or more) smaller papers.

    In this case, one can say, quite confidently, that salami slicing was induced by the editor/journal/publisher, yet this is an issue that is almost never discussed or mentioned by the hardline salami-slicing critics, simply because it is against the interests of the publishers. It is time for scientists to start standing up against publishers and revealing instances where they have literally been forced to commit salami slicing (either that or face a rejection, which no logical or reasonable scientist wants).

    I do fully agree that scientists who commit salami slicing of their own free will, simply to generate more papers or to manipulate their citation index, should be equally criticized and exposed (although I would personally be wary of calling the act unethical unless the intent to abuse or deceive can be proved, or unless they have gamed the publishing system to gain financial advantage).

    JATdS

    August 22, 2014 at 10:02 am

