Why aren’t there more retractions in business and economics journals?

A new paper has catalogued retractions over the past few decades in business and economics journals — and hasn’t found very many.

In “Retraction, Dishonesty and Plagiarism: Analysis of a Crucial Issue for Academic Publishing, and the Inadequate Responses from Leading Journals in Economics and Management Disciplines,” which just went online in the Journal of Applied Economics and Business Research (JAEBR), Solmaz Filiz Karabag and Christian Berggren identified 31 retractions in business journals dating back to 2005, and just six in economics journals, dating back to 2009.

The historical numbers in business journals are even lower when you consider that ten of those 31 retractions are quite recent, by frequent Retraction Watch subjects Ulrich Lichtenthaler, Dirk Smeesters, and Diederik Stapel.

The authors estimate that their searches — through several databases, including ScienceDirect, JSTOR, and EBSCO — cover about 2,000 journals: not completely comprehensive, but certainly representative. And while it’s difficult to estimate the rate of retractions in these fields, the total number, accumulated over years, is about what we see in a single month in the rest of science.

The paper is a welcome addition to the retraction literature, which has been picking up steam in the life sciences but remains sparse in other disciplines. Andrew Gelman wrote about retractions in economics earlier this year, and could only recall two — one by Emily Oster and the other by Bruno Frey. But these two don’t show up in the JAEBR paper, because despite letters from Frey describing his duplication, and a whole new paper from Oster refuting her own results, they aren’t classified as retractions by the journals that published the original work.

The RePEc plagiarism committee site has more retractions listed, but again these are typically not classified as retractions.

This — along with the fact that the authors found that only a handful of the journals they examined had policies on plagiarism or academic dishonesty in place — could explain, at least in part, what Gelman called an “all too low” number of retractions. The fact that economists tend to post working papers, available for scrutiny for years before they’re officially published, may also contribute. It would be useful, for example, to compare the rate of retraction in economics to that in physics, whose researchers have similar practices.

For one of the authors of the paper, the issue struck close to home, as the paper’s introduction notes:

Being an editor of the Journal of Applied Economics and Business Research (JAEBR), one of this paper’s authors was promoting this journal during the gala dinner of the R&D Management Conference in Grenoble, in May 2012. One senior researcher asked if we had had any problems with plagiarism, and a young researcher colleague from Germany commented “Come on, who will plagiarize?” The sad answer was that plagiarism is a real issue also for this very new journal. The journal sometimes receives a full paper already published by someone else, where the submitter simply changes the author(s) name; in other cases an author changes the title of his or her previously published paper and submits it. Another example is when a writer consciously copies portions of published articles and passes them off as his/her own work.

And that’s for a peer-reviewed journal that launched only last year and pays

special attention to emerging markets economics, but it is open to high-quality papers from all fields of applied economics, business and finance.

26 thoughts on “Why aren’t there more retractions in business and economics journals?”

      1. Funny you say that, Jeff. Ivan himself noted some time ago the paucity of retractions in climate science. Not sure why you would get agitated because of a statement of fact, and I never ever spoke of a “ferocious scientific debate about the reality of climate change” (quite the opposite in fact).

        But I like the way you think 🙂

      2. @omnologos I said that because most comments of this type pertaining to climate science are attempts to discredit and misinform. In trying to understand why climate science would be pointed out as low in retractions, the only reason I could come up with is a belief that climate science is somehow less rigorous than it should be, or that dissenting voices are squelched. With no elaboration added, I assumed these beliefs would stem from the oft-repeated drivel that climate science is merely a vast conspiracy.

        Apologies if that’s not what you meant, I assumed the worst in the absence of elaboration. 🙂

        Climate science lacks retractions because of a lack of dissension in scientific opinion. Few papers that are not based in facts ever make it past peer review.

        1. Jeff – I have a different opinion. I do not think there is a relationship between the level of consensus and retractions – after all, plagiarists might find it easier to show off their abilities in an area where there is less disagreement and therefore the same ideas tend to be bandied about.

          There are only so many ways one can say “globe is warming”.

          IMNHO instead the lack of retractions is due to the siege mentality you sadly displayed a couple of comments ago. If the most important thing is to defend the science against perceived attacks, it makes sense not to press editors to retract much, lest “the enemies” find ammunition in that to further attack the science.

          RW had to survive something similar at the beginning, when it wasn’t clear whether it was going to turn into a “scientists are wrong” site or not.

      3. @omnologos An equally valid view of what might be happening. In fact, scientists ARE people, too, so the same kind of protectionism I have when reading these things would certainly translate into the field, probably sooner rather than later.

        Unfortunate, really, when it’s the dissension that actually enables science to thrive… At least, in my not-so-humble opinion. 🙂

    1. It’s not clear what point you’re making, omnologos. However you define it, “climate change science” is a pretty small field, so one doesn’t expect that many retractions. Likewise, since it’s an extremely high-profile subject, it’s not the sort of field that attracts the low-grade plagiarism and other petty shenanigans that are used to attempt to boost CVs. Perusal of Retraction Watch, for example, shows that that stuff tends to go into the lower echelons of the scientific literature, where presumably it’s hoped it won’t be noticed!

      But I can think of several retractions in the “climate change science” field that cover the range from cock-up through honourable error through chicanery (see below). There’s more that could be said about this, but this thread isn’t the place…

      administrative error/cock-up

      honourable error

    1. Ouch! However, this runs against the reality in psychology, which is neither a formal nor an empirical science and is also the most corrupted field of research.

      Maybe these areas experience less pressure to stamp-collect papers to boost one’s career?

      1. Psychology may not be a formal science (at least not in its entirety, although there are certainly pockets of highly formalized science in this field — think mathematical psychology), but large parts of it are certainly empirical, and not just in a soft, cargo-cult-sciency way. I’d be careful about making sweeping statements about other sciences.

  1. It’s not an experimental subject, so there’s less scope to falsify results. It would be interesting to compare the retraction rate between experimental papers and review papers in scientific fields.

    1. Agreed.

      For example, in theoretical physics, the fact that someone does an experiment that falsifies a theory propounded in an old journal article would not generally be seen as grounds for retraction of the old theoretical physics article. Similarly, economists feel no obligation to retract papers predicting economic outcomes that reality does not bear out.

      My sense is that retractions overwhelmingly involve problems in experimental data reported in an old article, and not plagiarists who get caught.

  2. This does not surprise me and there are many entirely plausible reasons for this. Economics has a tradition of pre-publishing as noted above. Measurement instruments for some of this work, such as surveys, are developed by individual authors. Data are generally not shared openly. Much of the research in these two fields is temporally and spatially specific, i.e. findings can vary over time and per country. Business research is a relatively novel discipline and needs time to mature. Some of the work is qualitative, where you have to rely even more on what the authors (do not) share with you when judging work. Real replication is difficult to do and nearly impossible to publish because of pressures for theoretical / methodological novelty. So it is only in rare cases that blatant problems emerge, Lichtenthaler being a case in point, and then lead to retractions. Arguably the fact that some retractions are now starting to come about is actually a good sign.

  3. I’d second what a few others here have commented – economics isn’t based on rigorous experimental designs and collection of results, and as such, whatever is published is less open to challenge by replication or repetition of the same ‘experiments’ (as they aren’t really experiments in the first place). Most economic theories are based on analyses of association between different data sets (e.g. inflation rate and GDP growth across countries). Such association data are pretty easy to produce – I wouldn’t be surprised if there’s some effect of ‘the publication hurdle’; i.e. since it’s easier to do these association analyses, there’s less impetus to make the data up than there is in sciences involving months of experimental work.

    I suspect a large part of the low rate of retractions also has to do with readers putting less effort into getting down into the nitty-gritty details of a paper (ooh… is that a bit harsh?)

  4. Aren’t most business (accounting, management, etc.) papers just history papers (look at business papers from 15–20 years ago: most never saw the internet/Twitter/Facebook/B-to-B, take your pick) and thus subject to very liberal/personal readings of the data? By the time a researcher decides to conduct a study, the study is undertaken, the results are looked at, the “numbers” are worked, and the paper is written, peer reviewed and then published (in some cases over two years), most of the reasons for the paper have been overtaken by new laws, consumer patterns and general economies. Also, most businesses don’t fully disclose trade practices, secrets, ongoing research, etc. to any researcher (do you really think Apple will freely talk to a researcher about a product in development?). I mean, do companies really want to give away to the competition the real reason for their success?

  5. I think the explanation is principally two-fold. First, economists hold their profession to very low standards – in fact, we envy those that make big money so much that we are willing to overlook shoddy, and even intentionally misleading or incorrect, results. Second, so much of the data is either proprietary or not openly shared. I’ve had a number of authors, when asked for their data, respond that it is publicly available. That is of no use, since the data is ALWAYS manipulated in some fashion – e.g., treatment of missing data, what happens when data from different sources are merged, etc. It is virtually impossible to reproduce somebody’s results (I’ve tried on numerous occasions, never successfully).

    I don’t have a very high opinion of my own profession (in case you couldn’t tell).

  6. A case of where we already know the answer.

    Richard Tol (@RichardTol) December 10, 2012 at 8:50 am wrote:

    “I’m not sure about your last sentence. Are publishing not common, or are they not commonly reported? Energy Economics, a journal that I edit, bans authors for life. We catch most unethical behaviour during the review process, so that there is no reason to publish such bans. I’m not convinced that privacy laws would allow us to publish this.”

    In particular “We catch most unethical behaviour during the review process, so that there is no reason to publish such bans.”

    If we take it all at face value, the reason we don’t see more retractions in business and economics journals is that they have perfected the system of detecting unethical behaviour. I think people should find out more about how they manage this.

  7. Business/economics papers are written using language that is often oblique and mathematical reasoning in which huge assumptions are regularly glossed over. On top of that, the atmosphere in most departments is clubby, which means the peer-review process is undermined.

    When I was in uni, at one of the most respected business schools in the US, it was common knowledge that professors would write the same paper year after year and get the warmed-over material republished! This behavior was a standing joke, in fact.

    You’ll never have a real peer review process, or the necessary retractions that follow, when there is academic incest to the degree that one finds in Bus/Econ. The disciplines are nothing more than vehicles for Foundation/Government propaganda.

  8. I haven’t read the paper yet, so may have missed something important in the search methods, but here is a paper in a prominent journal that the authors of the JAEBR paper seem to have missed:

    Gerking, Shelby, and William E. Morgan. 2007. “Effects of Environmental and Land Use Regulation in the Oil and Gas Industry Using the Wyoming Checkerboard as a Natural Experiment: Retraction.” American Economic Review, 97(3): 1032.


  9. I just googled for retractions in econometrics, and found this recent one:

    Torben Andersen & Oleg Bondarenko: “VPIN and the Flash Crash”
    Journal of Financial Markets

    This journal has a retraction policy; however, they seem to retract papers only in very extreme circumstances.

    “Article Withdrawal: Only used for Articles in Press which represent early versions of articles and sometimes contain errors, or may have been accidentally submitted twice. Occasionally, but less frequently, the articles may represent infringements of professional ethical codes, such as multiple submission, bogus claims of authorship, plagiarism, fraudulent use of data or the like.”

