Journal pulls paper by economist who failed to disclose data tinkering

An Elsevier journal last week retracted a paper by two senior economists who used questionable methods to replace large chunks of missing observations in their dataset without disclosing the procedure.

The move follows a Retraction Watch story published in February that revealed the paper’s corresponding author, Almas Heshmati of Jönköping University in Sweden, used Excel’s autofill function and other undisclosed operations to populate thousands of empty cells, or well over 10% of the dataset. 

In a guest post on our blog, economist Gary Smith argued Heshmati and his coauthor had “no justification” for not describing what they had done. Smith also commented in an article for Mind Matters that “the solution to an absence of data is not to fabricate data.”

Less than three weeks after our report, Elsevier told us it would pull the study, “Green innovations and patents in OECD countries,” which appeared last year in the Journal of Cleaner Production. On May 4, the publisher issued a retraction notice stating:

Concerns were raised about the data in this paper post-publication. The first author confirmed to the Editors that the data contained many gaps across countries and over time. The majority of the missing unit values appeared in the beginning and end years (1990–1992 and 2017–2018). To represent country data over time, the first author imputed the missing unit values using forward and backward trends based on three consecutive values in Excel. The first author claims that this resulted in a balanced dataset inclusive of the sample countries and further states that the imputation of missing values based on variable-specific trends will not change the result and that a large number of observations will produce more stable estimated coefficients.

However, the first author agrees that he has not explained the imputation in the data section of the article and that the effects the imputed data have on the results have not been tested. Since a total of 36 variables had missing units, few observations would have remained.

The Editors and the authors have concluded that the findings of the paper may be biased.
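The “forward and backward trends based on three consecutive values” described in the notice amount, roughly, to extending a linear trend from the edges of each country’s time series, much as Excel’s fill-series feature does. Here is a minimal sketch of that kind of edge imputation, assuming a pandas Series indexed by year; the function name and the three-point linear fit are illustrative assumptions, not the authors’ actual procedure.

```python
import numpy as np
import pandas as pd

def fill_edge_gaps(series: pd.Series) -> pd.Series:
    """Extrapolate leading/trailing NaNs from a linear trend fitted to the
    three nearest observed values (akin to Excel's fill-series autofill).
    Hypothetical illustration only -- not the retracted paper's code."""
    s = series.copy()
    obs = s.dropna()
    if len(obs) < 3:
        return s  # not enough points to fit a trend

    # Backward trend: fit the first three observed points, extend to earlier years.
    head = obs.iloc[:3]
    slope, intercept = np.polyfit(head.index.astype(float), head.values, 1)
    for year in s.index[s.index < head.index[0]]:
        s.loc[year] = intercept + slope * float(year)

    # Forward trend: fit the last three observed points, extend to later years.
    tail = obs.iloc[-3:]
    slope, intercept = np.polyfit(tail.index.astype(float), tail.values, 1)
    for year in s.index[s.index > tail.index[-1]]:
        s.loc[year] = intercept + slope * float(year)

    return s

# Toy example: a country series observed for 1993-2016, missing at the edges.
years = pd.Index(range(1990, 2019), name="year")
patents = pd.Series(np.nan, index=years)
patents.loc[1993:2016] = np.linspace(50, 120, 24)
filled = fill_edge_gaps(patents)
print(filled.loc[[1990, 1991, 1992, 2017, 2018]])
```

Whatever the exact mechanics, filling edge years this way manufactures observations from a fitted line rather than from data, which is why disclosure and sensitivity checks matter.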

Heshmati did not immediately respond to a request for comment. 

His paper has been cited twice, according to Clarivate’s Web of Science, including once in February.

In a PubPeer comment on the retraction, sleuth Alexander Magazinov said “the imputation as described in the note is way more benign than the one described in the Retraction Watch blog post. Which description is more accurate?”

Heshmati has used the same dataset in past research. As we reported in February:

In 2020, he and two colleagues published a paper in Empirical Economics, a Springer Nature title, that bore a strong resemblance to the 2023 article and relied on the same patched-up dataset. The article mentioned neither the data gaps nor the Excel operations.

The publisher told us at the time it was “looking into the matter carefully.” In a May 9 email, however, Deborah Kendall-Cheeseman, a communications manager at Springer Nature, told us:

I’m afraid we don’t have an update we can share at the moment but we’ll let you know once we do.


2 thoughts on “Journal pulls paper by economist who failed to disclose data tinkering”

  1. Their main problem is that they are in the wrong field. If they had been climatologists it would have all been good.
    But seriously, stories like this, along with the mishandling of the “pandemic”, are why average people don’t just trust the science. With the current focus on winning grants, the temptation is strong for scientists to only “discover” things that they will get paid for.

  2. Yeah. Maybe this is true. If you do an imputation, which is fine, you do need to document the method, and I think it is best practice to compare your findings to complete-case analysis. I work in the health fields, and it is generally expected that you explain your methods, justify them based on prior research, and then present both imputed and unimputed results to demonstrate any disparities.
