A pair of business researchers in Pittsburgh has lost a controversial 2017 paper on how institutional stock holdings affect tax strategies amid concerns about the validity of the data.
The article, “Governance and taxes: evidence from regression discontinuity,” which appeared in The Accounting Review, was written by Andrew Bird and Stephen Karolyi, of Carnegie Mellon’s Tepper School of Business.
According to the abstract:
We implement a regression discontinuity design to examine the effect of institutional ownership on tax avoidance. Positive shocks to institutional ownership around Russell index reconstitutions lead, on average, to significant decreases in effective tax rates (ETRs) and greater use of international tax planning using tax haven subsidiaries. These effects are smaller for firms with initially strong governance and high executive equity compensation, suggesting poor governance as an explanation for the undersheltering puzzle, and appear to come about as a result of improved managerial incentives and increased monitoring by institutional investors. Furthermore, we observe the largest decreases among high ETR firms, and increases for low ETR firms, consistent with institutional ownership pushing firms towards a common level of tax avoidance.
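For readers unfamiliar with the design, the idea is that Russell 1000/2000 membership changes discontinuously at a market-capitalization rank cutoff, producing a quasi-random jump in institutional ownership that can then be used, via a two-stage regression, to estimate the effect of ownership on effective tax rates. The sketch below illustrates that two-stage logic on simulated data; it is a hypothetical illustration only, not the authors' (unreleased) code, and every variable name, cutoff convention, and effect size in it is invented.

```python
# Minimal, hypothetical sketch of a fuzzy regression discontinuity design
# on simulated data. Nothing here reflects the authors' actual code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Running variable: market-cap rank centered at the Russell 1000/2000 cutoff.
rank = rng.uniform(-500, 500, n)
# z = 1 for firms just inside the Russell 2000, where index weights (and
# hence institutional ownership) jump discontinuously.
z = (rank >= 0).astype(float)

# Simulated data: index assignment shifts institutional ownership (io),
# and ownership in turn lowers the effective tax rate (true effect = -0.2).
io = 0.30 + 0.10 * z + 1e-4 * rank + rng.normal(0, 0.05, n)
etr = 0.35 - 0.20 * io + rng.normal(0, 0.03, n)
df = pd.DataFrame({"rank": rank, "z": z, "io": io, "etr": etr})

# First stage: the cutoff indicator instruments for institutional ownership,
# controlling for the running variable separately on each side of the cutoff.
first = smf.ols("io ~ z + rank + z:rank", data=df).fit()
df["io_hat"] = first.fittedvalues

# Second stage: effect of (instrumented) ownership on the effective tax rate.
# Two-stage-by-hand understates standard errors; a real analysis would use a
# dedicated IV estimator and local-linear bandwidth selection.
second = smf.ols("etr ~ io_hat + rank + z:rank", data=df).fit()
print(f"Estimated effect of ownership on ETR: {second.params['io_hat']:.3f}")
```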
The article triggered a 100-plus-page thread on Economics Job Market Rumors poking holes in the analysis, as well as an entry on PubPeer.
It also prompted a lengthy analysis in the January 2018 issue of Econ Journal Watch by Alex Young, then of North Dakota State University and now of Hofstra, who attempted to replicate both the 2017 article and the 2015 version of the work:
In October 2015, I attended the UNC/Duke Fall Camp conference for accounting at Chapel Hill, North Carolina, where Professor Bird presented the 2015 version of the paper. I approached Professor Bird and raised some concerns about the specification used in the paper. In January 2017, the paper was published in The Accounting Review, and I noticed that the description of the specifications had changed from the 2015 version, while the numbers stayed the same in all 11 tables.
On 18 April 2017 I emailed Professor Bird, reminding him of our conversation in October 2015, and requested that they share their data and code. On 20 April 2017 Professor Karolyi responded with a brief message that ignored my request. On 5 May 2017 I explained my concerns in an email to Professor Edward Maydew (the editor who was assigned to the BK paper at The Accounting Review). On 9 May 2017 Professor Maydew emailed me to say that he had forwarded my message to the journal’s Senior Editor. After waiting for six months with no reply, I contacted Econ Journal Watch on 9 November 2017. On 15 November 2017 the EJW chief editor, Daniel Klein, emailed Professors Bird and Karolyi requesting that they share their data and code. The next day they responded, without addressing the request. On 21 November 2017, Klein replied and repeated the request; later that day they replied, declining the request. Meanwhile, after my also having sent in November 2017 follow-up emails to Professor Mary E. Barth, the current Senior Editor of The Accounting Review, Professor Barth emailed me on 1 December 2017 to say that The Accounting Review was commencing its process for the investigation of possible misconduct.
Immediately following Young’s analysis in Econ Journal Watch is a rebuttal by Bird and Karolyi:
First, we are excited to see that, like Mozaffar Khan, Suraj Srinivasan, and Liang Tan (2017) and Shuping Chen, Ying Huang, Ningzhong Li, and Terry Shevlin (2018), Alex Young has successfully replicated our main findings on the effect of institutional ownership on tax avoidance behavior. We are thrilled that many interested researchers have pursued follow-up studies to ours.
Second, in light of Alex’s accusation, we have reviewed the series of drafts that we submitted during the publication process at The Accounting Review during 2015 and 2016. In doing so, we identified a potentially confusing description of our methodology, which was introduced during the final iteration of copy editing at The Accounting Review. This description conflicted with our correct and clear description of our methodology elsewhere in the published version of the paper. Nevertheless, this error apparently created confusion about the relationship between tabulated estimates from a 2015 working paper version of our paper and the 2017 published version. The methodology did not change, and, thus, the tabulated estimates did not change either.
In an attempt to mitigate future confusion for especially interested readers of our paper, we have requested to make a clarification in The Accounting Review.
In the end, however, The Accounting Review decided to retract the article. According to the notice:
The authors acknowledged that the published version of their article misstates the use of CRSP-based index membership in the main specifications and Russell-based index membership data as a robustness test. The article asserts that the two index membership definitions do not produce quantitatively different estimates. However, the authors were unable to provide the original data and code requested by the publisher that reproduce the findings, as shown in the article’s tables, supporting this assertion. Accordingly, the article has been retracted.
Neither Bird nor Karolyi responded to a request for comment from Retraction Watch. However, last February they offered an explanation for the discrepancy, in which they acknowledged that they:
incorrectly described our use of rankings in the description of methodology in Section II. Whereas we state that we use end-of-May rankings constructed from CRSP in our main specifications and June rankings from Russell in our robustness tests, we actually used June rankings from Russell in our main specifications and end-of-May rankings constructed from CRSP in robustness tests. We note that in a description of these robustness tests in Section III of Bird and Karolyi (2017), we state “Next, we try using an alternative market capitalization ranking of firms based on data from CRSP rather than that directly provided by Russell…” As a result, the two Bird and Karolyi (2017) descriptions are internally inconsistent.
By comparison, a circulated working paper version of Bird and Karolyi (2017) consistently describes the results as using June rankings from Russell in the main specifications and CRSP-constructed May rankings in the robustness tests. The tables and figures in the working paper version are, with very minor exceptions, identical to those in the published version.
The version of the paper that was accepted by The Accounting Review consistently stated that the main specifications used June rankings from Russell. Unfortunately, in attempting to update the paper to better describe the methodology relative to the evolving Russell index reconstitution literature (i.e., these papers had changed between our first and final submissions), we inadvertently switched the description in Section II in the final version. On December 5, 2017, we asked The Accounting Review to let us issue a correction to address the internal inconsistency in the description of methodology in the published version of the paper. This mistake has caused a lot of understandable confusion and for that we sincerely apologize to The Accounting Review and its readers.
Young, the author of the Econ Journal Watch analysis questioning the paper, tells Retraction Watch:
I believe that the American Accounting Association and The Accounting Review did the right thing, and that the notice from The Accounting Review clearly summarizes the issue: the article asserted that two different sources of data produce similar estimates, but the authors were unable to provide the original code and data to reproduce the findings in the article’s tables to support the assertion, which led to the retraction.
Thanks to Alex Young and Dan Klein for their excellent work, and to Retraction Watch for its coverage.
Having some experience in my professional life with the Russell index data, I somewhat sympathize with the authors' apparently inadvertent mistakes in the description of their methodology. However, I don't see an explanation from them (here, at least) for their inability or unwillingness to make the original code and data available. The possible reasons for this are all pretty damning for the authors.
Seems like journals could improve transparency by immediately flagging a challenged article with an "original data and code not submitted" warning to readers. This would be a non-disparaging factual statement that might still light a fire under authors who are reluctant to aid a journal's inquiry.
A related paper by Andrew Bird and Stephen Karolyi published in the Review of Financial Studies has received a (paywalled) “Statement” and Corrigendum:
https://dx.doi.org/10.1093/rfs/hhz113
The Review of Financial Studies paper has the same problem as The Accounting Review paper (“the description of the specifications had changed from the 2015 version, while the numbers stayed the same in all tables”).
This can be verified by comparing the paper versions:
https://imgur.com/a/hWIAraw
Despite the same problem repeating itself, and despite Bird and Karolyi acknowledging that "the methodology did not change" in The Accounting Review paper, the Corrigendum gives no indication that "the methodology [also] did not change" in the Review of Financial Studies paper.
Given that Bird and Karolyi “were unable to provide the original data and code requested” for The Accounting Review paper, the Corrigendum also does not clarify whether “the data and code provided by the authors” to the committee were the originals.
Lastly, despite these omissions, the Statement nevertheless contains several points that make it puzzling why the Review of Financial Studies paper was not also retracted:
- "the Committee discovered that equations (1) and (2) on page 3254, which describe the two-stage model, do not describe the actual regressions that were used to generate the results presented in the paper."
- "the committee found that the paper's main inferences are not robust if the models are estimated as described on page 3254 of the paper."
- "the Committee concluded that the methodology described in the paper does not generate the results reported in that paper."