Accounting fraud paper retracted for “misstatement”

The Accounting Review, a publication of the American Accounting Association, has retracted a 2010 paper, but the reason for the move is less than clear.

The article, “A Field Experiment Comparing the Outcomes of Three Fraud Brainstorming Procedures: Nominal Group, Round Robin, and Open Discussion,” was by James E. Hunton, an award-winning accountancy prof at Bentley University in Waltham, Mass., and Anna Gold [updated 1/22/13 to update link], of Erasmus University in Rotterdam, The Netherlands. It has been cited 24 times, according to Google Scholar.

According to the retraction notice:

The authors confirmed a misstatement in the article and were unable to provide supporting information requested by the editor and publisher. Accordingly, the article has been retracted.

That’s not altogether clear — was the misstatement an error? a blatant falsehood? — so we reached out to the review’s editor, John Harry Evans III, for comment. But what we got was something less than a, um, full accounting of the matter.

Evans told us that he wouldn’t say anything more about the paper, and that the notice:

sort of has to stand as stated.

Acknowledging that “best” in this case equals vague, Evans reiterated that

I think we described it in the way we felt was best.

The authors, for their part, stoutly disagree. We spoke with Hunton, who told us that the misstatement involved bad information provided to him and his coauthor by the firm that supplied the data for the study.

In a nutshell, the company misrepresented the number of U.S.-based offices it had: not 150, as the paper maintained (and as a reader had noticed might be on the high side, triggering an inquiry from the journal), but quite a bit less than that. In fact, the 150 figure came from combining U.S. offices with international outposts — an important difference, to be sure, but not one that necessarily would kill the paper.

So why not a correction?

That’s exactly the position we took. We clearly admitted there’s a misstatement

but the data were unaffected, Hunton said.

To our knowledge the study is unaffected by the description of the sample

The editors rejected that argument, insisting instead on a retraction. Hunton and Gold were told:

If you misunderstood this, how do I know that you didn’t misunderstand something else?

Compounding the problem, Hunton said, was the fact that the accounting firm refused to allow the researchers to share the data with the journal.

They said there was a confidentiality agreement.  There were live client files involved and they wouldn’t share the raw data.

So, faced with the realization that they were going to lose their appeal, Hunton and Gold changed tack and asked for a retraction.

I am very disappointed. We insisted upon a correction, and when that was rejected we said the best thing to do is to voluntarily withdraw the article unless or until we could clear this up in the future.

But Hunton said he was frustrated by the way the notice appeared, with its implication of misconduct.

This just wasn’t right. They should have stated the circumstances and made it very clear that we couldn’t provide the information based on a confidentiality agreement.

Update, 6:30 p.m. Eastern, 11/27/12: See a comment from Hunton and his co-author with more details.

67 thoughts on “Accounting fraud paper retracted for ‘misstatement’”

  1. So the editors did not ask for supporting data back in 2010? They should have. I completely see the point that if the authors misunderstood one fact there might be more that they misunderstood and the editors would like to take a look at the data to be sure. It would be better for editors to insist on having the data on hand before a manuscript gets into print, in case questions arise about whether authors understand their data. But it does seem a little harsh to make availability of the data a non-negotiable point so long after acceptance of the manuscript.

    1. FYI for respondents to this topic, the Harry S. Markopolos posting to this forum is not the same person as Harry Markopolos who turned in Bernard Madoff to the SEC. I know this to be true because I’m Harry Markopolos (Whitman, MA) and my middle initial is not S. In fact, I don’t use a middle initial figuring that with a last name like mine, what’s the point? I just wanted to clarify this so that no one thinks he is me or I am him.

  2. A statement from the authors (Gold / Hunton):

    Explanation of Retraction (Hunton & Gold 2010)

    On November 9, 2012, The Accounting Review published an early-view version of the voluntary retraction of Hunton & Gold (2010). The retraction will be printed in the January 2013 issue with the following wording:

    “The authors confirmed a misstatement in the article and were unable to provide supporting information requested by the editor and publisher. Accordingly, the article has been retracted.”

    The following statement explains the reason for the authors’ voluntary retraction.

    In the retracted article, the authors reported that the 150 offices of the participating CPA firm on which the study was based were located in the United States. In May 2012, the lead author learned from the coordinating partner of the participating CPA firm that the 150 offices included both domestic and international offices of the firm. The authors apologize for the inadvertently inaccurate description of the sample frame.

    The Editor and the Chairperson of the Publications Committee of the American Accounting Association subsequently requested more information about the study and the participating CPA firm. Unfortunately, the information they requested is subject to a confidentiality agreement between the lead author and the participating firm; thus, the lead author has a contractual obligation not to disclose the information requested by the Editor and the Chairperson. The second author was neither involved in administering the experiment nor in receiving the data from the CPA firm. The second author does not know the identity of the CPA firm or the coordinating partner at the CPA firm. The second author is not a party to the confidentiality agreement between the lead author and the CPA firm.

    The authors offered to print a correction of the inaccurate description of the sample frame; however, the Editor and the Chairperson rejected that offer. Consequently, in spite of the authors’ belief that the inaccurate description of the sample does not materially impact either the internal validity of the study or the conclusions set forth in the Article, the authors consider it appropriate to voluntarily withdraw the Article from The Accounting Review at this time. Should the participating CPA firm change its position on releasing the requested information in the future, the authors will request that the Editor and the Chairperson consider reinstating the paper.

    Signed:

    James Hunton
    Anna Gold

    References:
    Hunton, J. E. and Gold, A. (2010), “A field experiment comparing the outcomes of three fraud brainstorming procedures: Nominal group, round robin, and open discussion,” The Accounting Review 85(3): 911-935.

  3. The explanation provided by Hunton and Gold regarding the recent TAR retraction seems to raise more questions than it answers. Some of those questions raise serious concerns about the validity of the study.

    1. In the paper, the audit clients are described as publicly listed (p. 919), and since the paper describes SAS 99 as being applicable to these clients, they would presumably be listed in the U.S. However, according to Audit Analytics for fiscal year 2007, the Big Four auditor with the greatest number of worldwide offices with at least one SEC registrant was PwC, with 134 offices (the remaining firms each had 130 offices). How can you take a random sample of 150 offices from a population of (at most) 134?

    Further, the authors state that only clients from the retail, manufacturing, and service industries with at least $1 billion in gross revenues with a December 31, 2007 fiscal year-end were considered (p. 919). This restriction further limits the number of offices with eligible clients. For example, the Big Four auditor with the greatest number of offices with at least one SEC registrant with at least $1 billion in gross revenues with a December 31, 2007 fiscal year end was Ernst & Young, with 102 offices (followed by PwC, Deloitte and KPMG, with 94, 86, and 83 offices, respectively). Limiting by industry would further reduce the pool of offices with eligible clients (this would probably be the most limiting factor, since most industries tend to be concentrated primarily within a handful of offices).

    2. Why would the firm use a random sample of its worldwide offices in the first place, especially a sample including foreign affiliates of the firm? Why not use every US office (or every worldwide office with SEC registrants)? The design further limited participation to one randomly selected client per office (p. 919). This design decision is especially odd. If the firm chose to sample from the applicable population of offices, why not use a smaller sample of offices and a greater number of clients per office? Also, why wouldn’t the firm just sample from the pool of eligible clients? Finally, would the firm really expect its foreign affiliates to be happy to participate just because the US firm is asking them to do so? Would it not be much simpler and more effective to focus on US offices and get large numbers of clients from the largest US offices (e.g., New York, Chicago, LA) and fill in the remaining clients needed to reach 150 clients from smaller offices?

    3. Given the current hesitancy of the Big Four to allow any meaningful access to data, why would the international offices be consistently willing to participate in the study, especially since each national affiliate of the Big Four is a distinct legal entity? The coordination of this study across the firm’s international offices seems like a herculean effort, at least. Further, even if the authors were not aware that the population of offices included international offices, the lead author was presumably aware of the identity of the partner coordinating the study for the firm. Footnote 4 of the paper and discussion on page 919 suggest that the US national office coordinated the study. It seems quite implausible that the US national office alone would be able to coordinate the study internationally.

    4. In the statement that has been circulated among the accounting research community, the authors state:

    “The second author was neither involved in administering the experiment nor in receiving the data from the CPA firm. The second author does not know the identity of the CPA firm or the coordinating partner at the CPA firm. The second author is not a party to the confidentiality agreement between the lead author and the CPA firm.”

    However, this statement is inconsistent with language in the paper suggesting that both authors had access to the data and were involved in discussions with the firm regarding the design of the study (e.g. Footnote 17). Also, isn’t this kind of arrangement quite odd, at best? Not even the second author could verify the data. We are left with only the first author’s word that this study actually took place with no way for anyone (not even the second author or the journal editor) to obtain any kind of assurance on the matter. Why wouldn’t the firm be willing to allow Anna or Harry Evans to sign a confidentiality agreement in order to obtain some kind of independent verification? If the firm was willing to allow the study in the first place, it seems quite unreasonable for them to be unwilling to allow a reputable third party (e.g. Harry) to obtain verification of the legitimacy of the study. In addition, assuming the firm is this extremely vigilant in not allowing Harry or Anna to know about the firm, does it seem odd that the firm failed to read the paper before publication and, therefore, note the errors in the paper, including the claim that is made in multiple places in the paper that the data came from a random sample of the firm’s US offices?

    5. Why do the authors state that the paper is being voluntarily withdrawn if the authors don’t believe that the validity of the paper is in any way questioned? The retraction doesn’t really seem voluntary. If the authors did actually offer to retract the study, that implies that the errors in the paper are not simply innocent mistakes.

    Given that most, if not all US offices would have had to be participants in the study (based on the discussion above), it wouldn’t be too hard to obtain some additional information from individuals at the firms to verify whether or not the study actually took place. In particular, if we were to locate a handful of partners from each of the Big Four who were office-managing partners in 2008, we could ask them if their office participated in the study. If none of those partners recall their office having participated in the study, the reported data would appear to be quite suspect.

    Am I missing something here?

    Sincerely,

    “Harry Markopolos”

    1. “Do you still beat your husband?”

      The question is considered unfair because it presumes the guilt of the accused.

      So too, in this episode.

      The sample size numbers in the study don’t add up. Therefore it must be fraud.

      I have opined, on this list, on the tendency of rookie auditors to see fraud in every column of numbers that doesn’t foot. Welcome to rookie auditor time — only now the rookie auditors are running the show. They obtained a conviction based on rumor — the numbers don’t add up — therefore it must be fraud. The data are confidential. Case closed. It must be fraud…. Reviewers — even vigilante reviewers — are always right. Case closed.

      Real auditors know that the consequences of alleging fraud are severe and that such allegations are made only with substantial proof. It is, regrettably, the case that reviewers, even post-hoc, vigilante reviewers such as “Harry M. Markopolos,” are regularly allowed to make crazy, batty, nutty assertions, and that authors, because of the rules of the game, MUST alter their papers to respond to reviewer nuttiness.

      How cowardly of “Harry M. Markopolos” to make these accusations anonymously. And how symptomatic of the craziness of anonymous peer review, in which reviewers can allege fraud without support, are held to be correct, and are required to offer nothing but allegations to permanently stain the reputations of respected scholars.

      It’s a mad, mad, mad world we academics live in.

      There are days, and weeks, in which I am ashamed to be a member of the community of accounting scholars. Count this month and the last in this category.

      Dan Stone

      1. I fail to see how my questions were unfair. At no point did I state “it must be fraud.” Instead, I simply pointed out significant inconsistencies within the study that merit further attention. The basis for my arguments is verifiable, objective data and observations about the state of the world in which the study took place. None of my questions presumes guilt, as you claim. Further, at no point did I state definitive conclusions of my own; I simply raised questions that I believe merit further consideration.

        To use your auditing analogy, when an auditor finds an inconsistency, they should investigate further. Ignoring the issue would be as irresponsible as immediately concluding that the inconsistency is definitive proof of fraud. While I did not claim that this is clearly fraud, you failed to offer an alternative explanation for the inconsistencies I pointed out. Instead you summarily dismissed any possibility of improprieties (or even meaningful errors).

        Your over-the-top response is a perfect example of the kind of behavior that justifies my decision to remain anonymous. You fail to substantively address any of the questions I raise and instead spend a significant amount of time attacking me for raising legitimate questions based on publicly available information.

        Regarding the state of our discipline as a whole, shouldn’t authors (and co-authors) have the responsibility to design their research in a way that will settle substantive accusations of fraud (using your words, not mine)? As accounting scholars, we should be especially aware of the need to perform our research in a way that stands up to accusations of fraud. Research that cannot withstand reasonable scrutiny has no place in our journals. Instead, such “research” merits the same level of scientific respect as the fields of astrology and alchemy.

        “Sherlock Markopolos”

      2. Dan, your comprehension of a logical argument is lacking, and calling Harry a coward, crazy, batty, and nutty does not assist your argument. Harry is raising well-reasoned and logical questions about Hunton’s research, something that was missing in the “peer review” process.

        Hunton, in his reply, uses a circular argument, i.e., “I write papers, which are peer reviewed and published, and that makes me a well-respected paper writer; therefore everything I write is beyond question, and if you question it I am not allowed to show you the proof, but I write papers…” If anyone questions any of those micro-statements, they are directed to the next statement.

        As an officer of the court and an instructor, if I were to present an argument and then say, “Well, I could prove it, but I can’t show you because I promised not to use the statements of proof,” I would be laughed out of court or the classroom. You used the statement that you beat your husband; well, if you’re Hunton, your response is, “I have written statements from 20 witnesses, but they won’t let me show you; but I am well respected, so you must accept my word.” I guess Dan would blindly accept that argument as proof.

        One thing that Harry did miss is how many people on the board or on committees of the American Accounting Association have written papers with Hunton, which would allow him to go almost 3 years before someone questioned his work, and who peer-reviewed this such that they wouldn’t question the evidence.

        Someday a lawyer with a lot of time on their hands is going to win a lot of money in a class action against people and universities who use research grants to write fraudulent papers; it’s only a matter of time.

  4. I’m not going to bother re-checking Harry’s work, but I did want to point out one thing. On p. 919 the authors state “…only firms with gross revenues LESS [emphasis added] than $1 billion would be allowed.” Harry appears to search for firms with gross revenues of AT LEAST $1 billion. I suspect this completely changes the numbers Harry reports in Point 1.

    1. Jay, thanks for pointing that out–I misread that part of the paper. While this wouldn’t change the first set of numbers I reported (i.e. regardless of size, no firm has more than 134 offices with SEC registrants), it does impact the second set of numbers. When I reran the analysis including only offices with SEC registrants with less than $1 billion in gross revenues, the number of offices with eligible clients decreased as follows:

      PwC: 80
      EY: 92
      Deloitte: 88
      KPMG: 88

      Also, I still haven’t limited the number of offices to the industries named by the authors, which would further restrict the number of offices with eligible clients. I would encourage others to replicate my analyses.

      “Harry S. Markopolos”

      1. Harry, I’m just reading the “Design” section on p. 919, but it does not say the clients were SEC registrants. Perhaps it’s stated somewhere else, I don’t know. I get why you make that leap, but given the audit firm initiated the selection and treatment condition assignment, non-registrants may have been used. Am I missing something?

  5. Jay, the authors state that “only publicly listed companies would be included.” Further, the paper repeatedly describes SAS 99 as being applicable to these companies. I don’t see it as a leap at all to say that the paper implies that the clients were SEC registrants. If they are subject to SAS 99 and publicly listed, then they would be SEC registrants.

    As further evidence, the design section describes the guidance memo as being “standardized for all auditors within each of the treatment conditions.” (p. 920) So, basically, within each condition each audit team was told to do the same thing. Let’s assume for a moment that the NA office was able to coordinate this monumental effort across the world to include auditors of foreign companies who are publicly listed, but not in the US (recalling my argument that any coordination with the international affiliates seems unlikely already, even if we are just talking about coordination across offices with clients who are SEC registrants). How could you have a single standardized guidance memo (per treatment condition) across audit teams that are subject to different standards, especially since Footnote 17 states that the memo included firm-specific guidance, policies, and procedures? Also, since the first author was involved in discussions regarding that guidance memo and Footnote 17 states that both authors thoroughly examined the memo, how would they not be aware that the memo was also being applied to companies who were not subject to SAS 99? In other words, in that scenario, how would the authors have been unaware that their random sample of offices included international affiliates?

    On that basis, I find it incredibly unlikely that the sample could have included audit teams of publicly traded companies who were not SEC registrants. Further, if, improbably, that was what happened, the data analysis within the paper would be highly suspect, having failed to account for some very relevant variables.

    “Harry”

  6. It seems to me that information is being used in an approximately rational way, but that the posters differ in their information set (e.g. Stone’s CV shows several coauthored papers with Hunton).

    It makes sense that a newbie auditor sees fraud in everything since they do not know the base-rate of fraud, nor are they likely to have well-calibrated likelihood functions. Although this could suggest an under-assessment of fraud, consider that the accounting curriculum at the university level (at least at my University) includes lots of coverage of fraud – nearly every course includes a case about fraud. Furthermore, a Type I error is considerably more costly in the workplace than it is in the classroom, and newbies are likely unaware of the magnitude of this cost.

    Here’s a rather bland comment: the evidence seems to suggest that the probability of fraud is in the open interval (0,1). Given the externalities of fraud being found in accy journals, it seems that the collection, disclosure, and evaluation of evidence are a good thing for our discipline, and academia as a whole. So I am glad to see Stone’s defense of Hunton as well as others’ analysis of the publicly available data. Since a Type I error is costly, I think posting anonymously ought to be acceptable to the extent the information and analysis can be replicated (i.e., the data are publicly available).

    My subjective likelihood function suggests that if this is fraud, it’s unlikely to be an isolated case. So, given Hunton’s long list of big-name coauthors, I expect that there are plenty of people with relevant information. Has Hunton had suspect data in the past (i.e., collected without coauthors, etc.)?

    PS – I am hoping this ends up being an honest mistake…

    -Accy PhD Student who does not want to bear potential reputation costs of his post because he sees no upside potential of disclosing his name.

  7. …and to be a little more explicit:

    Are the same mechanisms that can lead newbie auditors to (perhaps rationally) over-assess fraud risk present in Harry’s case?

    Regarding the ad hominem that is going on, we are not reasoning under certainty here, and once we are dealing with uncertainty I think disposition is relevant. However, conditioning on analysis and information (e.g., when the info is publicly available) ought to make personal characteristics irrelevant (hence the justifiability of anonymous posts, under certain conditions). That is to say, personal characteristics cause lies and poor analysis. But the latter are observable, and by the modern definition of causation that makes disposition uninformative (i.e., we have conditional independence).

    We are dealing with a similar situation regarding Hunton’s list of big-name coauthors. Absent more information, it suggests Hunton is unlikely to be dishonest (e.g., wouldn’t others notice?). However, if we condition on the coauthors’ assessment of Hunton’s data collection procedures, we may reach a different conclusion.

  8. As a writer’s list of co-authors grows, the misstatements, mistakes, and errors that were small or that had no bearing on the quality of the papers grow as well (do you think that the 100 or so people he has co-authored papers with are going to be as critical of him in his later papers as someone who has no working relationship with him? Please, it’s just human nature). As time goes on, the people who have worked with an author are more likely to forgive or overlook these small mistakes; thus the mistakes grow as more papers are written, and the reputation of the writer grows as well, making it harder for peers to publicly challenge the work (do you really think the thousand or so people who cited his work are now going to challenge it after they have used it to support their own papers?). This process continues until a very obvious error occurs; this is common in other fields, and that is what has happened here.

  9. I learned about this retraction and website from a colleague. I have to say that after reading the Markopolos posts I get the distinct impression that this person was involved with the retraction process. This entire set of posts made me genuinely disappointed. The retraction is not yet in print, but Markopolos already performed in-depth analyses of the paper and made claims about fraud within a day of it appearing on this website. I presume that Markopolos is someone who has worked to make this retraction happen, and now is making an unambiguous effort to spread claims of massive fraud. The name chosen by the anonymous poster says all we need to know. This person is taking the position that he is revealing some huge fraud by taking anonymous and uncorroborated shots at a paper. I therefore have read this with a critical eye. The replies to Stone suggest that there was no intent to say that fraud occurred. This appears entirely disingenuous after reading the posts.
    It is clear that the editor will not say anything and the journal published a statement that also says little or nothing. The fact that no position is being taken by the journal leads me to wonder if the editor and journal are worried that they lack proper justification for a retraction and will face legal troubles in the future if they say anything at all. If the journal had actual evidence of wrongdoing, then I would expect that the journal and editor would have the confidence to print it.

    The authors are the only ones willing to provide an explanation, and the editor and journal do not dispute it in any way. I am disposed to accept the authors’ explanation in this circumstance. If we accept the only explanation available, then it looks as if there was an inconsequential error and the editor’s response to the error was to assume that the authors could not be trusted and to request that a contractual agreement be violated. The authors reacted in the most legal and ethical manner possible. They elected not to violate their legally binding contract with the firm, and now they are paying a heavy price for having integrity.

    If the journal had problems with the confidentiality agreement, this was a matter that should have been addressed during the very early stages of review. The journal could have simply stated that they did not want to review a paper with a confidential firm involved. Attacking the sample years after publication appears to be a witch hunt, and a lot of damage is being done for a small error in a sample description. One editor is deciding to undo the prior decisions of another editor, an associate editor, and the blind reviewers because the description of the number of offices was incorrect. This does not sound like much more than a typo, but the response appears colossal.
    How this all translates into the frauds that Markopolos claims escapes my logic. Some of the Markopolos claims are:
    1. There is an issue with the total number of offices in the study.
    It emerges from the existing posts that this number depends totally on the description of how the sample of offices is determined. It also seems clear that the firm itself decided how many offices to involve and reported this figure back to the authors. The authors accepted the number given by the firm. The number must not have been surprising to any of the blind reviewers, associate editors, or the editor of the Accounting Review during a long review process. Perhaps it was discussed at length during the review process, or perhaps everyone found no issues with it. The reviewers were likely some of the finest audit/fraud researchers in the world. The Markopolos discussion of offices does not inform me of anything other than the fact that Markopolos seems to be going to great efforts to find a way to make fraud claims about this paper when the journal itself did not make any such claims.
    2. There are lots of questions in the posts about why the firm would choose particular sets of offices or particular clients in each office.
    I doubt even the authors know why the firm made certain choices. Researchers could make similar attacks about every field experiment conducted in the past. Why didn’t the firm make different choices? We do not know, but this does not mean anything other than the fact that the firm made choices. They were their choices to make. Authors do not have the capacity to control firms’ decisions in field experiments.
    3. Markopolos claims that an audit firm would never allow a field experiment like this.
    Well, that is a bold and somewhat conceited statement. This sounds like Markopolos believes that no one can do a field study with an audit firm because Markopolos has not been able to do a field study. Firms get involved in field studies because they want to learn something – such as which fraud brainstorming technique is best. A rare few authors have the talents needed to get firms to commit. I suppose that Hunton spent as many as 10 years working with a firm in order to gain the trust needed to conduct a field experiment. To make a blanket statement that nobody could get a firm to conduct a field study is absurd, and it points again to the motive of Markopolos.
    4. Markopolos claims that the firm should be willing to let others know who they are, even though there was a signed confidentiality agreement.
    If the firm insisted that it remain confidential in order to carry out the study, and the study revealed information about the firm that could potentially cause it to be sued, then why would a huge multinational firm with much to lose allow its agreement to be violated simply because an editor wants it to be violated? It would not, and the attorneys would never let it happen. Lawyers run the show when research is being done within firms. The attorneys make the decisions, and they have no concerns about helping keep a paper they have never heard of and do not care about in print. They also do not spend their time reading Accounting Review papers for errors. Confidentiality agreements are much more routine in field studies than in other areas of research because the research can expose the firms to real and significant losses.
    5. Markopolos claims that researchers should not trust their coauthors and we should all assume that coauthors are lying if they do anything that involves confidentiality.
    I doubt that any good researchers in any field spend their time interrogating their coauthors. People working with experienced and established researchers must place trust in them in order to accomplish anything. The view that there is always fraud if we do not watch over the shoulders of our colleagues is sad, and I can see why Dr. Stone indicated such sadness in his post.

    If the Accounting Review is unwilling to publish field studies, interviews, case studies, etc., that require confidentiality of the participants, then the journal needs to take this position for everyone, and take it only during the review process. Of course, this means that our profession will no longer be able to conduct this type of research. Imagine interviewing ten CEOs about how they manage earnings for an exceedingly valuable paper, but having to inform them that they will need to give their names and email addresses to a journal editor and some reviewers so that they can be contacted to ensure that they actually participated in the interviews. What is the likelihood they would participate? They would not. Perhaps we should not trust that any experiments with accounting students in the Accounting Review were even conducted, and should ask for all authors to provide the names of the students for proof. Without any level of trust, our profession will lose the most insightful research, and we will all have to purchase our data from a database retailer. I am already assuming that the firm involved in this field study will never agree to another study because of the editor’s reaction, and we have all lost the opportunity to ever work with this firm at any meaningful level again.

    1. “Margaret”

      Since your post is somewhat lengthy and a good portion of it isn’t formatted in a way that facilitates response, I have pasted portions of your comments below, in quotations and preceded by MCS, and I have responded in turn.

      MCS: “I have to say that after reading the Markopolos posts I get the distinct impression that this person was involved with the retraction process. This entire set of posts made me genuinely disappointed. The retraction is not yet in print, but Markopolos already performed in-depth analyses of the paper and made claims about fraud within a day of it appearing on this website. I presume that Markopolos is someone who has worked to make this retraction happen, and now is making an unambiguous effort to spread claims of massive fraud. The name chosen by the anonymous poster says all we need to know. This person is taking the position that he is revealing some huge fraud by taking anonymous and uncorroborated shots at a paper. I therefore have read this with a critical eye. The replies to Stone suggest that there was no intent to say that fraud occurred. This appears entirely disingenuous after reading the posts.”

      I’m sorry, but how is November 27 to December 2 one day? I had no involvement in the retraction process, nor am I associated with the AAA in any capacity beyond simple membership. The analysis I cite in my first question took me an hour or two to run. I spent another hour or two looking through the paper in depth to make sure I hadn’t missed anything (I had read it previously, as this paper was a pretty big deal when it was first published), which is when I came up with my other questions. I did all of this on December 2, although prior to that date I did give the issue some thought (after the authors’ statement was posted here on November 27).

      Also, where did I say, “this must be fraud”? My pseudonym is appropriate because the real Markopolos frequently pointed out that some publicly available information about Madoff didn’t add up. I have pointed out some things that don’t add up related to this retraction, but I completely left open the possibility of an innocuous alternative explanation (i.e., my pseudonym isn’t perfect because, unlike the real Markopolos, I didn’t definitively link the inconsistencies to fraud). In fact, I explicitly dissociated myself from that aspect of my pseudonym, so your suggestion that I am being disingenuous is a colossal straw man.

      Also, since my questions are based on publicly available data, they can be corroborated by anyone. My anonymity is irrelevant to their validity, as they rest solely on logic and that public data. To say that my claims are uncorroborated is completely inaccurate, especially since everyone can verify my findings for themselves. Indeed, I’m aware that several people have replicated my analysis, and the findings appear to be robust. I am not aware of anyone using Audit Analytics or any other publicly available database to show that Hunton and Gold’s claims are possible.

      MCS: “It is clear that the editor will not say anything and the journal published a statement that also says little or nothing. The fact that no position is being taken by the journal leads me to wonder if the editor and journal are worried that they lack proper justification for a retraction and will face legal troubles in the future if they say anything at all. If the journal had actual evidence of wrongdoing, then I would expect that the journal and editor would have the confidence to print it.”

      The journal retracted the paper; I would hardly call that “no position.” The journal could very well have sufficient basis for a retraction while still wanting to avoid potential lawsuits that could result from public statements. While this is speculation on my part, the fact that the journal retracted the paper suggests that they felt they had a sufficient legal basis for doing so. My questions, if unanswered, would provide sufficient justification for a retraction because they call into question the validity of the data, regardless of intent (i.e., the data could be invalid even absent fraud). However, even if the journal raised similar questions and the authors were unable to provide sufficient answers, that doesn’t mean the journal wouldn’t also be worried about facing legal challenges to public statements (e.g., a lawsuit alleging libel).

      MCS: “The authors are the only ones willing to provide an explanation, and the editor and journal do not dispute it in any way. I am disposed to accept the authors’ explanation in this circumstance. If we accept the only explanation available, then it looks as if there was an inconsequential error and the editor’s response to the error was to assume that the authors could not be trusted and request that a contractual agreement be violated.”

      I’ve already demonstrated the insufficiency of the authors’ explanation. Their explanation cannot be true without the paper having another significant inaccuracy that would probably be sufficient to invalidate the data analysis currently presented in the paper. How is that inconsequential? Also, if I was able to identify such an inconsistency using publicly available data, wouldn’t the editor and others involved in the retraction process have been better able to spot such inconsistencies, assuming they had some access to non-public information when investigating the retraction? In addition, the fact that the journal chose not to incorporate any part of the authors’ statement in the published retraction notice is an implicit disagreement.

      MCS: “The authors reacted in the most legal and ethical manner possible. They elected not to violate their legally binding contract with the firm, and now they are paying a heavy price for having integrity. If the journal had problems with the confidentiality agreement, this was a matter that should have been addressed during the very early stages of review. The journal could have simply stated that they did not want to review a paper with a confidential firm involved.”

      You expect the review process to catch everything? Companies restate their financial statements, in part, because auditors don’t provide complete assurance. Just because the reviewers and accepting editors didn’t catch something doesn’t mean we shouldn’t scrutinize it ex post.

      No one has suggested that the authors should have violated their confidentiality agreement. However, I find it quite odd that the firm would be so concerned about their identity that they wouldn’t reveal themselves to a journal editor. Couldn’t Harry Evans have signed a confidentiality agreement in order to validate that the study took place and to ask a few questions? How does that expose the firm to incremental legal liability?

      Also, within our discipline as a whole, shouldn’t authors (and co-authors) have the responsibility to design their research in a way that will settle questions about the validity of the research? Research that cannot withstand reasonable scrutiny has no place in our journals. Instead, such “research” merits the same level of scientific respect as the fields of astrology and alchemy.

      MCS: “Attacking the sample years after publication appears to be a witch hunt, and a lot of damage is being done for a small error in a sample description. One editor is deciding to undo the prior decisions of another editor, another associate editor, and the blind reviewers because the description of the number of offices was incorrect. This does not sound like much more than a typo but the response appears colossal.”

      Perhaps you need to read up on the definitions of “witch hunt” and “typo”… Based on publicly available information, I demonstrated how the follow-up explanation given by the authors cannot be true unless the paper contains some other significant misstatement that would undermine the validity of the study as a whole. The decision to retract the paper seems quite appropriate in that light.

      Regarding your numbered points:

      1. Your response provides no alternative explanation to my verifiable, objective finding that the study could not have taken place without the paper containing another factual misstatement about the sample that would be much more serious (i.e. actually threaten the validity of the study). The fact that the reviewers and editors don’t appear to have taken issue with the reported number of offices is not a justification for ignoring the issue now.

      Using your logic, I can imagine you buying stock in Allied Crude Vegetable Oil Refining Corporation after a whistleblower pointed out that their reported inventory of soybean oil vastly exceeded global production. After all, the auditors and inspectors didn’t take issue with the company’s reported inventory…

      Also, my supposed “great efforts” were actually quite simple…

      2. Of course authors don’t control firms’ decisions in field studies. I raised those questions because they seem like very odd decisions for a firm to make. That is, they don’t really conform to reasonable expectations of how a firm would behave in such a field study (and in some cases the decisions differ significantly from what we would expect).

      3. Perhaps you should have spent a bit more time reading my statements. Where did I say that an audit firm would never allow a field experiment like this? I simply argued that the coordination that would be required to carry out such a study (as described by the authors) would be monumental. Given the current state of affairs, it seems implausible (though by no means impossible) that one of the accounting firms would subject themselves to such an effort, especially since the study could have been designed (by the firm itself!) in a way that would have required much less effort. Once again you are constructing a massive straw man in an attempt to sidestep the issue at hand.

      4. Unsurprisingly, you’ve built another straw man. Where did I state or imply that I expected the firm to simply rescind their confidentiality agreement and make their identity public knowledge? I stated that it seemed reasonable that if the firm was willing to participate in the study in the first place, that they would also be willing to enter into confidentiality agreements with the second author and an editor. For example, how would the firm face any additional legal exposure by having Harry Evans sign a confidentiality agreement in order to know the identity of the firm and verify that the study took place as described?

      5. Coauthors are jointly responsible for their published work. Of course you should trust the people you work with. However, as I have already stated, research should be designed in a manner that allows it to withstand scrutiny. Having an expectation that coauthors are responsible for verifying the integrity of the data reported in a paper is one such safeguard. If anything, you are doing your coauthors a favor, because, should any question arise, you are able to state that you personally verified the integrity of the data (i.e. you are able to defend your coauthors).

      While I find it sad that companies commit fraud, that doesn’t mean we should get rid of assurance. Similarly, while I am saddened by the possibility of research fraud, I choose not to live in ignorance. As I stated earlier, “research” that is unverifiable and shielded from scrutiny doesn’t deserve our scientific respect.

      On a more positive note, I believe that it is completely possible to design field studies and experiments and to use private data sets in our research in ways that do stand up to scrutiny. In the case of Hunton and Gold, for example, the confidentiality agreement could have included the second author and could have allowed for an editor to sign a similar agreement in order to verify that the study took place as described.

      “Harry”

    2. FYI for respondents to this topic, the Harry S. Markopolos posting to this forum is not the same person as Harry Markopolos who turned in Bernard Madoff to the SEC. I know this to be true because I’m Harry Markopolos (Whitman, MA) and my middle initial is not S. In fact, I don’t use a middle initial figuring that with a last name like mine, what’s the point? I just wanted to clarify this so that no one thinks he is me or I am him.

  10. I find it odd that only Hunton had access to the data he used for the retracted paper. Hunton’s papers are unmatched in terms of participants and data quality in general. I wonder whether his coauthors noticed that Hunton collected the data by himself, or whether they simply never had communications with Hunton’s industry contact. No doubt, they will defend him for personal reasons. But I do not think they will lie about seeing the subjects fill out the instruments or calling a Big 4 partner, etc. The typical person seems more likely to see Hunton’s data collection procedures through rose-tinted glasses than to completely misrepresent what they know. I think that consistently opaque data collection procedures are more likely when there is fraud than when there is not, hence their diagnosticity.

    Reputation building over time makes sense here – to the extent prior coauthors and editors publish an author’s work, I think it’s rational to believe the work is of high quality and free of errors, regardless of your personal connection to the coauthor. But I’d like to see coauthors’ answers to my questions, i.e., did they see the data collection? Did they have communication with Hunton’s industry contacts? I agree that if you asked them whether they trusted Hunton, you’d get biased and hard-to-interpret information.

  11. @”Margaret Chase Smith” – I am not sure what the editors would do if they had suspicion of fraud. I think they’d need very convincing evidence to say anything that even indicates suspicion. Personally, I find the behavior we observed to be consistent with them neither being sure there was no fraud nor sure there was. Their loss function is not symmetric – Type I and Type II errors have a very different cost associated with them. A lot of evidence of fraud would be necessary before they would say anything about it.

    My priors suggest that the editors would have defended the mistake as being a small one if they had a very low fraud risk assessment. Would they even have requested to look at the data? I am not sure. But regardless, I think they’d need a very high probability (e.g. >85%) that there was fraud before they’d say anything.

    Regarding confidential data sets, there are indeed costs associated with making sure all data is honest. But there are obvious benefits. I think quality control could be improved substantially; in fact, that seems to be rather the point of this website.

  12. Our university and management school have an IRB process that requires the following for any field research –

    “Maintaining confidentiality of information collected from research participants means that only the investigator(s) or specific individuals of the research team identified in the approved IRB application can identify a participant or participants. The researchers must make all possible efforts to prevent anyone other than the research team that has been provided with IRB consent from identifying participants or matching participants to their responses.”

    Several of those posting to this thread appear to believe that there is no problem with violating IRB rules, which I assume are similar across universities. How can field researchers share confidential information with an editor when the researchers are legally bound not to do so by their own institution’s rules? This is not a simple issue. In the case presented here there was also a second contract with a business that required confidentiality. It looks like these authors were caught between a rock and hard place.

    My interpretation is that our IRB rules make it impossible for faculty at my university to share participant information. I would also assume that if an author joined a paper after the IRB documents and firm contracts were written, the author who signed the agreements would be legally prohibited from sharing confidential information with the added author. This is certainly a very challenging problem.

    1. @Kevin, I don’t believe allowing an editor to contact the firm would violate the wording in your IRB agreement. Your IRB agreement says that each participant’s identity needs to be protected. Knowing that a firm says it participated will not allow anyone, including the editor, to identify who participated in the study. In Hunton and Gold’s study, over 2,600 people are claimed to have participated. If I knew those participants worked for, say, KPMG, I would still have tens of thousands of people who worked for KPMG at the time who did not participate. How would knowing which of the firms participated allow me to identify participants and therefore violate the IRB agreement?

      The bigger issue that few of the above comments have addressed is the Audit Analytics data which shows that the study described in Hunton and Gold is almost surely not possible. That issue is the proverbial elephant in the room.

      I’ve looked at Audit Analytics data enough to know that the number of any one audit firm’s offices in the world with publicly traded clients is likely far less than 150 (around 100), so how can Hunton and Gold sample all of a firm’s offices with publicly traded clients, then eliminate offices using the other two filters (i.e., sales less than $1 billion and only clients in the three industries they list on p. 919), and still come up with 150 offices? Audit Analytics shows that using these three filters (one or more publicly traded clients, sales less than $1 billion, and in the retail, manufacturing, or service industries), the number of offices is around 50. Anyone who wants to criticize the retraction needs to explain or refute this data.
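      To make the disputed arithmetic concrete, the office-counting exercise can be sketched in a few lines. This is a toy illustration with invented records and field names (`office`, `revenue`, `industry`, `sec_registrant`); it is not Audit Analytics output and only shows the shape of the computation:

```python
# Hypothetical engagement-level records resembling an Audit Analytics extract.
# Field names and values are illustrative assumptions, not real data.
engagements = [
    {"office": "New York", "revenue": 0.4e9, "industry": "retail", "sec_registrant": True},
    {"office": "Chicago", "revenue": 2.5e9, "industry": "finance", "sec_registrant": True},
    {"office": "Dallas", "revenue": 0.8e9, "industry": "manufacturing", "sec_registrant": True},
    {"office": "New York", "revenue": 0.9e9, "industry": "service", "sec_registrant": True},
    {"office": "Boston", "revenue": 0.2e9, "industry": "service", "sec_registrant": False},
]

# The three filters described in the paper's sample selection:
# (1) publicly traded clients (approximated here as SEC registrants),
# (2) gross revenues under $1 billion,
# (3) retail, manufacturing, or service industries.
ELIGIBLE_INDUSTRIES = {"retail", "manufacturing", "service"}

eligible_offices = {
    e["office"]
    for e in engagements
    if e["sec_registrant"]
    and e["revenue"] < 1e9
    and e["industry"] in ELIGIBLE_INDUSTRIES
}

# The quantity in dispute: how many distinct offices have at least
# one eligible client?  In this toy data, two do.
print(len(eligible_offices))
```

      The disagreement is over what this count comes to when the same query is run against the real Audit Analytics data, not over the mechanics of the query itself.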

      The possibility that the study included non-SEC clients is the only potential explanation I can think of. However, that explanation doesn’t seem reasonable given that the study was a SAS 99 study, coordinated by a U.S. partner with one memo describing SAS 99. If the study were done that way, non-SEC filers in a foreign country would be very confused by the memo. However, an empirical question that one could answer is whether there are any firms with 150 offices that include SEC and non-SEC clients with less than $1 billion in sales in the three industries. To answer that question, it probably wouldn’t be that hard to call all the Big Four firms and ask them how many worldwide offices they have with publicly traded clients that have sales less than $1 billion in the retail, manufacturing, and service industries. If that number is less than 150 for all the firms, the emperor is definitely naked.

      If I were investigating this retraction for research fraud, I would be calling the Big Four firms and asking that simple question. If the number is big enough to randomly sample and get 150 offices, then Hunton still needs to explain how a SAS 99 study was done using participants who never worked on an SEC engagement and probably have never heard of SAS 99, using a memo discussing guidance in SAS 99 (see p. 920) in, say, the Johannesburg office.

      1. If Hunton made up his data, why did he do such a sloppy job making it up? Lowering the number of offices would have been very easy. So anyone who claims that he made it up also needs to claim that he did a poor job researching what a reasonable number of U.S. offices for a Big 4 firm would be. I am not saying it’s unlikely, but it is something to think about before concluding that Hunton is a fraud.

  13. @Kevin – “How can field researchers share confidential information with an editor when the researchers are legally bound not to do so by their own institution’s rules?”

    By my reading of the IRB excerpt, they could do so by simply removing identifying information. The journal could then perform several tests on the data, for instance a test to see if the observations appear to have been generated by a random number sampler.

    This of course ignores confidentiality agreements with a business. But I am not sure it’s the IRB that’s stopping the data from being shared. And no one is faulting the authors solely for having confidential data.
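    One such test can be sketched quickly. As a hypothetical example (not anything the journal actually did or proposed), a Benford’s-law screen compares the leading-digit frequencies of reported figures against the distribution expected of naturally occurring data:

```python
import math
from collections import Counter


def benford_statistic(values):
    """Chi-square statistic of leading digits against Benford's law.

    A crude screen of the kind suggested above: fabricated numbers often
    have leading-digit frequencies that depart from the Benford
    distribution.  A large statistic flags the data for closer
    inspection; it is not proof of fabrication.
    """
    # Extract the first significant digit of each nonzero value.
    leading = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(leading)
    n = len(leading)
    stat = 0.0
    for d in range(1, 10):
        # Benford's law: P(first digit = d) = log10(1 + 1/d).
        expected = n * math.log10(1 + 1 / d)
        stat += (counts.get(d, 0) - expected) ** 2 / expected
    return stat
```

    A statistic above the 5% critical value for eight degrees of freedom (about 15.51) would flag the data for a closer look; like any forensic screen, it produces false positives and is only a starting point, not a verdict.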

  14. Some of the arguments here are getting bitter, and Margaret and Harry do not seem to enjoy each other, but many discussions center on the number of locations. It felt prudent to me to read the sample description in the paper. The authors said:
    “Design. The current study was conducted at the request of a large audit firm. Management of the firm was interested in evaluating the efficacy of their existing fraud brainstorming procedure—open discussion. After discussing many approaches with the researchers, the firm decided to compare the relative effectiveness of nominal group and round robin brainstorming to open discussion brainstorming. Managing partners at the national level of the CPA firm used a random method to select 150 offices from among the entire set of all U.S. offices to participate in the study and randomly assigned each local office to one of three treatment conditions – nominal group, round robin, or open discussion brainstorming. At each local office, the managing partner randomly selected one client, subject to the following criteria imposed at the national level: the clients must be in the retail, manufacturing, and service industries; only clients with a December 31, 2007 fiscal yearend would be considered; only publicly listed companies would be included; and only firms with gross revenues less than $1 billion would be allowed. Fifty of the randomly selected offices conducted fraud risk brainstorming sessions using open discussion, 50 offices used the nominal group procedure, and the remaining 50 used the round robin procedure.”

    The authors also said:

    “The guidance conveyed information that was common to all treatments; namely, the memorandum asked the auditors to consider potential fraud risks in the context of guidance provided by
    SAS No. 99, the fraud triangle, and firm policies and procedures related to fraud detection and prevention – all of which were described in the memorandum. Every member of the brainstorming teams was asked to sit alone, think about fraud risks related to the client and write down as many fraud risks as they could identify. The memorandum asked the auditors to keep track of the number of minutes they spent considering and documenting potential fraud risks during this preparatory stage. Finally, each team member was asked to document time spent along with the identified fraud risks to a designated administrative assistant at the local office. In addition to the common instructions, treatment groups were provided with the following specific information.”

    The information about the nature of the clients was provided by the accounting firm, and the authors would not have had reason to suspect that the partners were being dishonest with them.

    I notice that the authors never said SEC registrants. That is said only by people posting to this site. If the authors were to add the word “international” to their description of the design, there no longer appears to be a problem: edit “from among the entire set of all U.S. offices” to “from among the entire set of all U.S. and international offices.” PwC, for example, has over 750 offices in more than 150 countries, and the other firms are similar. When I read the paper and the explanation of the error, I do not understand the idea that the number of offices is not realistic. The firms have enough offices for 150 to be selected if the clients are not SEC registrants, and there was no statement that all clients were SEC registrants. Furthermore, “in the context of guidance provided by SAS 99” can merely indicate that participants were informed that they needed to brainstorm about fraud indicators.

    What am I missing? There are many assumptions that are not in the paper that must be made in order to support an assumption of fraud. Several people on this site appear very eager to make such assumptions without knowing what discussions have occurred at the AAA.

    1. I don’t feel any bitterness toward Margaret. I am simply making an appeal to logic in my posts and I don’t think I have been emotional about it at all. In fact, I have tried to be quite respectful by taking the time to respond in detail to the points that Margaret and others have raised.
      Regarding your other comments, I would preface my response by saying that a number of your points have already been addressed in detail by myself or other commenters. However, I will take some time to address them again.

      The only assumption I made in my analysis was that the clients were SEC registrants. I was quite upfront about it and I explained my reasoning. The first paragraph you quote states that the clients were publicly listed. I have previously acknowledged that non-SEC registrants could have been included unbeknownst to the authors. However, I have also pointed out that this would invalidate the data analysis as reported. I have also pointed out numerous reasons why it would have been difficult to include non-SEC registrant clients without the knowledge of the authors. One of the most compelling reasons that such an event is unlikely is stated in the second paragraph you quote: all audit teams received the same guidance memo. If we introduce multiple regulatory regimes into the equation (by allowing for non-SEC registrants), it would be incredibly difficult, if not impossible, to have a universal guidance memo that would satisfy all relevant regulatory requirements across regulatory and legal regimes.

      Further, let’s assume for a moment that not all the clients were SEC registrants. According to the paper, they would still have to (1) be publicly listed, (2) have a December 31, 2007 year-end, (3) have less than $1 billion in gross revenues, (4) be from the retail, manufacturing, and service industries, and (5) be clients in a regulatory environment in which the firm would perform fraud brainstorming (in some regulatory environments where fraud brainstorming is not required of the auditor, the auditor may choose not to hold a fraud brainstorming session in order to avoid potential increases in liability). Still, let’s ignore the fifth restriction for a moment. We know that the remaining four restrictions eliminate the vast majority of offices that serve SEC registrants (based on Audit Analytics), so it’s reasonable to expect that they would similarly restrict offices that serve publicly listed companies that are not SEC registrants. If that is the case, it’s unlikely that any of the Big Four would have enough eligible offices from which they could sample 150. Validity issues aside, your explanation is still mathematically implausible.

      As somewhat of a side note, where did you find the total number of PwC’s international offices? After having gone through their list of offices to some degree, I’m highly doubtful that they have 750 offices with audit services (many of their locations are listed as multiple offices because they often have separate legal entities for their auditing, advisory, and/or tax services in foreign countries).

      What other assumptions did I or others make in analyzing the number of offices reported by Hunton and Gold?

      In addition, I haven’t assumed that the data was fabricated. However, in my opinion, here are the plausible explanations for the inconsistencies that I and others have pointed out:

      1. The firm misled the authors and included offices that serve non-SEC registrants.

      I’ve already explained why this is less likely to be the case. In addition, wouldn’t the firm have told the authors as much when they inquired about the original misstatement? My analysis wasn’t of the original statement made in the paper (150 U.S. offices, which would be impossible since none of the firms have even close to 150 U.S. offices), it was of the explanation for that misstatement provided by the authors (i.e. that international offices were included). Given the tone of the paper and the motivation (focused on SAS 99), we are given no reason to believe that the clients are not SEC registrants.

      However, assuming that this is what happened, the data analysis in the paper would have failed to control for significant variations in legal and regulatory regimes across audits, making the reported analysis invalid.

      2. The firm misled the authors about some other aspect of the sample.

      This is similar to #1. While this is also a possibility, it seems like the authors would have noticed, given their involvement as described in the paper. Also, most variation from the description in the paper would invalidate the data analysis.

      3. The data was fabricated.

      In any of these cases, the AAA would have been completely justified in retracting the paper (regardless of their actual motive for the retraction). I would love to hear plausible alternative explanations that I’ve omitted, since I don’t like any of these alternatives and the possibility of the third is quite frightening. Given the scariness, I would really love to see objective evidence that would rule out the third alternative. However, based on the public statement from the authors, it seems that even Anna Gold can’t rule out the third alternative…

  15. I know of at least one international accounting/consulting group (ACAL) that (in 2010, at least) shared a corporate umbrella with a retail services firm with numerous U.S. branches (H&R Block). I’ve heard of another operation based in Europe where the flagship firm was a private bank that also got into other financial services (I’m less sure of the facts here, so I won’t name them). The firm in the study is probably not either of these, but I’m sure there are any number of similar cases. In addition, there are “alliance” systems, particularly in the professions, where there is no common corporate governance, but the entities share a common brand name and marketing organization, plus (hopefully) quality control. The point is that it isn’t always easy to figure out how many “offices” a “firm” has.

    1. Auditing tends to be one of the more regulated industries, which makes it much easier to track international affiliates. Unless you can demonstrate that Audit Analytics does not have a complete (or even near-complete) listing of offices that audit SEC registrants (which seems highly unlikely), my analysis still stands.

  16. It seems to me that the collection of data at arm’s length through an agreement between a firm and an academic researcher is necessarily subject to some risk. The standards for choosing random samples, for example, can depart from the rigorous standards of probabilistic randomness, based on other factors like ready availability, staff familiarity, the ability and cooperativeness of staff working with various clients, and many others. It is possible, therefore, that the delivery of data to the academic researcher under such an agreement may entail some departure from the design of the original experiment. In addition, because the firm is eager to see the analysis of data that it feels is adequate for its purposes, there may even be some deficiency in how the transferred data is described to the researcher. It is possible, whether likely or not I do not know, that the researcher(s) may be complicit in accepting such deficient datasets for fear of losing what they consider to be a still valuable source of information and/or material with publication potential.

    When evidence is presented and controversy grows concerning a resultant publication, it is perfectly understandable to me that the firm would not want to get further involved or extend the confidentiality agreement to an editor or anyone else. It has nothing to gain and only credibility to lose. It has probably gotten what it wants out of the analysis if it thinks the dataset is “good enough,” and it does not want to take on any additional risk of reputational or even legal damage. Therefore the fact that the researchers cannot produce the data should not be considered any evidence of fraud whatsoever. The retraction of the article in the manner in which it has been carried out seems to be the appropriate remedy.

    I think that a wide-ranging witch hunt involving the authors and coauthors is not at all indicated, but this is a reminder to authors and editors of the complex interface between the academic and real worlds, and of the need to verify the accuracy of data descriptions when data passes from one to the other.

    1. Observer. I concur with your opinion that field study data have inherent risks, and the editor may have determined it was necessary to retract the paper because the risks of further errors in data were too high. This argument is rational, and there appears to be evidence to support a retraction of this type. To prevent the speculation evident on this site, the journal should disclose these matters in its retraction.

      It is a fact that anonymous actors in the AAA are using this retraction as conclusive evidence of fraud and to levy further accusations of fraud against the authors, and many of us are witnessing this. I concur as well with the conclusions from Accyphd when he states that there are alternatives to obliging a violation of data confidentiality, and confidentiality of data itself does not signify fraud. We are left to conjecture whether the authors offered other means of data verification, but the AAA insisted only on violating agreements. If this is the case, then we are further left to conjecture why the AAA would take this stance and whether it is appropriate to require authors to violate contractual agreements.

      1. As an aside, I am curious what is meant by conjecture. If I have priors and see data, I update my beliefs. This process is inductive. With enough conviction or data, my beliefs can be very strong, and in the limiting case I have certainty (i.e. 1’s and 0’s and deduction). Alas, the comfort of the unit interval’s infimum and supremum is a rarity.

        With Hunton, we are dealing with incomplete information. So I cannot see how we can do anything but conjecture. And if we cannot do this or if this is a bad thing, how are we to police our journals for fraudsters?
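        The updating described above can be sketched numerically. Here is a minimal Bayes-rule example; the prior, the likelihoods, and the "evidence" are all hypothetical numbers chosen purely for illustration, not claims about the actual case:

```python
# Illustrative only: every number below is a hypothetical assumption,
# not a claim about the actual case.
def bayes_update(prior, p_evidence_given_fraud, p_evidence_given_clean):
    """Posterior P(fraud | evidence) via Bayes' rule."""
    numer = p_evidence_given_fraud * prior
    denom = numer + p_evidence_given_clean * (1 - prior)
    return numer / denom

prior = 0.01  # assumed base rate of fraud among published studies
# Suppose an unexplained inconsistency is judged ten times more likely
# under fraud (0.50) than under an honest error (0.05):
posterior = bayes_update(prior, 0.50, 0.05)
print(f"posterior = {posterior:.3f}")  # a 1% prior rises to roughly 9%
```

        Even evidence this one-sided leaves the posterior far from certainty, which is the commenter's point about the rarity of 1's and 0's.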

    2. This is a reasonable hypothesis that explains the data quite well.

      I think the biggest cause for concern in Hunton’s case is that his ability to get data has been uniquely exceptional. And it’s not just from auditors; it’s from analysts, directors, etc. And his sample sizes are huge. I am unaware of any other professor in the history of accounting research who has been able to do what Hunton has done.

      It seems reasonable that someone with this level of access would have a strong relationship with the audit firm in question. I am thus surprised that a firm would offer no help to Hunton. Also, consider that his coauthor was completely detached from the data collection process. Has Hunton done this before?

      My beliefs are that the probability of fraud is very low for any randomly selected accounting scholar. Hunton’s is a unique situation that increases the probability of fraud, but not drastically. Nevertheless, to the extent that we ought to devote any resources to fraud detection, I think a risk-based audit would suggest we look at Hunton’s work before others’.

      To the extent we as accounting scholars agree that policing for fraud is a worthwhile endeavor, we will have to keep our complaining to a minimum and realize that it’s likely not fraud, but nevertheless go along with the process to maintain the integrity of our journals.

  17. I think Harry brings up some valid points. As this seems to have turned into a detailed discussion about the paper, I would like to bring up something else that is concerning. In Table 1, the average audit team is roughly 16 auditors (total participants less specialists), but the average client revenues are only $545 million. This seems out of whack, since the average audit team is VERY large for what are average-sized clients. It has been over a decade since I carried an audit bag, but I had multiple clients with revenues in excess of $1 billion that never had an audit team larger than 10 people. Did anyone else notice this, or am I simply reading the table incorrectly?

    Also, page 922 states that all of the participants (over 2,600 auditors) correctly answered the manipulation check. Achieving 100% accuracy with so many participants seems unlikely.
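    To put a rough number on that intuition: assuming (hypothetically) that responses are independent and that each auditor passes the manipulation check with a generous 99% probability, the chance that all of them pass is vanishingly small:

```python
# Back-of-the-envelope check with assumed numbers: independence and a
# 99% individual pass rate are both hypothetical, chosen to be generous.
n = 2600       # approximate participant count cited above
p_pass = 0.99  # assumed individual pass rate

p_all_pass = p_pass ** n
print(f"P(all {n} pass) = {p_all_pass:.2e}")  # on the order of 1e-12
```

    Under these assumptions, a 100% pass rate would be a roughly one-in-a-trillion event; only a near-perfect individual pass rate (or dropped observations) could make it plausible.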

  18. A legal view

    a. If authors reveal the participating firm without the firm’s express consent, the authors’ actions violate federal law and university regulation and expose the authors to common law action by the firm. Depending upon the state, state laws could also be violated.

    b. Only the participating firm has the authority to allow itself to be revealed. The authors have no legal authority to make this decision and cannot be compelled by requests from editors or other parties.

    c. Authors can make the choice to breach confidentiality only when such action is needed to avoid imminent physical harm or public endangerment, or in response to a subpoena, but a subpoena may be insufficient to compel breaches of confidentiality in some cases.

    d. While there is no statutory law for editorial policies, editorial teams are expected to follow general guidelines created by their umbrella associations, such as ABS and AMA. Editors oblige errata when small errors are identified, but large errors are grounds for retraction. Allegations of fraud and related retractions necessitate a full investigation by the authors’ institutions, and editorial teams are not to make determinations of fraud based upon personal opinions. If the editorial team made a determination of fraud without an institutional investigation, there would be evidence of a lack of due process. If the editors retracted the paper for a significant error, as the authors claim, then any mention of potential fraud by the editorial team would be defamation and a breach of the editors’ duty of confidentiality. If the editorial team made no determination of fraud but prepared a disclosure that would lead a reasonable person to believe that the retraction involved fraud, this could be evidence of intent to cause harm. The comments on this blog could be used as prima facie evidence that the disclosure suggests fraud, which also provides grounds for defamation.

    It is likely that a journal of this stature is aware of the law related to confidentiality and is also aware of the need for due process in a decision of this import, but I would recommend that the authors seek legal counsel to evaluate potential civil claims.

  19. I’ve long been suspicious of Hunton’s work. He surveys people in settings he controls, insists on being the one to analyze the data he collects, and every time the results perfectly fit his arguments. It would be so easy for him to tweak the data a little here and there, given that no one audits his data collection and analysis. It has always just been weird that everything falls so perfectly into place in his research all the time. It would not be possible to launch a full investigation into the validity of his prior research, since there is no audit trail (which he makes sure of), but if it were possible, I’d bet anything that there would be a lot more retractions of his research.

  20. “If you misunderstood this, how do I know that you didn’t misunderstand something else.” Did the editors really end that with a period rather than a question mark? The period really gives it an accusatory tone.

    1. Jim Hunton can fix this problem very simply, by convincing his cooperating firm to get the journal’s editor to sign a nondisclosure agreement, and then provide the data to back up his claims that the study actually took place and that except for a few minor descriptive details everything else is fine. Gosh, why aren’t his numerous co-authors coming forth with supporting comments? The silence is deafening. Any way we view this, it’s tragic.

      1. I don’t believe that Hunton can fix this problem, aside from the nondisclosure issues (which by HHS standards generally cover “individuals” taking part in medical research, not the facilities or companies). If, as someone argued, Hunton and Bentley are covered by a nondisclosure/confidentiality agreement, then all any company would have to do to avoid a lawsuit of any kind would be to have everyone who does business with the company sign such an agreement, and no one would be able to sue anyone.
        I think the bigger problem for him is one of three possibilities that can’t be fixed:
        A. Someone like Harry did the math, and Hunton’s facts do not add up.
        B. The company, which was the object of the research, found flaws in the research or conclusions and wanted to set the record straight (before it was named in a lawsuit).
        C. A participant/individual involved in the research who worked for the company came forward (after leaving the company) with a different set of facts/data (possibly wanting to clear their name). This is the more likely case, as the length of time between the publication and retraction is so great.
        I also suspect that the reason Hunton’s co-authors have not come to his defense is that they, like the poster “university professor,” have taken a second look at their working relationship with Hunton and his methodology and had an epiphany.
        I agree with you that this is a very tragic event and probably could have been avoided if the peer review process were altered and improved.

  21. Boriphat raises a great point: where are Hunton’s coauthors? Why the silence? Can they say anything to validate the data used in their publications? Obviously, Anna Gold was unable to validate the data used in their paper or she would not have insisted that any statement by them state that she knew nothing about the data. It seems that any papers where the co-author(s) are unable to validate the data will have a cloud over them. You would think his coauthor(s) would want to be on record saying they know the data is valid in their study–if that were the case. Maybe more retractions will be forthcoming…

    1. The silence is simple. Every coauthor can see that this board is being used as a place to launch anonymous attacks. One Hunton coauthor identified himself early and made the comment that no evidence of fraud has been presented. He was immediately attacked and dismissed as a coauthor trying to defend another coauthor. No one can defend themselves against a demand like “prove that every participant in the portion of a sample that Hunton collected was real, and prove that Hunton did not alter any data point he collected.” How could the validity of every data point ever be proven with certainty? Even if every coauthor indicates that they have seen no evidence of any wrongdoing and have full faith in Hunton’s ethics (and this appears to be the case), no one can provide absolute proof about their coauthors’ contributions. It is an attack for which there is no defense in any field of study involving surveys with human participants, and it seems likely that this is the attack that will be used on Hunton’s papers. Hunton’s coauthors are addressing this through professional channels and communications with AAA committees, not by posting unsupported accusations on blogs. Finding this blog today was enough to decide that I will not read it again. Academics should not waste time here but should instead seek to require the AAA to support its decision.

      1. This site is good. It’s the comment threads that leave much to be desired, as they have been taken over by disgruntled people in need of an outlet for their frustration. The solution is simple: require that people register with their real names.

      2. Hi Peter Bloomberg (and your real name is?)! We are frustrated by people who try to hide the truth by trolling this page just like you. We actually care deeply about science. We like honesty!

      3. I do not assume that this post refers to my posts, but just in case:

        I was the poster that identified Stone as a coauthor and by no means did I intend to suggest his comments should be dismissed (quite the contrary!). Furthermore, I tried my best to make it clear that the “absolute assurance” described above is unreasonable and unattainable. I merely was pointing out that there are many sources of information that can substantially change one’s beliefs regarding the probability of fraud (assuming similar prior information to my own), and that an investigation ought to be done despite the very low probability that Hunton is a fraudster (e.g. due to relative risk, etc).

        Nevertheless, I have found that there is good reason not to publicly discuss these issues and I would delete my posts if I could. I hope the truth comes out and that Hunton gets a fair evaluation and justice if he has been wrongfully accused.

      4. Silence: Care to explain why Hunton just quit his job at Bentley? It doesn’t take a rocket scientist to connect the dots here.

  22. I’m going to go out on a limb here and guess that ‘silence’, who is apparently against anonymity (how ironic), is actually Hunton.

  23. Professor, that is a very good observation.

    My involvement is that a friend of mine had worked with Hunton. I was pointing out the legal parallels between a scientific paper and legal papers: any legal papers presented to a court must rest on the “four corners,” meaning that all the information presented in an affidavit must be presentable to a later court for review and questioning. Thus, if I present a search warrant to a court, and in the affidavit I say I have information (which could be a phone conversation, witness statements, etc.), I must be able to produce that evidence for later judicial review and questioning. Any paper presented in a journal, I believe, must meet those same standards. Hunton refused to present that evidence (for whatever personal or perceived legal reason, and he might well think he has a valid reason). In a court, refusal to present the evidence to a later court would constitute grounds for dismissal (much the same as a retraction of a paper).

    Dan (a co-author on several papers, as pointed out by PhD) did come to the defense of Hunton, but only by attacking Harry’s well-reasoned and logical arguments and calling Harry names (it’s called the “dodo head” argument, which most of us gave up in grade school). I have no problem with Dan coming to the defense of Hunton, but he should have drawn on his past association with Hunton and given examples of Hunton’s, from what I can tell, very good work.

    No one that I have read so far has implied fraud, just a rational questioning of the paper (something that should have been done in the peer review process). As with any publication, the need for peer reviewers is very great; the process is long and has few rewards, the pressure to complete the review is very heavy, and things slip by. It happens (maybe if the reviewer(s) had more time they could have sent the paper back for further clarification; maybe Harry should have been the reviewer).

    The poster “Silence” presented an argument in quotes which I cannot find in any of the posts in this thread (a straw-man argument). “Silence” also claims that Hunton’s supporters are using professional channels and direct conversation with the AAA to support him; that is a political means (aka the “Rolodex defense,” where you call in favors from people in your Rolodex), not a rational defense of a scientific position. I do agree that the AAA should present a little more information, but then they were probably working on the advice of a lawyer, and it is their publication.

    My previous post was in response to Hunton’s decision to resign. I have never believed in the merits of public statements. I also found it odd that Bentley would go from a very spirited defense of Hunton to accepting the resignation of an award-winning and well-respected professor over some misstatements in a retracted paper (they should have had access to the information that Hunton would not disclose to the AAA and known that any allegations of fraud, or whatever, were not true). Hunton’s decision to resign is, however, very curious. If it was just a numbers thing, deal with it, take the heat, and move on. I have lost bigger cases over my mistakes, and under more public pressure than a few words on a blog. Why resign?

  24. Who still thinks Hunton was wronged? Bentley launched an investigation, prompting his immediate resignation from a $300K+ job. Who wouldn’t want to clear their name if they were free from blame? It’s sad that some of these folks actually teach auditing and/or forensic accounting. I hope that, as a group, accounting academics become more understanding of why auditors fail to detect fraud at their clients. We lacked skepticism and the willingness to investigate the Hunton red flags for a long time.

  25. “The second author was neither involved in administering the experiment nor in receiving the data from the CPA firm. The second author does not know the identity of the CPA firm or the coordinating partner at the CPA firm. The second author is not a party to the confidentiality agreement between the lead author and the CPA firm.” Explanation of Retraction (Hunton & Gold 2010, Retraction Watch, November 27, 2012).

    Who gathered the data in all of Hunton’s other studies?

    Who knew the identity of the firms that provided the data?

    Who was party to the confidentiality agreement with these firms?

    Who received data directly from the firms and not from Hunton?

    Who performed the data analysis?

    Recent studies and their sample descriptions are listed in alphabetical order below:

    Asare, K.N., Abdolmohammadi, M.J. & Hunton, J.E. 2011, “The Influence of Corporate Governance Ratings on Buy-Side Analysts’ Earnings Forecast Certainty: Evidence from the United States and the United Kingdom”, Behavioral Research in Accounting, vol. 23, no. 2, pp. 1-25.
    ***17 analysts from the U.K. and 19 from the U.S. ***
    Bierstaker, J.L., Hunton, J.E. & Thibodeau, J.C. 2009, “Do Client-Prepared Internal Control Documentation and Business Process Flowcharts Help or Hinder an Auditor’s Ability to Identify Missing Controls?”, Auditing, vol. 28, no. 1, pp. 79-94.
    ***395 experienced auditors who were newly hired by one of the Big 4 CPA firms***
    Hunton, J.E., Hoitash, R., Thibodeau, J.C. 2011, “The Relationship between Perceived Tone at the Top and Earnings Quality”, Contemporary Accounting Research, vol. 28, no. 4, pp. 1190.
    ***206 financial reporting managers***
    Hunton, J.E., Libby, D., Mauldin, E. & Wheeler, P. 2010, “Continuous monitoring and the status quo effect”, International Journal of Accounting Information Systems, vol. 11, no. 3, pp. 239-252.
    ***61 experienced managers***
    Hunton, J.E., Libby, R. & Mazza, C.L. 2006, “Financial Reporting Transparency and Earnings Management”, The Accounting Review, vol. 81, no. 1, pp. 135-157.
    ***62 participants include 59 financial managers and three CEOs***
    Hunton, J.E., Mauldin, E.G. & Wheeler, P.R. 2008, “Potential Functional and Dysfunctional Effects of Continuous Monitoring”, The Accounting Review, vol. 83, no. 6, pp. 1551-1569.
    ***72 corporate managers***
    Hunton, J.E. & Rose, J.M. 2012, “Will corporate directors engage in bias arbitrage to curry favor with shareholders?”, Journal of Accounting and Public Policy, vol. 31, no. 4, pp. 432.
    ***71 experienced directors who serve on the boards of mostly very large or large companies***
    Hunton, J.E. & Rose, J.M. 2011, “Effects of Anonymous Whistle-Blowing and Perceived Reputation Threats on Investigations of Whistle-Blowing Allegations by Audit Committee Members”, The Journal of Management Studies, vol. 48, no. 1, pp. 75.
    ***83 experienced audit committee board members***
    Hunton, J.E. & Rose, J.M. 2008, “Can directors’ self-interests influence accounting choices?”, Accounting, Organizations and Society, vol. 33, no. 7, pp. 783.
    ***88 experienced audit committee members***
    Hunton, J.E., Wright, A.M. & Wright, S. 2007, “The Potential Impact of More Frequent Financial Reporting and Assurance: User, Preparer, and Auditor Assessments”, Journal of Emerging Technologies in Accounting, vol. 4, pp. 47-67.
    ***215 participants, 84 auditors, 30 controllers, and 80 investors***
    Hunton, J.E., Wright, A.M. & Wright, S. 2004, “Are Financial Auditors Overconfident in Their Ability to Assess Risks Associated with Enterprise Resource Planning Systems?”, Journal of Information Systems, vol. 18, no. 2, pp. 7-28.
    ***165 auditors***
    Hunton, J., Arnold, V. & Reck, J.L. 2010, “Decision Aid Reliance: A Longitudinal Field Study Involving Professional Buy-Side Financial Analysts”, Contemporary Accounting Research, vol. 27, no. 4, pp. 997.
    ***27 buy-side analysts***
    LIBBY, R., HUNTON, J.E., TAN, H. & SEYBERT, N. 2008, “Relationship Incentives and the Optimistic/Pessimistic Pattern in Analysts’ Forecasts”, Journal of Accounting Research, vol. 46, no. 1, pp. 173-198.
    ***47 experienced sell-side financial analysts***
    Libby, R., Hun-Tong Tan & Hunton, J.E. 2006, “Does the Form of Management’s Earnings Guidance Affect Analysts’ Earnings Forecasts?”, The Accounting Review, vol. 81, no. 1, pp. 207-225.
    ***95 sell-side analysts***
    LIBBY, R., NELSON, M.W. & HUNTON, J.E. 2006, “Recognition v. Disclosure, Auditor Tolerance for Misstatement, and the Reliability of Stock-Compensation and Lease Information”, Journal of Accounting Research, vol. 44, no. 3, pp. 533-560.
    ***44 Big 4 partners***
    Mazza, C.R., Hunton, J.E. & McEwen, R.A. 2011, “Fair Value (U.S. GAAP) and Entity-Specific (IFRS) Measurements for Performance Obligations: The Potential Mitigating Effect of Benchmarks on Earnings Management”, The Journal of Behavioral Finance, vol. 12, no. 2, pp. 68.
    ***86 managers***
    McEwen, R.A., Mazza, C.R. & Hunton, J.E. 2008, “Effects of Managerial Discretion in Fair Value Accounting Regulation and Motivational Incentives to “Go Along” with Management on Analysts’ Expectations and Judgments”, The Journal of Behavioral Finance, vol. 9, no. 4, pp. 240.
    ***44 experienced financial analysts***
    Smith, A.L., Baxter, R.J., Boss, S.R. & Hunton, J.E. 2012, “The Dark Side of Online Knowledge Sharing”, Journal of Information Systems, vol. 26, no. 2, pp. 77.
    ***187 programmers***
    Tan, H., Libby, R. & Hunton, J.E. 2010, “When Do Analysts Adjust for Biases in Management Guidance? Effects of Guidance Track Record and Analysts’ Incentives”, Contemporary Accounting Research, vol. 27, no. 1, pp. 187.
    ***47 experienced sell-side financial analysts***

    1. I cannot resist commenting. This is a much better way of asking the questions I think would be informative (though it would not give perfect assurance, of course).

      In light of LDS’s comment, I’d like to compare the manipulation checks and incoherent/missing observations in each of these studies to those in Hunton’s coauthors’ papers (with similar subjects): e.g., do Hunton’s coauthors have a higher-than-expected pass rate when working with him compared to working with others or solo-authoring? The same goes for dropped observations.

      My priors say this would be a very diagnostic test.
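      One way such a test could be run is a simple two-proportion comparison; the counts below are made-up placeholders, not real data from any paper:

```python
import math

# Sketch of the proposed diagnostic with fabricated placeholder counts:
# compare manipulation-check pass rates across two sets of papers using
# a two-proportion z-test (normal approximation).
def two_proportion_z(pass1, n1, pass2, n2):
    p1, p2 = pass1 / n1, pass2 / n2
    p_pool = (pass1 + pass2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Hypothetical: 400/400 pass in one set of papers, 370/400 in the other
z, p = two_proportion_z(400, 400, 370, 400)
print(f"z = {z:.2f}, two-sided p = {p:.2e}")
```

      A persistent, significant gap across many paired comparisons would be far more diagnostic than any single paper's numbers.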

    2. Not to mention the duplication of a 1994–95 study using a different subset, but with similar results and similar word usage.

  26. Wow! Are these journal editors going to ask for the data for these studies? I feel bad for the co-authors who have been caught up in this issue.

    1. I doubt the editors will ask for data, but don’t be surprised if other researchers go over his other papers with a fine-tooth comb. I do wonder how many people will be willing to cite his papers.

      Note: I’m still waiting for the supporters to explain the resignation.

      1. TAR, JAR, CAR, etc. Career-making publications. All of these editors SHOULD be asking for the data, in fairness to all who strive to publish in these high-ranking journals. But wait: reputable journals often indicate “Data is available upon request.” Any of us should be able to request this data! Isn’t replication a normal part of academic research?

        1. I once submitted a paper to TAR that directly refuted another study earlier published in TAR. We requested the data for that earlier study and the authors refused to provide it to us despite TAR’s stated policy.

  27. Had I been Hunton, I would have made every effort to defend my study if the study were sound and reliable. I would have defended it until my last breath. Any sensible researcher would do so, or her/his academic career would be over. Editors, CPA firm, and the university: see you in court! That is common sense.
    Look at what Hunton did: voluntary retraction and voluntary resignation??? That tells us something.

    After Hunton’s resignation, those aliases supporting him here suddenly disappeared. Now it is Hunton’s co-authors who are getting nervous.

    1. From the report
      “Dr. Hunton abruptly resigned from Bentley and within several weeks sold his home and moved out of state. He repeatedly declined to participate in this investigation because, according to his lawyer, he has a “crippling” medical condition.”

      My guess that the ‘ “crippling” medical condition’ referred to in the report is the death of whatever part of the brain that controls ethical behavior.
