“We should err on the side of protecting people’s reputation:” Management journal changes policy to avoid fraud

Patrick Wright, via the University of South Carolina

How can academic journals ensure the integrity of the data they publish? For one journal, the key is close statistical scrutiny, the kind that revealed crucial problems in the work of recent high-profile fraudsters such as Anil Potti. Patrick Wright of the University of South Carolina, editor-in-chief of the Journal of Management, recently authored an editorial about how he’s taken those lessons to heart — and why he believes retractions don’t always hurt a journal’s reputation. 

RW: Can you take us through the changes in the editorial policy of your journal?

PW: In large part it is two-fold. First, we allow reviewers to self-identify when they do not feel fully capable of providing a strong critique of statistical analyses. They could always do that in their comments to the editor (and often did), but we wanted to be sure that Action Editors could identify situations where a paper was being evaluated by reviewers who were not skilled in the statistical techniques used. When that happens, Action Editors can bring in a third reviewer to focus on those analyses.

Second, we created a subgroup of reviewers who are VERY skilled in statistical techniques and can provide thorough critiques of statistical analyses. Even a skilled reviewer may not read the tables in great detail, particularly to look for minor irregularities, because they have to do a comprehensive review of the entire paper. So, on papers that are invited to revise and resubmit, we now add one of those statistical experts to evaluate the statistics, and only the statistics, of the revised paper. This allows them to focus on looking for any irregularities. It is also efficient: we do not waste their time on the 80% of papers that will be rejected for other reasons, and they can concentrate on the 20% that stand a chance of being published (usually half of those get rejected after revision).

RW: Since you’ve implemented the policy, how many reviewers have noted that they don’t feel comfortable evaluating the statistics of a paper? Have you ever had to shift the reviewer panel in order to get people who can evaluate the statistics rigorously?

PW: We certainly have had some. But again, remember that most papers are rejected for other reasons (lack of theoretical contribution, poor design, poor writing, etc.), so Action Editors only ask for an expert in the situations where the reviewers liked the paper enough to suggest a revision, but stated that they could not thoroughly evaluate the analyses. When this happens, we add the third reviewer before inviting the revision (this has happened twice since the new policy was put into place). Note that regardless of the reviewers’ self-evaluations, all revised papers will get a statistical expert assigned at that stage. So the statistical expert can be brought in on the initial submission if both reviewers express a lack of confidence in their ability to evaluate the statistics, or at the revision stage.

RW: Did a particular incident prompt these changes? We haven’t covered any retractions from the Journal of Management since we started in 2010.

PW: No, I have been blessed to not have to deal with any formal complaints to the journal during my editorship. But a few of our papers were receiving attention on PubPeer, and I was being contacted by the authors. In all cases the authors made corrections, but in none of the cases did the corrections change the nature of the results or conclusions. Yet these authors were extremely upset at having their reputations questioned by an anonymous post.

RW: You also lay out a procedure for investigations at the journal. Can you tell us more about this?

PW: To be honest, I was aware that JOM had signed on to the Committee on Publication Ethics (COPE) process when I took over as editor, but was not really aware of what the process entailed. When I began working on the editorial I went to the COPE site and read through the suggested process. I was really impressed at how the process works in ways that maintain the confidentiality (and reputation) of both the authors and the accusers. The only downside is the time…going through a preliminary investigation within the journal and then turning it over to the authors’ institutions for a thorough internal investigation is a time-consuming process. But better to take a long time to get to the right outcome than to rush and risk unnecessarily and unfairly hurting someone’s reputation.

RW: There is a perception that retractions make journals look bad, but you say in your editorial that this is far from the truth. We have seen first-hand that editors can be reluctant to retract papers. Why do you believe editors, at your journal and elsewhere, are not reluctant to retract papers?

PW: Obviously I can’t speak for other editors, but the ones I have talked to have never expressed a reluctance to retract a paper that was demonstrably wrong. There are a number of reasons for this. First, our primary goal is to use the journal to publish research that moves the science in a positive direction. If we publish something that is detrimental to science, we have failed to meet our own goal. Second, which would do more damage to the journal’s reputation…retracting a paper we later found to be problematic, or failing to retract a paper that is problematic? I don’t think anyone in the field expects that a journal can catch 100% of the authors who may try to “fudge” their data, so they will extend grace when we recognize that situation and rectify it. However, they do expect that we would never knowingly publish such a paper, so there’s no grace for a failure to retract a problematic paper.

On the other hand, there is a reluctance to retract papers without proof that the authors actually committed an ethical breach. But this is due more to the impact on the authors than on the journal. If authors make mistakes, they can correct them through a corrigendum. This certainly does not help their reputations, but it demonstrates the courage and accountability needed to keep the scientific process pure. A journal retracting a paper, however, has a profound effect on an author’s professional reputation, so I think it is right for editors to be reluctant to retract papers unless there is clear and convincing evidence of wrongdoing. And, actually, the COPE process is the editor’s best friend, as it pretty much takes that decision out of the journal’s hands and puts it in the hands of the authors’ institution.

RW: If someone suspects misconduct, you suggest first contacting the authors, then the journal if the authors don’t assuage their concerns. Our co-founders, who have been hearing about misconduct allegations for years, have recently said potential whistleblowers shouldn’t contact authors, since truly unethical scientists will simply cover up the evidence of their misconduct, making it harder to conduct a thorough investigation. How do you respond to that concern?

PW: I guess it depends on what you think is the base rate of unethical conduct and the consequences of it. Let’s say that 0.5% of our authors are unethical to the point of not only fudging data, but then covering it up if warned about an investigation, while 5% of our authors make minor mistakes in running and/or reporting their analyses. If you contact the authors first, you might miss the one unethical author, but you don’t publicly sully the reputations of the 10 who are completely ethical. Now, if those percentages were reversed, the tradeoff might well be worth it. Call me overly optimistic, but I think the former is far more likely than the latter.
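Wright’s back-of-envelope argument can be made concrete with a quick calculation. This is a minimal sketch using the interview’s hypothetical rates; the pool of 1,000 flagged papers is an assumed figure for illustration, not something from the interview:

```python
# Base-rate tradeoff: contact authors first vs. go public first.
# Rates are the interview's hypotheticals; POOL is an assumption.
POOL = 1000
FRAUD_RATE = 0.005    # authors who fudge data and would cover it up if warned
MISTAKE_RATE = 0.05   # authors with honest, correctable analysis errors

frauds_tipped_off = round(FRAUD_RATE * POOL)   # frauds who could hide evidence
honest_spared = round(MISTAKE_RATE * POOL)     # honest authors spared a public accusation

print(f"Contacting authors first could tip off {frauds_tipped_off} fraudster(s),")
print(f"but spares {honest_spared} honest authors a public accusation:")
print(f"{honest_spared // frauds_tipped_off} reputations protected per fraud missed.")
```

At these rates the ratio is 10 to 1 in favor of private contact; if the two rates were reversed, the same arithmetic would favor going public, which is exactly the tradeoff Wright describes.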

Second, if we’re talking about cancer research with multi-million dollar research grants and where bad research may kill people, then maybe the former tradeoff does make sense. But when we’re talking about management research, I think we should err on the side of protecting people’s reputation. If a bad study gets published, over time it will be revealed (through meta-analysis) as an outlier and its impact will almost disappear. No one will have died, and there probably won’t be great costs to society. So it’s important to consider the context…what’s appropriate for medical research may not be appropriate for social science research and vice versa.

RW: Instead of contacting the authors, our co-founders suggested that potential whistleblowers post their concerns on post-publishing review websites, such as PubPeer. What do you think about that advice?

PW: I do not think that anonymously posting comments on public sites is the correct way to go. First, it unnecessarily attacks the professional reputation of authors who may (a) have made minor, correctable mistakes or (b) have made no mistakes at all. Had the “poster” contacted the author directly, the changes could have been made without any accusation of an ethical breach. Second, it does so without any repercussions for those who wrongly impugn the reputations of others. Again, had the “poster” contacted the author directly, the problem could have been rectified without any negative consequences to either party. But a wrongful accusation can be made anonymously and cause negative consequences to the author who is in the right, while having no negative consequences to the accuser who is in the wrong. That does not sound fair to me.

This does not mean that I am against public dialogue around research issues. I have had my share of public disagreements with other authors published in journals. If researchers think that something is wrong with a study, they are welcome to publish critiques in the journal, and if the journal chooses not to publish the critique, they are welcome to blog their concerns publicly. In today’s world, no critique can be completely suppressed from public exposure. But researchers confident in their skills and their critiques will not be afraid to have their names associated with their critique. So anonymous postings, to me, seem rather cowardly, and not in line with an open scientific process.

10 thoughts on ““We should err on the side of protecting people’s reputation:” Management journal changes policy to avoid fraud”

  1. “Cowardly” seems kind of harsh. Considering how many people have found real fraud in journal articles and paid a heavy price for exposing it because no one was willing or able to protect them, anonymity seems like a reasonable act of self-protection under these circumstances. In addition, I would assume that most people can distinguish between honest or dumb mistakes and intent to deceive; although the original posters might not be able to make that distinction, other people should, and can judge the commentary accordingly. This should mitigate the effect on innocents of comments about unintentional errors or mistakes (not always, but nothing works always). There is also the caveat that if something is wrong, you may not know the source, so blaming a specific author for something in a paper may not be just. If someone manipulates a whole bunch of lanes in a Western blot in Photoshop, chances are that it was not an honest mistake, and someone (or more than one someone) was trying to deceive.

    You can’t make other people or their institutions behave humanely, just as anonymous people can’t prevent other anonymous people from making claims they shouldn’t (either with ill intent or Dunning-Kruger disease), but in the absence of protection, I don’t see a better choice than anonymity.

    1. The Journal of Management is a member of the Committee on Publication Ethics (COPE). The COPE ethical guidelines make it clear that editors have a responsibility to investigate concerns that are raised anonymously online or via social media – presumably that includes PubPeer. As such, it seems odd that this editor appears to be so critical of concerns raised by anonymous commentators on PubPeer.

  2. Had the “poster” contacted the author directly, the changes could be made without any accusation of ethical breach.

    This assumes that authors are willing to make changes if only “posters” would just reach out and contact them directly. Nobody gets hurt, and everybody wins. Mike Blatt in Plant Science has expressed the same attitude.


    To be blunt, if authors made changes even half the time they should make changes upon being contacted directly (and in private), PubPeer would be a LOT more quiet.

  3. With this comment…
    I do not think that anonymously posting comments on public sites is the correct way to go. First, it unnecessarily attacks the professional reputation of authors who may (a) have made minor, correctable mistakes and (b) may have made no mistakes at all.
    …the author belies a fundamental misunderstanding of what PubPeer is and how it works.

    Not everything on PubPeer is an “attack”, and in fact their moderation system is specifically set up to avoid such discourse (I suggest reading their terms-of-service document). To write off such an effective and well-proven forum as “unfair” seems to belittle the discussion. Lots of things in this field are unfair, but arguably none more so than the continued ability of less-than-honorable scientists to publish and remain employed.

    Perhaps if this assertion were backed up by a collection of robust examples, wherein anonymous PubPeer commenters have made personal attacks that subsequently proved to be incorrect but resulted in lasting damage to an accused scientist’s career, then it might carry more weight. I suspect that such examples are extremely rare or non-existent, rendering the assertion unfounded.

  4. I would like to see some kind of standardization of raw statistical data that would make review much easier. Decoding raw or semi-processed data for statistical review is more work than deciding what statistics are appropriate. A statistical reviewer should be paid at least an honorarium because of the importance of the review results and the amount of time it can take to re-analyze a large dataset. Finally, a standard statistical appendix would help detect fraud, errors, or problems in a study. Done right, it would make it nearly impossible to defraud the journal or the audience. I vote for XML formatting of data even though it can make large datasets even more massive to export.

  5. “if we’re talking about cancer research with multi-million dollar research grants and where bad research may kill people, then maybe the former tradeoff does make sense. But when we’re talking about management research”

    Poor management and asinine workplace policies can be deadly. Somebody has to manage that multi-million dollar grant along with a staff consisting of various personalities, attributes, and talents.

  6. In effect, as implied by dr db karron, the nature of the questioned research determines how to view the concern about anonymity. From an operational standpoint, “Anonymous” at ORI meant no one knew how to contact the complainant to ask the follow-up questions that would be generated in assessing an allegation’s merit. Like allegations that clearly involved “interpretations or judgments” about data, those that needed hard follow-up support for their merit simply could not progress. The reliance on images in science changed the situation dramatically. Once a questioned image constituted experimental results, the data itself was “making the allegation.” Forensic tools could be used to assess merit; and whether the questioned data was potentially important to the results was often (and helpfully) spelled out by the authors within the text of the paper itself. (As just one example, including the questioned result in the abstract was a no-brainer!) An unequivocal image together with *context* is generally sufficient to assess merit.

    1. The claimed flaws in the Journal of Management articles can all be checked for accuracy within minutes. The claims tend to fall into one of two categories: 1) statistical results are incorrectly calculated, or 2) two sets of statistical results are inconsistent with each other (i.e., both cannot be true at the same time). As such, it is unclear why this editor is so concerned with anonymity. The accuracy of the critiques should be very easy to determine, but it does not appear that the editor is disputing the claims that have been made on PubPeer.
