Do scientists need audits?

Viraj Mane
Amy Lossie

If audits work for the Internal Revenue Service, could they also work for science? We’re pleased to present a guest post from Viraj Mane, a life sciences commercialization manager in Toronto, and Amy Lossie at the National Institutes of Health, who have a unique proposal for how to improve the quality of papers: Random audits of manuscripts.

Skim articles, books, documentaries, or movies about Steve Jobs and you’ll see that ruthlessness is the sine qua non of some of our greatest business leaders. It would be naïve to assume that scientists somehow resist these universal impulses toward leadership, competition, and recognition. In the white-hot field of stem cell therapy, where promising discoveries attract millions of dollars, egregious lapses in judgment and honesty have been uncovered in Japan, Germany, and South Korea. The nature of the offenses ranged from fraudulent (plagiarism and duplication of figures) to horrifying (female subordinates coerced into donating their eggs).

When a researcher embraces deception, the consequences extend well beyond the parties involved. Former physician Andrew Wakefield published a study linking MMR vaccines to autism using overtly substandard statistical and experimental methods, while hiding that his financial compensation was tied to the very hysteria he helped unleash.

Let’s ask some hard questions. Should it have taken 12 years for the Lancet to retract Wakefield’s article? Do we exercise sufficient caution and skepticism when an author claims to have derived the newest miracle stem cell? Should we assume professional scientists have an adequate understanding of research ethics, experimental design, and transparency?

These questions don’t just apply to instances of blatant misconduct, but also to the ongoing, highly visible problem of reproducibility in science. Most often, reproducibility concerns stem from issues with statistical and data analyses. What if a study’s reproducibility and compliance with established guidelines were assessed comprehensively, before it was published?

When it comes to enforcing compliance, there is an established method that any taxpayer or corporate accountant has a healthy fear of: the audit. We propose a systematic and independent audit of research manuscripts before they are reviewed by a journal’s panel of referees and editors.

Here we outline an approach that draws on the methods of the Internal Revenue Service (IRS) and corporate auditing, adapting the concept for the unique needs of scientific research.

  • The IRS does not audit every taxpayer (thank goodness!); in fact only around 1% of taxpayers faced an audit last year. However, this low risk of audit still keeps tax code violations down to around 17% of filings. We propose the formation of an examining body mandated to audit a small percentage of submitted scientific manuscripts. Unlike tax audits that are triggered by predictable taxpayer filing behaviors, scientific audits would be randomized so that all authors perceive an equal risk of being examined (a minimal sketch of such a random draw follows this list).
  • To maintain impartiality, the examining body must be independent of author-affiliated institutions (universities, hospitals, etc.) and journal editorial boards.
  • The examiners should be empowered to request raw data from the authors, perform their own verification of the authors’ data analysis and conclusions, and send an audit report to the editors of the intended academic journal.
  • One possible embodiment of the examining body is a collective of consulting specialists including statisticians, professional graphic artists, audio engineers, etc. This group would offer multidisciplinary, ad hoc expertise that cannot realistically be replicated by a journal’s editorial board.
  • To reinforce the presence and value of these audits, the published article should include a brief acknowledgement that the manuscript was independently reviewed by the examining body, along with a link to the unabridged audit report.
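
To make the randomized-selection idea concrete, here is a minimal sketch in Python; the 1% rate (borrowed from the IRS figure above) and the function name are illustrative assumptions on our part, not a prescription:

```python
import secrets

AUDIT_RATE = 0.01  # illustrative 1% audit rate, echoing the IRS figure above

def select_for_audit(submitted_manuscripts):
    """Randomly flag a small fraction of submissions for audit.

    Every manuscript faces the same independent chance of selection,
    so no author or journal behavior can raise or lower the risk.
    """
    # secrets draws from the OS entropy pool, so selections cannot
    # be predicted or gamed ahead of submission.
    return [m for m in submitted_manuscripts
            if secrets.randbelow(10_000) < int(AUDIT_RATE * 10_000)]
```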

The Independent Audit System has several benefits. For one, it provides an unbiased verification that experiments are conducted ethically, results are calculated appropriately, and conclusions are based on the dataset rather than potential newsworthiness. It could also prevent bad papers from getting out — a pre-publication audit of Andrew Wakefield’s discredited study could have revealed statistical irregularities, conflicts of interest and methodological flaws, and prevented serious harms to public health.

An audit system also yields opportunities for specialists in image manipulation, statistics, big data, etc. to participate in scientific review on an as-needed basis. This group would be a logical home for the domain expertise of postdoctoral fellows, many of whom have already participated in traditional peer review. We envision a flexible, lean organization, unencumbered by traditional limitations on geography or work schedules. The group, by definition, would draw on diverse skill sets. Finally, the system could ultimately allow authors to opt in to an audit as an expression of confidence in their experimental design and conclusions, in the same way that architects and building managers willingly undergo audits for LEED certification.

Our proposed auditing body also has the advantage of providing an independent third party review, rather than imposing extra commitments onto the editorial boards of participating journals, oversight committees of universities, or overworked reviewers.

There’s another big question: Who pays for this audit? Funding could take many forms – major organizations for scientific inquiry such as the National Academy of Sciences, the American Association for the Advancement of Science, the Gates Foundation, the National Institutes of Health, the National Science Foundation and the Office of Research Integrity make logical partners, and could stipulate that their grant recipients would be subject to audit. Employers of the actual auditors could also contribute to this endeavor by simply maintaining salary coverage during the brief periods that their employees spend contributing to scientific review. Lastly, small fees could be obtained from the authors under review, as well as from their intended journal. These fees would be offset by the provision of a novel deliverable: a professional scientific audit report.

We are not the first to propose enhanced scrutiny of scientific publications; the Wakefield scandal ultimately led to calls for “an external regulator overseeing research integrity,” although the suggested placement of that position within each university strikes us as a potential conflict of interest. While there is no guarantee that our team of auditors would catch the issues missed by traditional peer review, the opportunity to deliver objective, multidisciplinary, critical review is an important step forward in scientific transparency. Unlike traditional peer reviewers, who may prioritize this responsibility well below their own endeavors (or delegate it to a graduate student), each ad hoc auditing panel would be convened for a specific manuscript and could therefore operate within a reasonable window of time, minimizing the delay experienced by the authors.

A final issue is access: how does the auditing body find and obtain pre-publication manuscripts in a fair and transparent manner? We propose a random number generator that selects a journal, a day, and a time for inquiry. The selected journal board is then asked to submit the corresponding non-reviewed manuscript to the audit body. This avoids the need to access a mega-repository of manuscripts from multiple journals. If journals choose not to comply, that too can be publicized, perhaps here at Retraction Watch.
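
To make those selection mechanics concrete, here is a minimal sketch in Python; the journal names, the 30-day look-back window, and the optional seed are illustrative assumptions on our part rather than part of the proposal:

```python
import random
from datetime import date, timedelta

# Hypothetical pool of participating journals; the names are placeholders.
JOURNALS = ["Journal A", "Journal B", "Journal C"]

def draw_audit_request(window_days=30, seed=None):
    """Randomly pick a journal, a day, and a time of day for inquiry.

    The selected journal is then asked to hand over whichever non-reviewed
    manuscript corresponds to that moment, so no central mega-repository
    of manuscripts is ever needed.
    """
    rng = random.Random(seed)
    journal = rng.choice(JOURNALS)
    day = date.today() - timedelta(days=rng.randrange(window_days))
    minute = rng.randrange(24 * 60)
    return journal, day.isoformat(), "{:02d}:{:02d}".format(minute // 60, minute % 60)
```

If the seed used for each draw were published afterwards, anyone could re-run the selection and confirm that no journal was singled out deliberately.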

The auditing body we propose is designed to elevate the standards for gaining, analyzing, and disseminating information. As stated expertly by Begley and Ioannidis, “It is not at all clear how reasonable recommendations will be implemented or monitored while they remain voluntary.” By avoiding change we simply enable future scientific misconduct, which leads the public to cast aspersions far beyond the actual perpetrators. If we wish to build a better relationship with the public, then increased transparency is non-negotiable. Sunlight is said to be the best disinfectant.

Viraj Mane, PhD, provides project management, fundraising, and commercialization services to scientists in Toronto, and is also a designer and purveyor of absurd clothing. Amy Lossie, PhD, is a Health Science Administrator at NIH and the co-founder of the Beautiful You MRKH Foundation, Inc.

Update 2/6/16 4:34 p.m. eastern: The authors would like to acknowledge the previous work in this area by Adil E. Shamoo; the concept of random auditing has also been raised in the past by Drummond Rennie, deputy editor (West) at JAMA.


30 thoughts on “Do scientists need audits?”

  1. A logical extension of this fine proposal is that it should be possible – as it is with the IRS – to “nominate” someone for audit by providing a reasonable justification (for example, evidence of image manipulation) that they [or in this case the paper] should be audited.

    1. Thank you for your note, and I totally agree. As you point out, transparency should extend to the auditing body as well, and a blend of randomized audits and non-random, *justifiable* audits may enhance its value.

  2. Absolutely!!!! By a team of private detectives, lawyers, CPAs, etc. Not PhDs, MDs, etc.: they are in the “tank”.

    But more importantly, dropping the hammer when a fraudster is caught.

  3. A disastrous suggestion. What is a “life sciences commercialization manager”, anyway? In my opinion, another parasite on the system. More lawyers, more parasites, for science’s carcass. Science already has its auditing system: it’s called peer review. But that system has become corrupted. There is nothing wrong with peer review (i.e., the concept itself), but the management of peer review, and of PPPR, needs refinement.

      1. anon@anon, making all publicly funded science open source is what I have been advocating for years. But how do you force the hand of the major publishers, whose primary objective is profit? The current system allows commercial publishers to exploit copyright at unregulated prices, or to charge abominable prices for open access. If you go to a supermarket in a developed country, there is price regulation on most commodities and a regulatory body to oversee that this does not get abused. As it stands, the extraordinary profits we see from the top 4 publishers exist precisely because there is no such regulatory body. These publishers are their own regulatory body, in terms of pricing, peer review, rules, and regulations. So, I agree, peer review is broken (as I suggested in my comment and elsewhere) and needs fixing. But not by this new wave of non-scientists.

    1. The key control in an audit is its independence. Peer review can’t work as an audit precisely because it is done by academics.

  4. Would like to see some comment on operational issues. As just a few examples: How will you assert authority to see all of the data that an audit suggests is needed, and what is the “record examined”; what do you plan to do if/when the audit turns up uncomfortable evidence of possible falsification; will you have authority to report to proper officials; can you protect auditors from possible litigation? One positive aspect is that audits might get institutions serious about enforcing data retention policies. Finally, I suspect that Adil Shamoo may have published useful comments, as he pushed audits many years ago.

  5. As is often the case, a big problem will be the funding. Here in Geneva, we have a biostatistics service that we can contact either for planning an experiment and designing it to answer the question we ask, or for reviewing our work and ensuring we are using the right statistics for the design of our experiment. The solution may be to impose this pre- and post-review at the beginning of an experiment and before sending it for publication. An internal review makes access to raw data and to the authors’ explanations easier. The internal reviewers should be liable for any misconduct they did not filter. I am sure that any university is able to pay for that, especially if it gives a guarantee of quality.

  6. We may need to budget for this in the grant application. Auditing can be expensive! How much would it be? I need to include this in my next grant application!

  7. Ed is absolutely right: fraudsters should be prosecuted and judged not by scientists, but according to criminal law.

  8. I admit to feeling uncomfortable with the idea of auditing scientists, period. More importantly, I sense a certain degree of cultural myopia in this type of proposal. Given the international scope of the scientific enterprise, how would such a system function across borders?

    1. Great question. Logistically and practically speaking, I would envision an auditing body that functions in just one region (i.e., one country). There, the concept can be validated, and if it’s successful, we would then have a template to expand to other regions.

  9. Revenge is not the desired outcome. We don’t have established data standards that make fraud impossible. For example, we are now in the data-rich age. Many drug studies require video of each and every mouse/rat/animal trial that forms a data point/bit in a study dataset. Why not set up a data certification system, an adjunct to peer review, such that data submitted for publication through the Open Data Review meets standards of auditability and open analysis (anyone can re-do the statistics and see what they get)? Many experiments are automated. How much more work is it to put a cheap little video camera in each animal cage? Already we have petabytes of security video. Why not petabytes of pet video?

  10. I had a similar idea, but it works like this: as part of grant funding from a major agency, such as NIH, researchers will be required to repeat a major finding from another NIH-funded researcher’s published work. The cost of doing this work will be included in the grant funding and it will be considered a responsibility of every such researcher to participate. The finding to be repeated will be selected by the grant review committee that is most familiar with the work to be repeated. It does not have to be an extensive repetition of published work, simply one of the key findings from an agency-funded work. Failure to replicate will trigger another two efforts from independent labs. Multiple failures to replicate will count against future funding. Feedback from the lab originating the result will be included in this process, to enable the most informed attempts at replication. It seems to me that this process, if carried out correctly, will provide a disincentive to all sorts of dodgy behavior currently devaluing the literature.

  11. Upon rereading the post, I see that my concern with the cross-border application of the audit may not be much of an issue when it comes to traditional numerical data, though it may be somewhat problematic in other disciplines when applied to, for example, descriptions of events or behavior that are written in other languages or coded with unique software and the like. Be that as it may, and as well-meaning as this proposal may be, the possibility of being tax-audited or, for that matter, audited in any other respect, is so disconcerting to me that I still feel very uncomfortable with the proposal.

  12. “[…] a pre-publication audit of Andrew Wakefield’s discredited study could have revealed statistical irregularities, conflicts of interest and methodological flaws, and prevented serious harms to public health.”

    Really? So why did peer review not work in that case? I was under the impression that The Lancet is a very serious journal (IF = 45) with a solid and reliable editorial board…

    I do not want to be nasty, but first of all, audit the journals, the publishers, and the Editors-in-Chief.

  13. I agree with Miguel. Not so much with the auditing, but with the spot-auditing. It has to be an all-or-nothing approach: either all are audited, or none, not some random choice, because random is always biased, and those who are audited will be furious with the auditors and with those who are not audited. As a scientist myself, if I knew that I was randomly selected for auditing by some non-scientists, I would be furious. We really need to alert scientists that these types of initiatives are sprouting, because we will become the victims, so we are thankful that RW has brought this to our attention. What also surprises me is to read, in a few of the comments above, supposed scientists calling for additional grant money or funding to support the audit as part of their budget. Please understand that money for such budgets and such audits would most likely be taxpayers’ money. If I were a taxpayer, I would definitely not be pleased to know that my taxes were used to fund the research, and then also to fund some scheme to check it again. A firm no to this scheme from me. There are some excellent free services like PubPeer available that allow scientists to audit scientists through their final products, published papers. As PubPeer strengthens and grows, it will serve as a powerful deterrent, i.e., a form of auditing.

  14. Scientific publications are already audited – by the scientific community. Bogus and worthless papers are ignored, or if they are exceptionally outrageous they are flagged by concerned readers and steps are taken to correct/retract them. What should really be audited are grant applications, since that is where dubious practices can pay off big time and have a major impact on the zero-sum game of research funding. If bogus work is used to secure funding, it should be prosecuted as felonious fraud.

  15. The idea of an audit looks appealing at first, but the culture of scientific criticism is already self-cleansing. Besides, we all learn from honest errors.
    The real question is how the Yoshitaka Fujiis, Diederik Stapels, and Hendrik Schöns can be detected and suitably punished. The audit system here does not give an answer to that.

    1. Bringing the thugs to the sunlight is indeed the goal.

      The PLoS journals appear to be leading on open data – if PLoS rules are followed, it would seem that any scientist can request the data behind any paper, past and present.

      An audit should be no big deal at all – what could be wrong about showing the source data?

  16. The real problem with the scientific literature is not fraud but non-reproducibility. This results from a lack of any incentive for accuracy and stringency in performing the work, because there are currently no real checks on the reliability of published research. Most published work is never replicated, and in fact there are negative incentives to do this (no direct funding, for example). The system I outlined above would provide the incentive to replicate and reward those who stringently tested their findings before publication. Science culture is not self-testing right now; that is the problem. Anyone who actually works in science and has to try to replicate others’ findings will tell you that. I can’t count how many “key findings” I’ve never been able to replicate.

  17. I am delighted by the discussion and the many good questions raised on this website. There are many scholarly papers written on these topics. Between 1990 and 2000, two dozen of my colleagues and I published over thirty papers and a book on the subject, and held three international conferences in the US and Europe. The best way to find the publications on a specific topic is to Google them. If you find you need a copy of some of my papers, please write to me and I will send it to you.

    1. That sounds very interesting. Indeed, your recent paper “Data Audit as a Way to Prevent/Contain Misconduct” (2013) matches this thread exactly. The key point, at least for me, is that “data audit” and “audits of manuscripts” are two different things.

      Could you please paste here the DOIs of the most important references?

    2. Yes, Adil, you did pioneer this view in the 1990s and beyond. You even hosted the Second International Conference on Research Policies and Quality Assurance, in Rome, Italy (in May 1991, which I attended), at which you keynoted this pioneering view.

  18. Thanks everyone for continuing this important discussion. There are certainly some other issues, such as tools for enforcement or sanctions, that still need to be addressed (but in my opinion, these are outside the scope of the small auditing body I propose).

    Dr. Shamoo and many others have clearly laid the groundwork for this concept. My intention was to add a modern and entrepreneurial spin to the logistical side of it. So for anyone who may want to hear a bit more detail about how reviewers would join this effort, please check out my interview with Canada’s CBC Radio: https://www.youtube.com/watch?v=BmeHr-8cSzI

    In it, I also discuss my contention that the outdated desire to restrict manuscript review to only academic reviewers is misguided and arrogant.

  19. Audits can be too expensive to perform on a regular basis.
    Industry uses good manufacturing, laboratory, and documentation practices (GMP, GLP, and GDP) as a routine matter; why not replicate this in publicly funded unis?
    To date, no Canadian uni has instituted anything remotely similar. Maybe time for a change? No GLP/GDP, no funding renewal… that would be a great incentive.
