KPMG government report on research integrity makes up reference involving Retraction Watch founders

An August 2023 report on research integrity by consulting firm KPMG, commissioned by an Australian government agency, contains a made-up reference, Retraction Watch has discovered.

Reference 139 of the report, “International Research Integrity Policy Scan Final Report: Compilation of information about research integrity arrangements outside Australia,” reads:

Gunsalus CK, Marcus AR, Oransky I, Stern JM. Institutional and individual factors that promote research integrity. In: Macrina FL, editor. Scientific Integrity: Text and Cases in Responsible Conduct of Research. 4th ed. Washington, DC: ASM Press; 2018. p. 53-82. 

A book with that title exists, but the four authors listed did not contribute a chapter, and the 2018 edition does not appear to contain a chapter with that title. We – Adam Marcus and Ivan Oransky – have indeed published with CK Gunsalus, but nothing resembling this reference.

We spot-checked about 20 of the other references in the report, and while there are some punctuation errors, the rest of the citations we reviewed exist. So this seemed to be a single error of unclear provenance. We wanted to let KPMG know, and to find out what they would do about it.

We contacted KPMG Australia’s press team on Jan. 28. They acknowledged receipt of our email the same day, but when we didn’t hear back a week later, we followed up. On Feb. 6, we sent some questions we wanted on-the-record answers to, and waited. 

Although as journalists we prefer to speak directly with people involved, none of this was unusual in seeking comment from companies and government agencies.

On Feb. 12, we followed up to check on progress, and were told scheduling had been difficult, and that answers would be forthcoming. We followed up on Feb. 14 and received no response. We then followed up again on Feb. 18 to say we were getting the impression the company had decided not to respond but that we would be publishing soon.

We have not heard back since, which is puzzling – if there’s a clumsy error in a single reference, why not just acknowledge it, explain it, and move on?

A spokesperson for Australia’s National Health and Medical Research Council (NHMRC), which commissioned the document, thanked us for raising the issue with them, and said the agency “will work with KPMG to correct the report.” The consulting firm had not yet contacted NHMRC, the spokesperson said.


16 thoughts on “KPMG government report on research integrity makes up reference involving Retraction Watch founders”

  1. Looks like a ChatGPT hallucination.
    I have seen similar cases where authors, title, and venue all made sense, but the combination did not exist.

    1. Some intern wrote the report using ChatGPT and the Australian Government was billed 750,000 dollars.

    2. Yeah, feels like GPT to me. It knows who is famous in this field, what a valid title in this field of study should sound like, and which journals and books are often cited in this field, and it just randomly combines these elements.

  2. The comments above are a big reason why I read Retraction Watch, i.e., to make me more aware of techniques and methods for creating and offering false information. When I read the Retraction Watch article above, I wondered why someone would manufacture a reference.
    That it could be a ChatGPT hallucination makes perfect sense! (but is not something I thought of). Now going forward, I’ll have this idea as part of my mental tools to detect misinformation. (obviously, the Retraction watch articles themselves are interesting as well). Thanks much for offering the insights above!
    Retraction Watch has influenced me in a positive way. In addition to a full-time job as a scientist in industry, I’m adjunct faculty for several universities outside the US in a non-English-speaking country. While I’m glad to review grad students’ papers, I decline the chance to be a coauthor because I can’t readily verify the origin of the data. And the reputational risk of being an author on a retracted article is far worse than the small credit for getting an article published. Retraction Watch has built this awareness in me (otherwise, I would be too trusting).

  3. This seems like only half the story. It would be nice to have more details on the context of the invented reference. The missing context is WHY invent a reference? What was the motive? What type of argument was the invented reference supporting? Does there appear to be an ideology or partisan narrative that benefited from an invented reference?

    1. To Michael’s comment: there is no “why” behind what AI writes, other than that KPMG asked ChatGPT or another generative AI to write them a report on research integrity. KPMG likely asked the AI to include references, which it did.

      AI does what it is asked. Generative AI can make up things to fit the user’s request. It did.

      1. To give additional insight on my comment: in response to a request to ChatGPT to write a holiday card without the use of “e”, one sentence it wrote was “May this season bring you closnss with family and fri4nds, and may you find comfort in all that surrounds you.”

        WHY did ChatGPT misspell “closeness” and “friends”? Because I asked it not to use “e.” AI does what it is asked. Generative AI does so creatively. (Please forgive that this comment isn’t necessarily Retraction Watch-related; it may help readers spot misinformation by explaining how AI “thinks.”)

        As people get better at using AI, it will likely be harder to spot misinformation.

  4. When I proofread or review, I often check random references to see if they really tally. Then there are the plagiarisms – I don’t catch them all, but often I see the wording does not match, or references claim implausible things due to lack of awareness of when something was written. E.g., I spotted one where someone was making a claim about money backed by gold, yet at the time the author was vociferously defending the pound sterling being backed by silver…

  5. The first thing I discovered when I tried out ChatGPT was that it made up references – in fact, on the questions I asked, it made up all of the references, which supported a demonstrably untrue narrative. So I’m not using that any more.
    This looks like someone used an LLM of some ilk to create a framework for a report and then failed to check all the references during the editing phase.
    Which of course raises the big issue: if this report is, even partially, a rewrite of LLM output, then it isn’t only the references that are problematic; the actual text of the document itself is suspect.
