What to do when you make a mistake? Advice from authors who’ve been there

After a group of researchers noticed an error that affected the analysis of a survey of psychologists working with medical teams to help pediatric patients, they didn’t just issue a retraction — they published a commentary explaining what exactly went wrong.

The error was discovered by a research assistant who was assembling a scientific poster and noticed that the data didn’t align with what was reported in the journal. The error, the authors note, was:

an honest one, a mistake of not reverse coding a portion of the data that none of the authors caught over several months of editing and conference calls. Unfortunately, this error led to misrepresentation and misinterpretation of a subset of the data, impacting the results and discussion.
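
To make the slip concrete: “reverse coding” means flipping the scale of negatively worded survey items before analysis. Here is a minimal sketch in Python with pandas, using a hypothetical item name and an assumed 1-to-5 scale (not taken from the authors’ actual survey), showing the transformation that was missed and why skipping it reverses the direction of any finding built on that item.

```python
import pandas as pd

# Hypothetical negatively worded 1-5 Likert item: higher raw scores mean
# *less* of the construct, so it must be reverse coded before analysis.
SCALE_MIN, SCALE_MAX = 1, 5

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "item_raw": [5, 4, 2, 1],  # raw survey responses (hypothetical)
})

# Reverse coding: reversed = (scale_min + scale_max) - raw
responses["item_reversed"] = (SCALE_MIN + SCALE_MAX) - responses["item_raw"]

print(responses)
# Analyzing "item_raw" where "item_reversed" was intended flips the sign of
# every effect involving this item, which is the kind of error described above.
```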

Needless to say, these authors — who use their “lessons learned” to help other researchers avoid similar missteps — earn a spot in our “doing the right thing” category. The retraction and commentary both appear in Clinical Practice in Pediatric Psychology.

Their first piece of advice in “Retraction experience, lessons learned, and recommendations for clinician researchers” — assume errors will happen, and not vice versa:

1. Be mindful that the likelihood of making errors in a number of research endeavors is high and common. Assume that errors will be made rather than not! Risk for errors is higher in our current research climate where there are often larger study teams, the members are in different locations, and may represent individuals from different disciplines with diverse skillsets.

Other advice: Assign authors overlapping tasks to avoid “gaps in accountability,” regularly check data entry and analysis, and set aside large blocks of time for research to avoid missing details. There were a few tidbits that seemed especially important, from our perspective:

Own your errors and avoid defensiveness by covering them up or diverting responsibility. Handle errors when they are discovered. Although challenging and humbling, errors should be handled and promptly corrected when discovered. Keep in mind how much worse it will be if your errors are discovered by your editor, a reader, or your institution.

Other especially noteworthy advice: Model ethical conduct for your students by doing the right thing.

After the initial shock of discovering our data analysis error at such a late hour in the publication process, it has evolved to become a source of considerable pride to use this opportunity to educate and warn our colleagues and trainees of this all-too common risk.

Primary authors should take responsibility for errors, rather than blaming students and trainees, they note:

We, as primary investigators and coinvestigators, must take full responsibility for student errors and hopefully prevent the conditions that lead to such errors by providing them with appropriate training and supervision.

Finally, they have advice for journals:

9. Editors should create a culture of support, not punishment, around reporting of errors. From personal experience, we can attest that the hardest e-mail to write is the one where you have to tell your editors about your error, knowing that it may throw off their publication plans and schedule. The support and encouragement provided by the Clinical Practice in Pediatric Psychology (CPPP) editors, including their request that we write the current article, has been crucial to this process, and their reinforcement of our honesty and integrity in revealing our error has made this process much less painful than it could have been.

10. Journal editors, associate editors, and reviewers need to pay detailed attention when reviewing manuscripts and be very suspicious when study findings do not seem to make sense. Professionals in these roles serve as a sieve and are in a position to query the authors on issues of methodology, data analysis, and interpretation. Such queries in the review process may actually lead the authors to review and reanalyze their study, correct the undiscovered errors, and eventually have their now valid findings published and shared with their professional colleagues. In our case, a reviewer did raise the question of whether or not trainees understood the concept of “burnout” given the surprising finding, which could have been a signal to the authors and editors to take a closer review of the data before publication.

Reporting such mistakes is the only responsible way to conduct research, they conclude:

Although authors may be hesitant to report errors out of fear of the risk to their professional reputation, this is highly unlikely if the authors expeditiously self-report the error and nurture an atmosphere of transparency, honesty, and integrity in the conduct of research. Only through such practice, which in all likelihood will result in a rise in retractions to our scientific and professional journals, can we ensure the integrity of our work and increase our vigilance in fostering honesty in research practice and reporting.

The authors also resubmitted a corrected version of the paper, which appeared in the same issue as the commentary, both published late last year.

First author Kristin A. Kullgren told us in a phone interview that the journal asked her to write the commentary, after editors had “extensive discussions” at the American Psychological Association’s annual conference about the issues with the paper and what other readers could learn from the experience.

There was some discussion apparently at this conference about the idea of addressing this issue on a larger scale. Editors approached us about writing this commentary.

Kullgren confessed she had “mixed feelings” about airing out the details of such a “traumatic experience”:

Doing the right thing doesn’t always feel good, even though it’s the right thing. I really weighed the pros and cons about it.

Ultimately, the “pros” won, she added:

In the end, we felt like we learned from the situation – believe me, I’ve changed my practice and how I do things, and we hope other people do as well.

So far, she said, the response has been positive, especially from clinicians trying to balance research with patient care, who say they appreciate the authors’ attempts to address ongoing issues in clinical research:

I’ve had a really good response from people.

We agree — it’s great to see authors be honest about what went wrong with their research, to help others avoid similar mistakes.

But there were a few spots in the commentary that gave us pause.

Namely, the authors note that rates of retraction in social sciences such as psychology are “much lower” than in biomedical research — we have seen data that suggest that is not true at all. Moreover, they assert that most papers are retracted for honest mistakes, while a 2012 paper we often cite notes that most retractions in biomedicine are due to misconduct.

Here’s the entire retraction notice for the original article, “Inpatient pediatric psychology consultation-liaison practice survey”:

The retraction is at the request of the authors. When analyzing the data for presentation at an upcoming conference, the error was discovered by a research assistant and immediately brought to the attention of the authors. Unfortunately, this problem was not identified in time to prevent the manuscript with the erroneous analyses from being printed and being mailed to subscribers. Specifically, one section of the article reported data on psychologist impressions of consultation liaison practice. These data were not reverse scored as intended prior to analyses. As a result, the conclusions in the Discussion were incorrect. The authors accept and share responsibility for not identifying the error prior to publication. All other analyses in the article have been reviewed and are correct.

The data regarding psychologist impressions of consultation liaison practice has been recoded, reanalyzed, and conclusions have been updated. These updated results and interpretation will be resubmitted for publication in a separate article.

All authors of the original article joined in the request for the retraction.


6 thoughts on “What to do when you make a mistake? Advice from authors who’ve been there”

  1. When you make a mistake, correct it right away. Make sure that whatever secondary gain you’ve acquired from such a mistake is surrendered or given up.

  2. With respect to item 1, it makes sense to me that the risk of errors should always be there and that such risk likely increases as data, their collection, and analyses become more and more complex. I suspect that, on the one hand, most researchers unconsciously recognize the greater-than-chance likelihood that such errors exist in their data, yet their scientific descriptions of data and their analyses typically project an air-tight degree of precision that, in many cases, might not be there. I wonder to what extent the discrepancy between our suspicions about our less-than-perfect data and the requirement to operate with the highest degree of precision and accuracy is at the heart of researchers’ reluctance to share their data (precedence of discovery, future data mining and similar issues aside).

    1. Miguel Roig
      I wonder to what extent the discrepancy between our suspicions about our less-than-perfect data and the requirement to operate with the highest degree of precision and accuracy is at the heart of researchers’ reluctance to share their data (precedence of discovery, future data mining and similar issues aside).

      This is a very astute observation! Also, if you don’t keep a tight rein on things, it’s easy to end up with multiple copies of data and analysis (e.g., Excel or SPSS files), with little good documentation explaining the different data transformations or analyses done. I wonder how many authors would struggle to go back and generate a single, comprehensive data set for OA?

      1. That some journals make it mandatory to provide underlying data along with the manuscript may motivate researchers to store their data more responsibly. Data sharing initiatives, apart from helping to maintain transparency in science, are likely to help researchers in keeping track of their own data.

  3. With respect to #10, this is one case where it would be very interesting to see the reviewer comments and the authors’ responses. The authors state that a reviewer did indeed raise questions about their findings:

    “In our case, a reviewer did raise the question of whether or not trainees understood the concept of “burnout” given the surprising finding, which could have been a signal to the authors and editors to take a closer review of the data before publication.”

  4. In your introduction to the piece, you write “… assume errors will happen, and not vice versa.” – it is not possible to use “vice versa” here. I guess what you mean is “… assume errors will happen rather than they will not.”, or similar.
