RAND re-releases withdrawn report modeling child maltreatment

A think tank has reissued a report on child welfare in the U.S., six months after it pulled the document amid criticism from dozens of researchers.

The report offered policy recommendations for improving the child welfare system, based on numerical modeling conducted by researchers at the RAND Corporation.

RAND pulled the initial version of the report in June, after researchers — including Richard Barth, dean of the University of Maryland’s School of Social Work, and Emily Putnam-Hornstein of the University of Southern California — criticized the model for underestimating the rate of maltreatment over a child’s lifetime.

The report, reissued Dec. 11, contains updates detailing how the RAND researchers addressed the criticism. However, Jeanne Ringel, a senior economist at RAND and the study’s lead author, told us:

The core findings remain relatively similar. Some of the underlying numbers have changed here and there but the basic story is consistent.

Ringel told us that after her team received the initial salvo of criticism, they ran some tests looking at how increased lifetime rates of maltreatment would affect the model’s output. She said:

We didn’t think it would have a large impact, but we weren’t sure so we wanted to take the time and do a review of the model. We pulled [the report] down while we undertook that process.

After the team decided to revise the report, Ringel said, it conducted a literature review and performed other data analyses, which took a few months. Besides addressing the main issue, she said the team found other adjustments to make:

they were more fine-tuning than major revisions. We made some improvements to the process to calibrate the model to get the baseline to better match the observed data.

A spokesperson for RAND noted the last time the organization had pulled a report was in 2011:

…the child welfare analysis is not the first report to be retracted, but such action is rare. 

Errors, fixed and compounded

One critic agreed that the authors had addressed the concerns about the lifetime maltreatment estimate raised in May, but felt the paper still suffered from problems stemming from its overall approach. Barth had criticized the report — titled “Improving Child Welfare Outcomes: Balancing Investments in Prevention and Treatment” — when it first appeared and co-wrote a letter with Putnam-Hornstein, among others, that spelled out their concerns regarding the lifetime maltreatment estimate. He told us he hadn’t reviewed the new report thoroughly, but nonetheless observed:

the report endeavors to integrate information across a wide array of child welfare research sub-domains and studies without really checking back with the authors of those studies. This is a scientific path that allows errors to creep in and, ultimately, become compounded…

Although I see that the major problems related to the grossly underestimated lifetime prevalence estimates are better addressed, the idea of sending this report in without getting more feedback from expert child welfare reviewers is one that I find hard to comprehend.

The new report makes frequent note of the changes made. In the preface, it states:

The results in the report have been updated to incorporate feedback we received on the prior version regarding the choice of model inputs used to produce the baseline results. We had previously used annual rates as proxies for the lifetime rates (between birth and age 18) of events along the child welfare system pathway (e.g., referral to the child welfare system, investigation of maltreatment report). We now use a combination of literature and data analysis to generate model inputs that more closely reflect lifetime rates. In addition, we used this opportunity to make several additional changes, including improvements to the process used to calibrate the model, adjustments to the cost calibration points, use of a different discount rate for calculating costs, and correction of minor programming errors identified during the code review.

Many of those changes are spelled out in an appendix to the report.
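
To make the annual-versus-lifetime distinction concrete, here is a minimal, back-of-the-envelope sketch in Python. It is not RAND’s model; it simply assumes, for illustration, a constant and independent annual probability of a child-welfare event from birth to age 18, and a constant annual discount rate applied to a hypothetical cost stream. All numbers are made up.

    # Illustrative sketch only -- not the RAND model. Assumes a constant,
    # independent annual probability of a child-welfare event from birth to
    # age 18, plus a constant annual discount rate. All numbers are hypothetical.

    def lifetime_rate(annual_rate, years=18):
        """Probability of at least one event over `years`, given a constant
        annual probability and independence across years."""
        return 1.0 - (1.0 - annual_rate) ** years

    def discounted_cost(annual_cost, years, discount_rate):
        """Present value of a constant annual cost stream over `years`."""
        return sum(annual_cost / (1.0 + discount_rate) ** t for t in range(years))

    annual = 0.05  # hypothetical 5% annual referral rate
    print(f"annual rate:           {annual:.1%}")
    print(f"implied lifetime rate: {lifetime_rate(annual):.1%}")  # roughly 60%
    print(f"PV of $1,000/year over 18 years at 3%: ${discounted_cost(1000, 18, 0.03):,.0f}")

Under these toy assumptions, a 5 percent annual rate implies a lifetime rate of roughly 60 percent, which shows in stylized form why annual figures used as proxies can grossly understate lifetime prevalence — the core of the critics’ objection.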

Disagreement about impact

Ringel told us she hopes the report will have an effect at the national level on how resources are allocated to address child maltreatment.

Traditionally the welfare system focused resources on [the] treatment side. Some have argued that additional prevention would be important. But what’s new and different about our report is that our model shows you can do both together and have lower costs overall, reducing the number of kids in [the] system and reducing costs [associated] with their time in the system.

The model, she said, is a tool for testing different kinds of policies. Ringel added that the researchers consulted outside experts before publishing the final version of the report:

We did, as part of our review process, have additional people review the report and received input from them and made change[s] based on their comments and suggestions.

Barth told us he thought the report would have benefited from more outside input:

the authors continue to fail to reach out to a pool of child welfare scholars who understand the strengths and limitations of the studies that are cited by the Rand team …

I would not suggest that this report be relied on for policy analyses.


One thought on “RAND re-releases withdrawn report modeling child maltreatment”

  1. This Retraction Watch report on the reissued RAND child maltreatment modeling study is exemplary for clearly articulating both the original and the remaining problems and limitations of the study. Retraction Watch makes a valuable contribution by alerting would-be users of studies to these matters.

    Modeling, like meta-analysis, depends on the research of others and cannot make up for deficiencies in the underlying studies’ design, data collection procedures, analyses, and interpretations. Sensitivity analyses based on the results of multiple reports of the same parameters can help, especially if the underlying studies are weighted for sample size and other measures of reliability and validity.

    In any case, ongoing consultation with both the original authors of reports and their critics is essential to improve modeling estimates for policy and planning purposes.
