Last week, Emily Putnam-Hornstein, an associate professor at the University of Southern California, was reading what seemed like a noteworthy new report from the RAND Corporation on the child welfare system. But then she realized that some of the key estimates were off. When she sent the report to some colleagues, they agreed.
Curious, Putnam-Hornstein and some of her colleagues tuned into a RAND webinar on Thursday, May 25, to discuss the report, Improving Child Welfare Outcomes: Balancing Investments in Prevention and Treatment, which had been released two days earlier. They asked the report’s lead author, Jeanne Ringel, about the numbers, and Ringel responded by saying they were on target. (Ringel recalls acknowledging that the numbers were conservative, but saying that revised inputs would not change the overall results substantially.) The Pritzker Foundation, which had funded the study, also dismissed the concerns.
Ringel, however, contacted Putnam-Hornstein to suggest a phone call. With the Memorial Day holiday weekend about to begin, the call was scheduled for Wednesday, the 31st. In the meantime, Putnam-Hornstein and other researchers drafted a letter explaining their concerns. On the call, the critics laid out those concerns and said they would publish the letter online if the report was not retracted swiftly.
Apparently, the critics were persuasive:
Within a few hours, Ringel emailed to say that RAND believed “that the report does require modification.” And today, after RAND reviewed the concerns, the report was “withdrawn pending further review.”
The 67-page May 22 report — which earned press coverage from public radio station KPCC — found, according to a note posted in place of the report by RAND today, that
striking a better balance between programs to prevent child maltreatment and services for those who have already suffered from abuse could improve long-term outcomes for children while also significantly reducing child welfare system costs in the United States.
The note continues:
However, in the wake of the release of our findings on May 23, 2017, we received some feedback from the wider child welfare research community about several of the inputs used in our model — specifically those related to lifetime rates of child maltreatment and resulting engagement with the child welfare system. We are currently evaluating how altering our assumptions to reflect higher lifetime rates would affect the policy scenario results. Based on preliminary modeling, we don’t think the results will change materially. But RAND’s commitment to accuracy and quality compel us to take these concerns seriously, so we have withdrawn the report and will publish a revised and updated version as soon as possible.
What had happened? According to the letter by Putnam-Hornstein and colleagues:
Unfortunately, the RAND model includes basic assumptions that are, in fact, off by at least a factor of ten. These are not peripheral assumptions – they are the most important pieces of the puzzle in understanding the CWS [child welfare system] system.
In generating their lifetime estimates, from birth through age 18, RAND has severely underestimated how many children are reported to CWS, how many are substantiated as victims, and how many children ever spend time in foster care.
Ringel told Retraction Watch that RAND will alert reporters who have written about the report or who have embargoed copies of it, as well as Congressional staff who engaged with the report, and will stop all promotion of the paper until it is re-issued. Ringel said that she and her colleagues are “always open to constructive criticism that may help to advance the quality and rigor of our research.” In a statement, she said:
In constructing the type of complicated simulation model used in the study, researchers inevitably face difficult decisions and tradeoffs regarding appropriate data and methods (e.g., is it better to use a national data set that is less detailed or a state one that contains more detailed information?). One set of decisions we had to make concerned the selection of input values related to lifetime maltreatment rates and the degree of engagement with the child welfare system.
Given the cross-sectional nature of the databases we relied on for the study, we assumed that annual rates would serve as a reasonable proxy for lifetime rates. However, based on the feedback we have received from several other researchers, we now believe we should have used higher values for several inputs.
Ringel said that after the report was published, she and her colleagues reran the model with those higher values and found that the results “did not change substantially,” shifting by about 1-2%.
All of that said, we believe that the report could be further improved by using some of the input values suggested by our colleagues from the wider child welfare research community. At a minimum, doing so will increase the face validity of the report and, we hope, increase readers’ confidence in the model’s outputs.
Consequently, we are currently evaluating how altering our assumptions to reflect higher lifetime rates would affect the policy scenario results. Again, based on preliminary runs with higher lifetime rates we don’t think the results will change materially. But RAND’s commitment to accuracy and quality compel us to take these concerns seriously, so we have withdrawn the report and will publish a revised and updated version as soon as possible.
Rick Barth, dean of the University of Maryland’s school of social work and one of the authors of the original letter, told us:
It is embarrassing to have colleagues point out that your methods are seriously flawed. So I can understand the inclination of Rand to respond slowly and, then, not to be at all specific about the reason for taking the report down. Yet, I think that the Rand brand would have been better served to act more swiftly and with greater clarity about what happened and certainly not to suggest that “Based on preliminary modeling, we don’t think the results will change materially.” That suggests that Rand is continuing to minimize the magnitude of this error.
And the authors of the original letter together tell Retraction Watch:
We are happy that RAND has chosen to withdraw the report while they correct errors in their model. Much is risked by keeping estimates and models in circulation when they are known to have serious flaws. When the flaws inevitably surface, it calls into question any confidence in related implications. We hope that their original findings of cost-savings hold after the models are corrected, but will wait to see.