The journal Appetite has retracted a recent paper that purported to show that children whose parents kept a tight fist on the grub were less likely to become obese than those whose parents were more laissez-faire with the feed bag.
The article, “Relation of parenting styles, feeding styles and feeding practices to child overweight and obesity. Direct and moderated effects,” appeared last year. The senior author was Laura Hubbs-Tait, of Oklahoma State University.
According to the abstract:
The purpose of this study was to evaluate the direct and interacting relations of parenting styles, feeding styles, and feeding practices to child overweight and obesity. Participants were 144 mothers and children under 6 years of age. Mothers completed questionnaires about parenting and feeding styles and feeding practices. Researchers weighed and measured mothers and children or obtained measurements from a recent health report. Feeding practices were not directly related to child weight status. Compared to the uninvolved feeding style, authoritative and authoritarian feeding style categories were linked to lower odds of overweight. Feeding practices interacted with authoritative and authoritarian parenting styles to predict obesity: (1) healthful modeling was associated with 61% (OR = 0.39) reduced odds of obesity in children of authoritative mothers but with 55% (OR = 1.55) increased odds in children of non-authoritative mothers and (2) covert control was linked to 156% (OR = 2.56) increased odds of obesity in children of authoritarian mothers but with 51% (OR = 0.49) decreased odds in children of non-authoritarian mothers. Healthful modeling interacted with feeding style demandingness to predict overweight and with responsiveness to predict obesity. Findings suggest the need for research and interventions on mechanisms mediating between feeding practices and obesity in families characterized by non-authoritative parenting styles.
Here’s the notice:
This article has been retracted at the request of the Author and Editor in Chief due to serious errors in the data.
“Serious errors in the data” sounds pretty ominous. But the reality, according to Hubbs-Tait, is more mundane:
On October 24, I notified the action editor for “Relation of Parenting Styles, Feeding Styles and Feeding Practices to Child Overweight and Obesity: Direct and Moderated Effects” that I had found an error in the data for the manuscript and had tracked down the source of the error to a column switching mistake in copying data from one spreadsheet to another by a research assistant. The error was difficult to detect because the coefficients for internal consistency of all measures were acceptable as were the descriptive statistics for all measures.
As the lead investigator the responsibility for the error is mine. This error and our retraction are a reminder to all researchers that even if others have checked every item for every subject for implausible values, the lead investigator should re-run those analyses.
We regret the error and took immediate action to retract the paper.
Spreadsheets can be used for data communication, but they shouldn’t be used for analysis and publication.
I don’t think the spreadsheet is the issue here. It’s the tendency of novices to combine data sets with copy-and-paste jobs, where later there is no way of knowing whether it was done properly. Steps like that should be documented via syntax in a statistical package, so that the origin and processing of all data are transparent and retraceable.
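For concreteness, a minimal sketch of what documented, syntax-driven combination might look like, here in Python with pandas (the comment names no particular package, and the file and column names are hypothetical):

    import pandas as pd

    # Hypothetical CSV exports that share a subject ID column.
    demographics = pd.read_csv("demographics.csv")
    measurements = pd.read_csv("measurements.csv")

    # Rows are matched by key, not by row position, so a re-sorted or
    # shifted spreadsheet cannot silently misalign columns.
    merged = demographics.merge(
        measurements,
        on="subject_id",
        how="inner",
        validate="one_to_one",  # raises if an ID is duplicated on either side
    )
    merged.to_csv("analysis_input.csv", index=False)

Because the match is keyed rather than positional, a column-switching mistake of the kind described in the retraction would either be impossible or raise an error, and the script itself documents what was done.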
It’s not only novices. I think most scientists don’t have strong skills in Excel, SigmaPlot, or any other data, math, or statistics package. It should be noted that Excel has all the tools necessary to make processing steps transparent and retraceable.
I use Matlab, which is overkill for the basic data manipulations I do. I copy the original data files into a folder, import them into Matlab, and save them as a Matlab workspace. I then write a script that opens the data files and plots the figure or figures I need. The manipulated data is not saved. If I need a different figure, I add it to the script. Each time I look at the data, the script is run on the original data. There is very little chance of corrupting the data, as it is always sourced from the original files and no overwriting is done. But that doesn’t preclude the possibility of errors in the script.
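As a rough analogue of that workflow, a minimal sketch in Python (the commenter uses Matlab; the directory and file names here are hypothetical):

    from pathlib import Path

    import pandas as pd
    import matplotlib.pyplot as plt

    RAW = Path("data/original")  # originals copied here once, then never edited

    def make_figure():
        # Re-read from the untouched source files on every run.
        df = pd.read_csv(RAW / "trial_01.csv")
        fig, ax = plt.subplots()
        ax.plot(df["time"], df["response"])
        ax.set_xlabel("time (s)")
        ax.set_ylabel("response")
        return fig

    if __name__ == "__main__":
        make_figure()
        plt.show()

The point is the same as in the Matlab version: every figure is regenerated from the untouched originals, so derived data never silently replaces the source.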
For publication, what I do is crude. I run the script, use a third-party anti-aliasing script (myaa), then do a screen capture and paste it into Photoshop. In the ’shop I add the axes and legend and size the figure for the journal. The output isn’t very good, but it is the best I can do to get consistent, acceptable figures.
If you exported your Matlab figures as EPS and added the legends etc. in Illustrator instead of Photoshop, you would get a much better result, as everything would be vector-based and fully scalable.
And that leaves me wondering why on earth you wouldn’t add legends and annotations in Matlab directly…
I’ve used a similar workflow; you can also save as a PDF from MATLAB and then pull it into Illustrator or Inkscape as an infinitely scalable vector image.
All fine and good, but note that all of those packages involve the NON-REPRODUCIBLE STEP of adding stuff via Photoshop/Illustrator/Inkscape. The vector image part is irrelevant. Most people do not use a scriptable image editing program, such as Python with Gimp.
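A minimal sketch of the fully scripted alternative these comments point toward, here in Python with matplotlib (neither tool is named in the thread): legend, labels, and sizing are done in the script, and the output goes straight to a vector PDF, so no manual image-editing step has to be repeated by hand:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 200)
    fig, ax = plt.subplots(figsize=(3.5, 2.5))  # sized for a journal column
    ax.plot(x, np.sin(x), label="condition A")
    ax.plot(x, np.cos(x), label="condition B")
    ax.set_xlabel("time (s)")
    ax.set_ylabel("signal")
    ax.legend(frameon=False)
    fig.tight_layout()
    fig.savefig("figure1.pdf")  # vector output, no manual editing step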
You CANNOT do that with Excel. When you COPY a column, once done, it’s not reproducible. This is similar to many of the issues with Potti and the Duke situation, in which spreadsheet manipulations, which cannot be traced or documented, led to his professional demise.
Wrong. OF COURSE you can do that in Excel: you can write VB scripts to merge based on values.
Most people don’t, but that doesn’t mean you “can’t”.
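The comment says VB; a value-keyed merge can equally be scripted against the .xlsx files from outside Excel, sketched here in Python with pandas (file and column names are hypothetical; reading and writing .xlsx assumes the openpyxl package is installed):

    import pandas as pd

    # Hypothetical workbooks sharing a family_id column.
    left = pd.read_excel("mothers.xlsx")
    right = pd.read_excel("children.xlsx")

    # Match rows by a shared ID instead of copying a column across
    # workbooks, so the merge is recorded and repeatable.
    merged = left.merge(right, on="family_id", how="inner")
    merged.to_excel("merged.xlsx", index=False)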
“You CANNOT do that with Excel. When you COPY a column, once done, it’s not reproducible.”
I don’t use Excel much myself (I consider it to have come straight from Satan’s bottom).
But I’m pretty sure you never need to copy or paste data, even between different spreadsheets. Properly, you should reference the relevant cells without copying them. Then all steps and calculations remain auditable.
Wrangling about data tools aside, can we get a word of thanks to the authors for being honest and forthcoming? I wonder whether the corrected data still shows an effect and whether they will publish again.
Thank you, StrongDreams, for not losing sight of the integrity at work here, despite the illuminating tangents about Excel and scripts and whatnot. I agree, the author should be commended here. Not only did she initiate the retraction, she is taking responsibility for the error behind it. Kudos to her.
I don’t think they should be thanked. They shouldn’t have buggered it up in the first place. Simple correlation studies of complex social processes are of limited utility anyway; the least people who go down that road can do is have decent quality control. Perhaps the profession needs to set out standard software processes, ones that have the approval of statisticians and of investigators of bogus science. Then scientists who know their own fields wouldn’t have to, in effect, make their own tools. Standardize the stats tools, including how the data goes into the computer, so it can be checked by others later. Always be ready for that. Like wearing a seatbelt. Just a thought.
How exactly would “the approval of statisticians” help prevent errors like this? It’s like saying linguists should approve word processors to prevent typos.
“Word processors”? I’m glad I’m not the only one who remembers the ’80s. I don’t think the article was about a typo — or if it was, it was a typo in a spreadsheet that had a domino effect. What I meant to convey, after reading all the software-filled comments, is that scientists should be better served by software. There should be a standard kind that doesn’t destroy or lose data and that produces standard data files that sit on your computer in case there are ANY questions later, including ones that turn out to be baseless. The approval of statisticians (I could have said “helpful scrutiny”) would be to make sure the software fitted the statistical techniques scientists use. I mean, spreadsheet programs were invented for accounting, weren’t they? Yes, none of that would prevent a mistake in data entry. I was addressing the comments suggesting (a) that some software doesn’t keep a record of the processing steps, which makes mistakes harder to spot and to correct, and (b) that statistics software and data retention in scientific work are weirdly non-standard. I’ll add that scientists should not have to go fiddling around trying to find software and software procedures (to use Photoshop or not, when submitting) in the ways reflected in the comments. What data-handling software does, and how it does it, should be part of the infrastructure of science, the same way the actual statistical techniques are.
This is probably the best punny title on this blog, but the journal’s title was asking for it.