We couldn’t help noticing that the past few weeks have seen calls to retract two papers on food, from different sides of the political spectrum. One paper actually looked at the effects of genetically modified organisms (GMOs), while the GMO link in the other paper seems mostly to be in activists’ minds. Consider:
On the right, we have Henry I. Miller writing on Forbes.com about a study of rats fed genetically modified maize: “The honorable course of action for the journal would be to retract the paper immediately”:
It also deserves mention that the publication of this article represents an abject, egregious failure of peer-review and editorial competence at Food and Chemical Toxicology, the journal in which it appeared. The honorable course of action for the journal would be to retract the paper immediately – a point on which the editors have thus far been silent.
There are a lot of weaknesses in the study, which Miller — a fellow at the conservative Hoover Institution at Stanford — points out. And the way the scientists handled the pre-publication embargo — forcing reporters to sign a non-disclosure agreement that barred them from seeking outside sources — left more than a bit to be desired. But retraction?
Now on the left: “Retract the Flawed ‘Organic Study’ Linked to Big Tobacco and Pro-GMO Corps,” a call on Change.org to yank a recently published study, “Are Organic Foods Safer or Healthier Than Conventional Alternatives?”
As Rosie Mestel of the Los Angeles Times wrote in an article about the petition:
The article focused specifically on health aspects of organic food versus conventional food; in an interview, the first author said that she and the senior coauthor, both doctors, often get asked by their patients if eating organic food is healthier, so they decided to look at it.
The scientists weren’t studying genetically modified foods (though if GMO foods were in the conventional data, one might think that GMO-caused health factors would have revealed themselves in the results). And they weren’t studying high-fructose corn syrup — they were only reviewing fruits, vegetables, eggs, grains, dairy, poultry and meat. Not processed foods.
The article, in other words, wasn’t about the entirety of everything that people think is wrong about the way our food is grown and produced today. It wasn’t even about every type of difference between organic and conventionally grown food.
Check out the rest of her piece. And let us know whether you think either of these studies should be retracted, in our poll below. (We should note that we’re not aware of any problems with the data, be they fraud or errors, in either study.) Based on the heated discussion of PLoS’s blog post on their retraction policy earlier this week, we’re betting there are some strong opinions out there.
Hat tip: Neurodojo
I voted for no retractions if the papers aren’t in error. But then I read up more on the rats paper. From the KSJ Tracker: “Among the problems, for instance, the researchers chose a strain of rats known to spontaneously develop tumors without exposure to any of the products under study. Given a 30 percent mortality rate in their control animals, they also failed to provide necessary data about which of those animals had developed tumors.” The Science Media Centre even posted a list of methodological complaints about the study: http://www.sciencemediacentre.org/pages/press_releases/12-09-19_gm_maize_rats_tumours.htm
That seems like an unusual form of an ‘expression of concern’ to me.
Using a strain of rat known to spontaneously develop tumours is standard practice in such studies. If you just took normal rats and exposed them to tobacco smoke for 2 years you probably wouldn’t notice any health effects either, but that wouldn’t mean tobacco doesn’t cause cancer.
I thought the Seralini study was quite interesting and their discussion raised some interesting points. Although my gut feeling is that eventually it will be determined that there are no significant risks to human health, as preliminary data it is intriguing.
Congratulations are in order to the editors for ignoring the predictable tantrums, and to the authors for undertaking such an investment of time and resources in such an unfashionable field.
Thanks, the explanation in your first paragraph is very helpful!
From the articles I read, it’s not standard practice for long-term studies, except when testing an anti-cancer drug. Using this strain just makes any treatment effect more difficult to see, particularly when you have only one 10-rat control group per sex.
littlegrayrabbit: Pay attention to an important point: Séralini’s paper is scientifically so weak it proves nothing, it suggests nothing. It is no use taking this paper as a starting point. You would be starting from mere conjecture. The GMO issue deserves better studies, not the BS Séralini presented us. It served other goals, but science suffered unduly.
“Séralini’s paper is scientifically so weak it proves nothing, it suggests nothing”
Lots of papers prove nothing, and it suggested quite an interesting and intelligent mechanism for why this particular GM crop might promote cancer when consumed. The trouble is that very few people ever actually read the paper, and fewer of those people have the knowledge to understand what was being said.
What they showed was a substantial increase in tumor incidence, which due to the small size of the study didn’t reach statistical significance. However, that leaves open the possibility that a sufficiently powered study would reach statistical significance. Now call me old-fashioned, but I would have thought the onus is on the manufacturer to show their product is non-carcinogenic, not on the consumer.
On another site I got into an argument with an academic who gets a lot of grants from the GM industry, and we were running through all the usual arguments. Boy, was he proud when he found a paper in Japanese that claimed no statistical increase in tumor incidence with Roundup Ready soybeans. As usual, he hadn’t read the paper, as it was in Japanese, but the tables were in English. And yet again, they showed a substantial increase in tumor incidence, but again just falling beneath significance in an under-powered study. So two studies suggesting a potential problem, none suggesting it is safe.
To me, promoting a new food for which there is only a 15% probability of no increased cancer risk is not acceptable. The manufacturers need to do sufficiently powered studies to determine whether uncoupling the aromatic amino acid biosynthesis pathway from transcriptional and mRNA stability control (and why you would want to do that is a mystery to me) poses a risk to consumers.
“Science”….pffff.
littlegreyrabbit: there isn’t any substantial increase in tumour incidence. The normal groups in the Seralini study are the treated groups, which exhibit the same tumour and mortality rates as control studies on this rat strain.
The only group showing unusual results in this study is the control group, which is way under the known tumour rate for this strain.
You speak of “an interesting and intelligent mechanism”, but the only thing I see there is a far-fetched hypothesis used to cover incredibly sloppy work.
“And yet again, they showed a substantial increase in tumor incidence but again just falling beneath significance in an under-powered study. So two studies suggesting a potential problem, none suggesting it is safe.”
If it’s substantial, it should be significant. If it’s not significant, then it means the differences are so low that they are of the same order as the intra-group random variations.
Well, Gabriel, you are to be congratulated. Of all the industry spruikers I have come across going through their talking points, you have managed to come up with something completely novel (to me), albeit insane:
“The only group showing unusual results in this study is the control group, which is way under the known tumour rate for this strain.”
There is no such thing as an ISO tumour rate. Tumour rates are going to vary depending on conditions, regimen and minor variations in genotype. Tumour rates will vary from lab to lab and from experiment to experiment; all you can do is try to randomise your selections into groups and limit the number of factors you vary in the treatment regime. Perhaps making such daft claims impresses the rubes or the journalists, but keep them away from here.
I am not sure on what basis you think that suggesting deregulating the shikimate pathway might alter the balance of secondary metabolites is a far-fetched hypothesis. It seems quite possible to me, although whether or not this is enough to increase the risk of carcinogenesis is another matter. I have also seen industry claims that enzymatic feedback control ought to be sufficient, but we know of at least one instance where transcriptional upregulation of this enzyme alters carbon flow through this pathway – in the petunia petal. “Incredibly sloppy” or variants thereof is another talking point, but the data is the data is the data. The experimental design does not allow them to prove anything definitively (I think they were trying to capture a completely different effect), but that is not the same as being sloppy.
“If it’s substantial, it should be significant. If it’s not significant, then it means the differences are so low that they are of the same order than the intra-group random variations.”
Er, no and no. Whether or not you achieve significance depends on the size of the population and the incidence rate.
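The point about significance depending on group size can be made concrete with a quick sketch. The incidence numbers below are made up purely for illustration (they are not from either paper), and the one-sided Fisher exact test is implemented by hand from the hypergeometric distribution:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    probability of seeing at least `a` tumours in the treated row,
    given the table's fixed margins."""
    n1, n2, m = a + b, c + d, a + c
    total = comb(n1 + n2, m)
    return sum(comb(n1, k) * comb(n2, m - k)
               for k in range(a, min(n1, m) + 1)) / total

# Hypothetical incidences: 50% in treated rats vs 20% in controls.
# With 10 rats per group (the size Seralini used):
p_small = fisher_one_sided(5, 5, 2, 8)   # ~0.17, not significant

# The exact same proportions with 100 rats per group:
p_large = fisher_one_sided(50, 50, 20, 80)  # far below 0.05

print(p_small, p_large)
```

A 2.5-fold difference in incidence fails to reach significance at n = 10 yet is overwhelmingly significant at n = 100, which is exactly why "substantial but not significant" is possible in a small study.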
The answer seems simple, run a study of sufficient magnitude that demonstrates Seralini is wrong, publish that and then the scientific record is complete. I find calls for retraction bizarre and against the spirit of science.
littlegreyrabbit, I thought that the comments policy frowned on ad homs? “spruikers” and “insane” being the objectionable words. I have to add that these examples stood out because the comments threads here are usually very well-mannered. I’m a newbie here and am following this discussion with great interest.
Oh come on: at least do some research on the subject before answering like you’re some big shot.
The rate at which this strain of rats develops tumours is quite well known. The French HCB Scientific Committee even obtained the reference data for the precise stock of rats from Harlan Laboratories, which provided the rats used in this study: http://www.hautconseildesbiotechnologies.fr/IMG/pdf/HCB_scientific_opinion_Seralini_121019.pdf
And while every other group falls within the prediction intervals calculated for rats of this stock, the female control group is outside every observed result. I dare you to find any study using this strain with a similar control group.
Now, you can write pages about shikimate pathway deregulation; it’ll still be a far-fetched hypothesis. Yes, you can observe some variability in nutrients and the like, but you can find much higher variability between different standard maize varieties. And anyway, suggesting that such a minor difference can account for such variability in tumour number is preposterous. If Seralini really thinks a difference in phenolic acid content can explain the fourfold decrease in tumours he claimed to see, well, he’s in the running for a Nature paper. Or an Ig Nobel.
Sorry, but this protocol is still sloppy: there was no chance in hell of getting any significant effect while following it. Breaking the 200 rats down into 20 incredibly small groups would not even allow him to see a 30 to 50% increase in tumour number or mortality. The way the analyses are done is also sloppy. It’s pretty obvious that there’s no scientific reasoning there.
“Er, no and no. Whether or not you achieve significance depends on the size of the population and the incidence rate.”
Sorry, but it’s still the same: you can’t call it “substantial” when it’s not even significant.
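The claim that 10-rat groups cannot resolve even a large increase can be checked with a small Monte Carlo power estimate. Everything here is illustrative: the incidence rates (50% vs. 80%) are invented, not taken from the study, and the test is a hand-rolled one-sided Fisher exact test:

```python
import random
from math import comb

def fisher_one_sided(a, b, c, d):
    # One-sided Fisher exact test on a 2x2 tumour table [[a, b], [c, d]].
    n1, n2, m = a + b, c + d, a + c
    return sum(comb(n1, k) * comb(n2, m - k)
               for k in range(a, min(n1, m) + 1)) / comb(n1 + n2, m)

def power(n, p_control, p_treated, trials=2000, alpha=0.05):
    """Monte Carlo estimate of the chance that groups of `n` rats
    yield a significant Fisher test, given the true incidences."""
    rng = random.Random(42)  # fixed seed so the estimate is reproducible
    hits = 0
    for _ in range(trials):
        t = sum(rng.random() < p_treated for _ in range(n))
        c = sum(rng.random() < p_control for _ in range(n))
        if fisher_one_sided(t, n - t, c, n - c) < alpha:
            hits += 1
    return hits / trials

# Hypothetical incidences: 50% in controls vs 80% in treated rats.
print(power(10, 0.5, 0.8))  # 10-rat groups: the jump is usually missed
print(power(50, 0.5, 0.8))  # 50-rat groups: almost always detected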
“The answer seems simple, run a study of sufficient magnitude that demonstrates Seralini is wrong, publish that and then the scientific record is complete. I find calls for retraction bizarre and against the spirit of science.”
Why would I waste more than 3 million euros on that, when 1 million euros would be enough to test one REAL potentially carcinogenic factor?
Concerning the call for retraction, it’s pretty simple: scientific journals are not the Daily Mail. They’re not supposed to be some kind of rumour depository. Seralini submitted his article to gain credibility with the media and politicians, not to submit new knowledge to the scientific community.
Um, they used the exact same type of rats that Monsanto used for its own studies, so if Monsanto views the present case as irrelevant for that reason, then Monsanto is logically obligated to view its own previous in-house studies as irrelevant as well.
Specious BS is just industry SOP.
Yeah, they used the same strain of rats. But for a long-term study, not a 3-month study. That’s completely different.
Moreover:
1) Seralini only quotes Hammond et al. 2004 (200 rats), and conveniently ignores Hammond et al. 2006 (400 rats).
2) Even in Hammond et al. 2004, there are 160 rats in the control groups, not 20 as in Seralini et al. 2012.
I have no confidence in Monsanto’s studies, but at least they look like real studies.
The Seralini group has other discredited anti GMO studies out there. Bad science is their SOP.
The Annals of Internal Medicine systematic review has some serious issues; it should never have been published as is. The issue is not necessarily how they did the study, but that they treated the nutritional content of food as synonymous with ‘healthy’. They found no good evidence that the nutritional content differed and then went on to say that organic food is therefore ‘no healthier’ – despite their own conclusion that “Consumption of organic foods may reduce exposure to pesticide residues and antibiotic-resistant bacteria.”
Pesticides increase the risk of type 2 diabetes, and antibiotic-resistant bacteria aren’t exactly the kind of thing you want in your food either. Therefore organic food is in fact ‘healthier’ even if the nutritional content doesn’t differ! I seriously can’t believe that wasn’t picked up in peer review or by the editors.
What makes the issue so much worse is that this study also presented a very newsworthy story and so got very widespread coverage in the international media. In part I blame the media for not performing enough scrutiny, but much of the fault lies directly at the feet of the authors.
In the Forbes article I found this comment odd.
“– Séralini et al. argue that the exceedingly long time-frame of their study was necessary to reveal the experimental effects, but animal researchers long ago established that such lengthy studies add no additional meaningful or valid information beyond that which can be collected in shorter times;”
The only reason you would try to keep a study short was to avoid something “expressing” over a longer time interval. The same fallacious reasoning is why the US government’s testing of cell lines missed SV40 contamination: they stopped looking after about 14 days, but when allowed a longer time frame the cell line could still show contamination, which the short study would have missed.
Yes, but we’re talking about rats here. Your sample size decreases with mortality, artefacts are amplified by aging, etc.
Maybe you can have an increasing effect with age, but you also get increasing errors.
The only thing “wrong” with the organic food study was the semantic issue of using the word “healthy” instead of “nutritious” in the title. I hardly think a whole paper should be excoriated on the basis of a poorly selected title. In the paper, the authors clearly pointed out the extensive limitations of their analysis and the limited focus of the study: nutritional content (of a handful of nutrients only), pesticide exposure and antibiotic-resistant bacteria exposure. That’s it. They didn’t pretend it was anything more. Nothing was amiss aside from the unfortunate use of the word “healthy.” The people flipping out about the study are making it out to be about a bunch of things it’s not.
The GMO study, on the other hand, was a big hot mess. Check out Emily Willingham’s impressive analysis of flaws in design, methodology and analysis: http://www.emilywillinghamphd.com/2012/09/was-it-gmos-or-bpa-that-did-in-those.html and http://www.emilywillinghamphd.com/2012/09/did-rats-in-gm-corn-study-drink-their.html
If the study’s so limited in focus and has such a skewed title, then why the loud and misleading media coverage of it? That’s what people are “flipping out” over. How did a study that found (based on what I’ve read about it so far) less pesticide residue, fewer harmful bacteria, and slightly higher content of at least one vitamin (C) in organic versus conventional become the best evidence that organic is no healthier (should I put quotes on the word “healthier”?)? Now if you really wanna give thanks that your baby was born with both arms, find some studies on the health effects of conventional produce on the farmers who produce it. I’ve seen it: three or four children with deformities and developmental disabilities in a village of 95. The woman in the next village was legendary. She was born with no arms and no legs. You tell me whether their corn fields are organic.
You should be thankful for GM crops; they use safer pesticides, and less of them, than conventional crops.
I’m a newbie on this site and I’m not a biologist either. My reference source on the organic debate is Richard A. Muller’s anecdote about Bruce Ames, who developed the first test for mutagens:
What’s the “official” feedback on Ames’ point of view?
A publication that provides full information on how the study was done methodologically and doesn’t contain fraudulent or erroneous data does not merit retraction. It is the responsibility of each and every scientist to constantly re-evaluate scientific publications, taking note of the arguments for the results and their interpretations. If we are supposed to rely on the process of peer review and the actions of editorial boards to publish only “total truths”, we will end up in an “authority-cracy”. The strength of science is the constant discourse about what is a good publication and what is not. As I work in a field suffering from “wooism”, where a “scientific publication” (i.e. any type of publication with PubMed tracking, “academic” affiliation or the like) is considered proof regardless of methodological flaws or lack of scientific plausibility, I think we don’t need more retractions. We need more discussion of publications through commentaries, letters to the editor and so forth. Retraction should be a last resort for papers that are fraudulent or misrepresent the data.
As it is said in biology, nothing makes sense unless evolution is taken into account; the same should apply to the scientific production of papers: nothing makes sense without regard to its publication record.
That’s precisely the issue with Seralini et al. 2012: none of the information that would allow us to see whether their findings are significant is provided.
We don’t know if the single control group can be used for the 0%, 11% and 33% maize diets; we don’t know the life expectancy of the treatment groups (while it’s mentioned for the control group); we don’t know how much each group ate; we don’t have a single statistical test or standard deviation for tumour number or mortality…
In fact, everything is missing.
I was disappointed that this poll didn’t allow me to vote for “All peer-review journals should implement online polls to determine which articles to retract”.
Hear, hear, Paul!
Thanks for the hat tip. For those who skipped the link, I will sum up: a retraction of the paper by Séralini and colleagues would be viewed as a “cover-up” by those opposing GM foods. The Tous Cobayes? movie preview makes it clear that a large part of that narrative is claiming that information is being suppressed.
“The Tous Cobayes? movie preview makes it clear that a large part of that narrative is claiming that information is being suppressed.”
I don’t know about “suppressed”, but there is no doubt the general public is worried.
Perhaps if journalists did their job and asked hard questions instead of basically republishing press releases, things would be better. 🙂
Four quick examples.
1/ You can read about the wonders of Golden Rice, which sounds fantastic; now try to find one published paper by the Golden Rice foundation on this wonder food…
2/ In NSW, Australia, we have a board that judges whether GM crops are “safe” in our state.
The majority of the board have a financial stake in GM products. In an interview, one of the board members stated there were hundreds of papers proving GM crops were safe; after I emailed him for months, he finally found one paper.
Many of the board’s decisions and reasoning are confidential…
3/ We have had a WA farmer, who was getting premium prices for his “organic” crops, have his business ruined due to contamination from the neighbouring farm growing GM crops.
4/ There should be hundreds of published papers showing the safety of GM crops and foods. Where are they?
“1/ You can read about the wonders of Golden Rice, which sounds fantastic; now try to find one published paper by the Golden Rice foundation on this wonder food…”
Why would you want papers published by this foundation? The supporting research was done at various research centres:
http://www.goldenrice.org/Content4-Info/info3_publ.php
These two publications have little in common except that they are controversial. The Seralini publication, unlike the Stanford study, appears to be an attempt to manufacture scientific support for the authors’ ideological position. Positions aside, this kind of behavior is a misuse of science. The scientific community should respond firmly to discourage this sort of misuse because it undermines the integrity and useful authority of science. Scientists, like everyone else, should follow their conscience and support causes they believe in, but they must not subvert the proper functioning of science in support of those causes.
The moral of this story is that:
Science today has become like a restaurant, where the one who pays the money orders the music.
An article in favor of GM foods — please, here it is.
An article against GM foods — please, here it is.
An article for WHATEVER-YOU-WANT — please, here it is.
However, unlike a restaurant, where the song is played only ONCE, academic publishing is forever, since editors/publishers/institutions and even COPE are very, very, VERY RELUCTANT to Do-the-Right-Thing and retract fraudulent papers.
An example of the great reluctance of editors/publishers/institutions and even COPE to Do-the-Right-Thing can be seen in my comments here http://www.retractionwatch.com/2012/09/26/reused-figures-lead-to-two-chemistry-retractions-one-correction/#comments
YKBOA, can you explain why this particular situation you keep on pushing and pushing and pushing and pushing (ad infinitum) is so important to you? We’re talking about rather generic figures that are being reused, not figures containing data and presented as new data.
Marco, are you seriously asking, or you didn’t have your coffee yet?
Serious question.
I have to agree with Marco after looking at the paper “Final Report to WHO…” The two figures in question show flow diagrams that represent the theoretical framework behind the idea of making interventions to reduce ill health related to employment problems (not just unemployment). These are rather generic figures that don’t represent any data collection but an intellectual framework. Technically, if reproduced identically (or close to it), they should point out that the same figure was used in previous papers, but it’s not like they’re trying to commit any kind of fraud or misrepresentation.
This is a potentially highly politicizable area and it relates to such issues as the proposal to raise the retirement age (which turns out to be a bad idea because it saves little money and discriminates against lower-class workers who tend to die early.)
So, I repeat, Mr YouKnowBestOfAll, can you reflect deeply and tell all of us your motivation behind your “crusade” to impugn the authors of these over-used figures?
(Given that there’s so much REAL fraud flying around.)
Still a serious question; no response yet. Perhaps not an answerable question?
Man, you two are MEAN! The guy is probably passionate about a case in which he got personally involved. This makes that fraud quite REAL to him, and seriously, there should be no “small mischiefs” in scientific work. Now he is probably feeling awkward somewhere and is too embarrassed to raise his flag again here. OK, maybe he needed a little shake after all. YKBOA, my advice to you: never give up, but stay on the healthy side; obsession is the source of all maladies. Take a break, try to retract that paper after a while: never mind, it will be there for you always.
Yes, we’re mean. But there comes a time to cut your losses (“He who fights and runs away lives to fight another day.”)
Having rats develop tumors as large as those shown in the photos in the publication, without a valid scientific reason, is a serious breach of research ethics. This alone would justify retraction in my eyes, and the journal should never have considered such a study for publication to start with. It is this kind of study that feeds anti-animal-experimentation sentiment, making life difficult even for people doing solid, justifiable, and ethically sound research.
Animal ethics are, of course, important – although this is not the only long-term study with this strain of rat – but our first priority should be tumours in humans.
But then I am not a scientist, so what the hell would I know?
Oh, I absolutely agree with you that, if necessary for a scientific result, it can certainly be ethical to let rats develop such large tumors. The problem is, there is no reason whatsoever why death should be an endpoint in THIS particular study, or even “development of huge tumors”. “Development of one or more visible or palpable tumors” would have been a perfectly good endpoint for the goals of this study. Going further than that, as these people did, is unethical in my opinion as it leads to unnecessary animal suffering. Of course, one could also debate the ethics of using animals for a study like this and not using adequate statistics to analyze the results. I’m baffled by the lack of comparisons between groups in these studies. It’s impossible to see whether the increased mortality/tumor development rates are statistically significant (and the way the results are presented, I cannot even perform these statistical tests myself).
You would be surprised to learn that ALL studies provided by the firms to the administration (EFSA in the EU) lack any statistical test.
Actually, many, if not all, of the criticisms made against this paper by Seralini could be made against all the studies provided by firms for import/cultivation authorisation.
This does not mean that these criticisms are wrong; it means that one is led to wonder why Seralini is attacked in such a way, while the other studies are not.
Take Forbes’ (a strange source for scientific controversies) comment on Seralini: “Séralini has made a specialty of methodologically flawed, irrelevant, uninterpretable — but over-interpreted — experiments intended to demonstrate harm from genetically engineered plants and the herbicide glyphosate in various highly contrived scenarios.”
You can replace “Seralini” with “studies on GMOs” and “harm” with “non-harm”, and the sentence would still be accurate. This is what, for instance, this meta-study, written by people who can hardly be accused of being anti-GMO, says (in the same journal): http://www.sciencedirect.com/science/article/pii/S0278691511006399 . And despite having pinpointed all the methodological flaws of the studies reviewed, they nonetheless conclude that no long-term studies are needed…
A little conclusion: it is not only the “general public” that does not understand this controversy; it is also scientists, who think they can enter this debate without having read the literature on the subject… Scientists are also part of the “general public”, and their opinions and prejudices are no less important.
“You would be surprised to learn that ALL studies provided by the firms to the administration (EFSA in the EU) lack any statistical test.”
Well, that’s false as far as EFSA is concerned: the firms provide both the raw data and their interpretation of the data, and EFSA may do its own analysis, and even ask for supplementary data if necessary.