
Retraction Watch

Tracking retractions as a window into the scientific process

Controversial Seralini GMO-rats paper to be retracted


A heavily criticized study of the effects of genetically modified maize and the Roundup herbicide on rats is being retracted — one way or another.

The paper — by Gilles Seralini and colleagues — was published in Food and Chemical Toxicology last year. There have been calls for retraction since then, along with other criticism and a lengthy exchange of letters in the journal. Meanwhile, the paper has been cited 28 times, according to Thomson Scientific’s Web of Knowledge, and the French National Assembly (their lower house of Parliament) held a long hearing on the paper last year, with Seralini and other scientists testifying.

Now, as reported in the French media, the editor of the journal, A. Wallace Hayes, has sent Seralini a letter saying that the paper will be retracted if Seralini does not agree to withdraw it.

Here’s most of the November 19 letter, including Hayes’ proposed retraction notice:

The panel had many concerns about the quality of the data, and ultimately recommended that the article should be withdrawn. I have been trying to get in touch with you to discuss the specific reasons behind this recommendation. If you do not agree to withdraw the article, it will be retracted, and the following statement will be published in its place:

The journal Food and Chemical Toxicology retracts the article “Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize,”1 which was published in this journal in November 2012. This retraction comes after a thorough and time-consuming analysis of the published article and the data it reports, along with an investigation into the peer-review behind the article. The Editor-in-Chief deferred making any public statements regarding this article until this investigation was complete, and the authors were notified of the findings.

Very shortly after the publication of this article, the journal received Letters to the Editor expressing concerns about the validity of the findings it described, the proper use of animals, and even allegations of fraud. Many of these letters called upon the editors of the journal to retract the paper. According to the journal’s standard practice, these letters, as well as the letters in support of the findings, were published along with a response from the authors. Due to the nature of the concerns raised about this paper, the Editor-in-Chief examined all aspects of the peer review process and requested permission from the corresponding author to review the raw data. The request to view raw data is not often made; however, it is in accordance with the journal’s policy that authors of submitted manuscripts must be willing to provide the original data if so requested. The corresponding author agreed and supplied all material that was requested by the Editor-in-Chief. The Editor-in-Chief wishes to acknowledge the co-operation of the corresponding author in this matter, and commends him for his commitment to the scientific process.

Unequivocally, the Editor-in-Chief found no evidence of fraud or intentional misrepresentation of the data. However, there is legitimate cause for concern regarding both the number of animals in each study group and the particular strain selected. The low number of animals had been identified as a cause for concern during the initial review process, but the peer-review decision ultimately weighed that the work still had merit despite this limitation. A more in-depth look at the raw data revealed that no definitive conclusions can be reached with this small sample size regarding the role of either NK603 or glyphosate in regards to overall mortality or tumor incidence. Given the known high incidence of tumors in the Sprague-Dawley rat, normal variability cannot be excluded as the cause of the higher mortality and incidence observed in the treated groups.

Ultimately, the results presented (while not incorrect) are inconclusive, and therefore do not reach the threshold of publication for Food and Chemical Toxicology. The peer-review process is not perfect, but it does work. The journal is committed to a fair, thorough, and timely peer-review process; sometimes expediency might be sacrificed in order to be as thorough as possible. The time-consuming nature is, at times, required in fairness to both the authors and readers. Likewise, the Letters to the Editor, both pro and con, serve as a post-publication peer review. The back and forth between the readers and the author has a useful and valuable place in our scientific dialog.

The Editor-in-Chief again commends the corresponding author for his willingness and openness in participating in this dialog. The retraction is only on the inconclusiveness of this one paper. The journal’s editorial policy will continue to review all manuscripts no matter how controversial they may be. The editorial board will continue to use this case as a reminder to be as diligent as possible in the peer-review process.

Seralini — who, as we noted, tried to get reporters to sign a non-disclosure agreement when the study was first released, a move Ivan called an outrageous abuse of the embargo system designed to turn reporters into stenographers — rejected Hayes’ findings, according to Le Figaro. And GMWatch called Hayes’ decision “illicit, unscientific, and unethical.”

There’s a lot to chew on here:

  1. Our read is that Hayes is basically saying that while the paper doesn’t meet the usual criteria for retraction, it should never have been published in the first place. This will likely be quite controversial, and it will be interesting to see how the scientific community reacts. Based on comments here at Retraction Watch, many scientists say that retraction should be reserved for fraud and serious error. Does that hold for a paper that many criticized as deeply flawed — and which challenged GMOs, whose use is supported by many scientists?
  2. Hayes is also saying that expedience is never an excuse for rushing a paper through peer review, no matter how controversial the subject. (Contrast Hayes’ comments with those by the editor of another Elsevier journal, Cell, earlier this year: “It is a misrepresentation to equate slow peer review with thoroughness or rigor or to use timely peer review as a justification for sloppiness in manuscript preparation.”)
  3. Post-publication peer-review — something we’ve championed for a while — is really important.

Update, 11:15 a.m. Eastern, 11/29/13: Here is Seralini et al’s response to Hayes’ letter. It begins:

We, authors of the paper published in FCT more than one year ago on the effects of Roundup and a Roundup-tolerant GMO (Séralini et al., 2012), and having answered to critics in the same journal (Séralini et al., 2013), do not accept as scientifically sound the debate on the fact that these papers are inconclusive because of the rat strain or the number of rats used. We maintain our conclusions. We already published some answers to the same critics in your Journal, which have not been answered (Séralini et al., 2013).

CRIIGEN also posted this press release:

The international journal Food and Chemical Toxicology (FCT) has requested the retractation of our study published more than one year ago (ref) on the long term toxicity of the herbicide Roundup, and of a GM maize tolerant to it. After the analysis of all our raw data, the chief editor signs that there is no fraud nor incorrect data, nor intentional misinterpretation. However, he writes that the data are inconclusive, because of the rat strain and the number of animals used. These critics are unacceptable for us, they have already been answered in a debate published one year ago by the same journal (Séralini & al., 2013, Answers to critics: why there is a long term toxicity due to NK603 Roundup-tolerant genetically modified maize and to a Roundup herbicide. Food and Chem. Tox. 53:461-468). They were promoted by the Monsanto Company in the press, when simultaneously one its directors penetrated the FCT editorial office to be in charge of biotech papers, after our publication. The retractation would not be authorized by the international ethical norms accepted by FCT (called COPE), because there is no error nor fraud. By contrast, a short Monsanto study was published in the same journal to prove the safety of their product contains errors or frauds, and is not the subject of a controversy. It was done with the same strain and number of rats, but its comparators are false because the feed for the control rats is contaminated by GMOs, at doses comparable to the treated rats. This is linked to the very high number of animals requested for the carcinogenesis studies. These double subjective criteria are not admissible and endanger science and public health.

We request to FCT the retractation of the Monsanto study on the same GMO, which has been used for its authorization. If FCT persists in its decision to retract our own study, CRIIGEN would attack with lawyers, including in the USA, to require financial compensation for the huge damages to our group. We question the european authorities to re-examine the studies used to authorize GMOs and pesticides, because the GMO and other contaminants presence in control feed as well as in the reference or historical data invalidate these studies.

Hat tips: Martin Pigeon, Keith Kloor


Written by Ivan Oransky

November 28, 2013 at 9:13 am

192 Responses


  1. The newspaper Le Figaro also says that Richard Goodman, a biologist who worked for several years at Monsanto, has joined the editorial board of the journal, and that Séralini accuses Monsanto of being behind the retraction decision.

    HonestScientist

    November 28, 2013 at 10:02 am

    • What about the fact that a close friend of Seralini’s is on the Editorial Board of FCT? That doesn’t bother you at all?

      Maggie12

      November 29, 2013 at 10:34 pm

      • Sure, but that isn’t even slightly unusual. There are very few experts in any field, and most of the people who work in a discipline know each other quite well and are often friends. That doesn’t mean they can’t be objective. Scientists are a cut-throat bunch, and in my experience as a geneticist your friends are often your harshest critics.

        Brad

        November 30, 2013 at 2:48 pm

        • give some thought to where seralini’s money comes from and what would happen to that funding if he came out and admitted his data for what it actually is – random statistical noise and gross misinterpretation. The complete lack of even the most superficial acknowledgement of a conflict of interest is very telling

          AB

          December 1, 2013 at 9:44 pm

          • You can assert that noise clouds the second half of the study, but the fact remains that Seralini reported 3 male rats out of 60 in his experimental groups and zero in his control groups by age 16 weeks.

            This is in contrast to the historical record which shows that only 3 male Sprague-Dawley rats out of 1284 got ANY kind of cancer before 26 weeks.

            http://tpx.sagepub.com/content/32/4/371.full.pdf

            Early Occurrence of Spontaneous Tumors in CD-1 Mice and Sprague-Dawley Rats

            >This report is based on 20 rat and 20 mouse carcinogenicity studies at Huntingdon Life Sciences, UK, during the period 1990–2002. Information was gathered from control groups (a total of 1,453 male and 1,453 female mice and 1,284 male and 1,264 female rats).

            saijanai

            December 5, 2013 at 11:02 am

            • The nature of statistical noise is that the 3 rats getting cancer could even have come from a group of only 3 rats: 100% consistency! Yet this just shows the problem with low sample numbers. You can roll 3 sixes straight out of the gate, but given the law of averages that hot streak is likely to die down, and after a couple hundred rolls only about 16% of them will be sixes. Your own citation can be read as showing that the 3 out of 60 was a similar statistical anomaly, evidenced by the fact that when the sample size was roughly 20x bigger there were still only 3 occurrences of cancer.

              dawshoss

              August 29, 2014 at 4:07 pm
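For what it’s worth, the tail-probability question this exchange keeps circling can be checked numerically. Below is a rough, standard-library-only sketch that treats the historical figure quoted above (3 early tumors in 1,284 male rats) as the true background rate and asks how often 3 or more cases would then show up by chance in a group of 60. The simple binomial model, and the assumption that the historical controls are a fair baseline for this strain and study length, are our additions, not claims made by the commenters.

```python
import random

random.seed(0)

def prob_at_least_k(n, p, k, trials=50_000):
    """Monte-Carlo estimate of P(X >= k) for X ~ Binomial(n, p)."""
    hits = 0
    for _ in range(trials):
        # Each of the n animals independently develops an early
        # tumor with probability p.
        x = sum(random.random() < p for _ in range(n))
        if x >= k:
            hits += 1
    return hits / trials

# Assumed background rate: 3 early tumors in 1,284 historical
# control rats (the Huntingdon figure quoted in the comment above).
p_chance = prob_at_least_k(60, 3 / 1284, 3)
print(p_chance)  # well under 1% under these assumptions
```

Under those assumptions the estimate comes out well below 1%, i.e. 3 cases in 60 would be surprising if the historical rate applied; whether that rate is the right baseline, given the strain and the two-year duration, is exactly what the thread is arguing about.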

    • At the time the paper was accepted, a close friend of Seralini’s was on the Editorial Board and involved in the decision to accept.

      g2-c706dd6676549f1e118b7970c1e9b607

      November 29, 2013 at 10:35 pm

    • And I worked for a hardware store some years ago. That doesn’t mean they are now influencing every decision I make about hardware supplies. Maybe he even left disgruntled.

      dawshoss

      August 29, 2014 at 4:12 pm

  2. Publications → promotion and tenure → post-publication peer review → retraction.

    aceil

    November 28, 2013 at 10:32 am

  3. Correction : the parliamentary hearing happened in November 2012, not this month.
    Seralini is all over French Press and even TV. Probably more episodes to come

    Toto Totoro (@TotoroInParis)

    November 28, 2013 at 10:38 am

    • Fixed — thanks.

      ivanoransky

      November 28, 2013 at 10:41 am

  4. Is it standard practice for journals in such situations not to disclose review panel membership? Considering the controversy surrounding the appointment of R. Goodman at FCT, and more generally the fact that anonymous decisions are rarely helpful contributions to controversies, it would have been helpful to know who took part in the review…

    Martin

    November 28, 2013 at 10:49 am

    • I see this in a more conspiratorial light, as much more political and corporate. Imagine: the French parliament was debating the case (as if there were not more pressing financial issues to be discussed). This is unprecedented. What powerful group could actually sway a national parliament to debate a scientific paper? I suspect that this is a nasty kick in the nuts for Monsanto, and I wonder how much corporate pressure is taking place in the background (one need only look at the scandalous manipulation of Prop 37 in California: http://rt.com/usa/monsanto-california-37-measure-182/) to see how corporate giants control science, and governments.

      “As of 2009, sales of these herbicide products represented about 10% of Monsanto’s revenue due to competition from other producers of other glyphosate-based herbicides; their Roundup products (which include GM seeds) represents about half of Monsanto’s yearly revenue.” (http://en.wikipedia.org/wiki/Roundup_(herbicide)). Allow me to repeat: half (50%) of Monsanto’s yearly revenue. A retraction involving one of their products could be devastating for the company’s image, even if the article itself was already damaging to the company. This could be the Bhopal moment for Monsanto and for GMO research (see for my analogy: http://www.theguardian.com/environment/2009/dec/04/bhopal-25-years-indra-sinha).

      As somewhat of a specialist in aspects of plant GMOs myself, I could give a long account of select advantageous cases of plant GMOs. But the corporatization of technologies, the exploitation of developing nations to prop up fake institutes like the WHO and the IMF, and the greed and power plays at the expense of nature, genes and values inherent to all living beings are abhorrent. I was amazed not to see the word Monsanto once in any comments by the editors, but it makes you wonder (even if it is only a conspiracy theory).

      If you had a company, and there was a study (in a fairly respectable journal) that somehow showed that your star product was toxic to rats, and that product was responsible for 50% of your revenue, wouldn’t you just love to get rid of that paper? Just saying, that’s all… maybe Monsanto might like to join the conversation?

      JATdS

      November 28, 2013 at 11:31 am

      • Well said

        JodiKoberinski

        November 28, 2013 at 1:58 pm

      • I now see why Prof. Gilles Seralini’s research findings from his small group of individuals threaten a multi-billion GMO corporation, as JATdS eloquently elaborates.

        GM food was put in a category labeled as “GRAS” – and we now know how that goes (Butter vs. Margarine Wars): http://www.fda.gov/forconsumers/consumerupdates/ucm372915.htm

        And

        http://www.westonaprice.org/know-your-fats/the-oiling-of-america#rise

        Those who don’t believe in the Precautionary Principle can go ahead and feast on GM food, but some of us would rather err on the side of caution, just like the British Parliamentarians (life is a one-time chance, and I would rather bet my life on eating the same food these lawmakers have selected to eat for safety reasons): http://www.dailymail.co.uk/news/article-2345937/GM-food-menu-Parliaments-restaurant-despite-ministers-telling-public-drop-opposition.html

        The following information reveals nothing new. This attack on “the size and the type of rats” used in Prof. Gilles Seralini’s research was raised right after the findings were published, and he rebutted them with the argument that he used the “the size and the type of rats” used by the GM Corporation, except that he went on for 24 months instead of 90 days.

        Why don’t those arguing against him disprove his work by using a large group of rats of their own chosen strain, and let their raw data stand the scrutiny?

        http://gmoseralini.org/professor-seralini-replies-to-fct-journal-over-study-retraction/

        The Food and Chemical Toxicology Editor-in-Chief essentially states that Prof. Gilles Seralini’s research findings prove correlation not causation; in the following:-

        “Unequivocally, the Editor-in-Chief found no evidence of fraud or intentional misrepresentation of the data. However, there is legitimate cause for concern regarding both the number of animals in each study group and the particular strain selected. The low number of animals had been identified as a cause for concern during the initial review process, but the peer-review decision ultimately weighed that the work still had merit despite this limitation.

        Given the known high incidence of tumors in the Sprague-Dawley rat, normal variability cannot be excluded as the cause of the higher mortality and incidence observed in the treated groups.

        Ultimately, the results presented (while not incorrect) are inconclusive, and therefore do not reach the threshold of publication for Food and Chemical Toxicology.”

        samN

        November 28, 2013 at 7:54 pm

        • The rat strain is important exactly because of the length of the study. These rats have a 2-year life expectancy and a natural tendency to develop tumors. Thus, doing a 2-year study means that you will have major mortality and tumor development due to the study length alone. There’s variability in that natural mortality and tumor development, and thus you’d need many more than 10 rats to see the impact of your treatment. If the expected variability in natural mortality and tumor development between the groups is taken into account, the whole study conclusion falls apart, exactly as the proposed notice states. This issue will be known to people working with these rats (and the general problem of doing in vivo experiments with ‘aging’ populations in general), so you don’t need to spend more than 1 hour before realizing that the study design is flawed.

          Now, in a 90-day study this mortality and tumor development is not an issue at all, so Seralini’s supposed rebuttal that he used the same rats is in reality an admission that he does not understand proper study design of animal experiments.

          It’s really not that hard to understand this, so I think some people who defend this part of the study should seriously take a look at their own potential bias.

          Marco

          November 29, 2013 at 6:11 am

          • Obviously the study was underpowered – to say the least. It appears to me that they were looking for an effect on renal function and were somewhat surprised to find a potential effect on tumorigenicity. But that is science; it is sometimes messy. That is not a reason to censor the data – which is what you appear to suggest.

            The rest of your comment is simply incorrect. If you are looking for an effect of increasing tumours then you need a long period and a strain that is susceptible to tumours. Otherwise you will conclude that cigarette smoke is not carcinogenic, since very few cancers will appear after two years in animals not predisposed to developing cancers. Of course, it would be fascinating to plant one half of South America with Round-Up Ready maize and the other half not and see the effects on cancer – it just wouldn’t be ethical.

            If you have any doubt on this matter I invite you to study the methodology of any of the following studies that did just that:

            http://www.ncbi.nlm.nih.gov/pubmed/?term=National+Toxicology+Program%5BCorporate+Author%5D+sprague

            littlegreyrabbit

            November 29, 2013 at 7:06 am

          • “It’s really not that hard to understand this, so I think some people who defend this part of the study should seriously take a look at their own potential bias.”

            I dunno, these guys don’t seem to understand it either.

            http://www.ncbi.nlm.nih.gov/pubmed/?term=National+Toxicology+Program%5BCorporate+Author%5D+sprague

            littlegreyrabbit

            November 29, 2013 at 7:10 am

          • a) Where do I suggest censoring data? I only said the design was bad for the purpose of looking at tumor development. Defending that part is just not right, and in particular the way it is defended.

            b) I think you ignored the issue of the group size that I specifically mentioned.

            Take the newest study in your link:

            http://www.ncbi.nlm.nih.gov/pubmed/21383778

            80 per group, 8 times more than in the Seralini study
            There were also groups of 50 and 30, for very specific experiments where this would not significantly affect the statistical power; e.g., the group with 30 was a shorter experiment (not 2-years)

            How about the second?

            http://www.ncbi.nlm.nih.gov/pubmed/21383777

            For the 2-year study 50 rats per group were used.

            And the third?

            http://www.ncbi.nlm.nih.gov/pubmed/21031006

            Again 50 rats per group.

            So it seems that these people indeed know what they are doing: for proper statistical power you need many more rats for longer study durations.

            Marco

            November 29, 2013 at 11:47 am
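The group-size point in the comment above can be illustrated with a short simulation of how much measured tumor incidence varies between identical groups purely by chance. This is only a sketch: the 70% two-year spontaneous-tumor rate is an assumed, illustrative number, while the group sizes (10 versus the 50 and 80 used in the NTP studies linked above) come from the comments.

```python
import random
import statistics

random.seed(1)

def group_incidences(n_rats, p, groups=5_000):
    """Observed tumor incidence across many simulated groups of
    n_rats animals, each developing a tumor independently with
    probability p."""
    return [sum(random.random() < p for _ in range(n_rats)) / n_rats
            for _ in range(groups)]

# Assumed spontaneous tumor rate over a 2-year study; illustrative only.
p_spont = 0.7
for n in (10, 50, 80):
    spread = statistics.stdev(group_incidences(n, p_spont))
    print(f"n={n}: SD of observed incidence ~ {spread:.3f}")
```

The spread of observed incidence shrinks roughly with the square root of the group size, which is why a difference that looks dramatic between groups of 10 can be indistinguishable from chance variation, while the same difference between groups of 50 or 80 would carry far more weight.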

          • It’s not a red herring when Seralini himself (and many of his supporters) made such a big issue out of the tumors. Care to tell us that if the study was not about tumor development why there were several pictures of the mice developing tumors? Most interestingly, why no pictures of the control group (who also developed tumors)?

            And where is the statistical analysis of the toxicological stuff?

            Marco

            November 29, 2013 at 4:31 pm

          • Jodi, I see Seralini knocks down a lot of strawmen. Using some fancy advanced method that gives some idea of correlations, and then doing only a comparison between two groups, is a problem. It isn’t hard to do all comparisons, but that would likely have raised too many alarm bells about the paper and too little about the outcome.

            Marco

            November 30, 2013 at 1:37 am

          • The Seralini paper also states the extra tumors elicited were different — earlier appearing and faster growing– than the tumors typical of the SD strain.

            Anthony C. Tweedale

            November 30, 2013 at 10:34 am

            • They provided no evidence whatsoever that was true, and the pictures show the mammary adenomas that are extremely common in this strain.

              g2-c706dd6676549f1e118b7970c1e9b607

              December 1, 2013 at 1:51 am

      • In my experience, rat and mouse (and male and female) tumor studies do not predict one another. Why should they predict humans? Are we any more advanced than the Romans reading the entrails of chickens?

        ssy

        December 30, 2013 at 10:27 am

        • You should be careful citing RT for support; many journalists have left that news organization because of heavy influence by the Kremlin. Also, it’s somewhat insulting to the voters of CA to insinuate money magically changed their votes. A more plausible narrative is that many were previously relatively uninformed on the issue of GMO labeling, and the issue being raised led to deeper investigation and education, bringing to light facts and legitimate arguments that eventually swayed the majority against their previously held opinions.

        dawshoss

        August 29, 2014 at 4:20 pm

  5. Hayes is a signatory of the controversial “Letter From the Editors” attacking the EFSA policy towards endocrine-disruptors mentioned here:

    http://www.environmentalhealthnews.org/ehs/news/2013/eu-conflict

    See response of other editors here: http://www.ehjournal.net/content/12/1/69

    It isn’t surprising that he tends to be industry-oriented in other matters as well….

    saijanai

    November 28, 2013 at 11:17 am

    • In addition to the two groups of editors rebutting the traditional toxicologists’ opinion on risk assessment for EDCs, Env Health News carried an analysis by journalist Stephanie Horel finding that 16 of the 18 editors saying “no problem” had financial links to industries producing EDCs… e.g. FCT’s Hayes (Gillette and RJR Nabisco, at least).

      Anthony C. Tweedale

      November 30, 2013 at 10:37 am

    • thanks, this is very useful. Am still interested in knowing whether anonymous reviewing is standard practice though :)

      Martin

      November 28, 2013 at 11:49 am

        • Anonymous reviewing is standard practice in all scientific journals. I am a scientist myself and have many times been on both sides of the review process. Sorry, but that’s nothing you can base any conspiracy theories on… Retracting papers that later turn out to be flawed is standard practice, too. There doesn’t necessarily need to be fraud involved to justify a retraction.

        Raik

        November 28, 2013 at 5:06 pm

          • I wouldn’t agree that anonymous reviewing is standard practice at ALL scientific journals. I’m a geologist and I do some reviewing for journals in that field. In most cases, reviews of my own papers are signed by the reviewers, and I always (at least so far) sign the reviews that I write. There is the option to remain anonymous, but this is usually not done without a compelling reason. Reviewers are commonly publicly identified in the acknowledgments.

          Personally, I’d like to see all reviews signed by the reviewer, but would acknowledge that this comes with its own problems.

          wilsontown

          November 29, 2013 at 4:45 am

        • That’s not what the COPE guidelines call for, as GM Watch pointed out in relation to this retraction: fraud, plagiarism, or unethical behavior (my summary). Uncertainty — the hallmark of science — does not justify a retraction; ergo this is a stitch-up (the hallmark of profit biasing science).

          Moreover, I’ve seen little focus in the responses so far about the massive hypocrisy of FCT journal in not retracting many other papers with the same specific “flaws” Hayes cited in retracting this one.

          There are other arguments justifying the methods used to produce the paper’s results in CRIIGEN’s response to Hayes, which I’ve asked RW to feature equally with the Hayes letter to CRIIGEN.

          Anthony C. Tweedale

          November 29, 2013 at 10:36 am

        • There are two issues, I guess. What journals do in the normal review process (where there are, in my opinion, arguments both ways for reviewers being anonymous, although I’d prefer that they weren’t), and what should happen in the case of retraction of a paper. I would have thought the process of retraction should be as transparent as possible. If we can’t know who was on the review panel, we should at least have access to their detailed reasoning.

          As Anthony says above, the justification given here is not enough for retraction of the paper. If it was, then it would be impossible to keep track of which papers were still officially current as there would be so many retractions.

          Note that I make no comment here about the science in the Seralini paper, I’m only talking about how the process of retraction ought to work.

          wilsontown

          November 29, 2013 at 11:27 am

  6. Ideological bias + small sample size–works half the time.

    Charlie

    November 28, 2013 at 11:27 am

    • don’t forget! No double-blind controls, and effect measured by an “expert” gauging tumor size with the “hand palpation method”.

      Seriously, the paper is flawed beyond belief. But – I think retraction is inappropriate. Let the paper stand as an example of atrocious methodology.

      Yonemoto

      December 1, 2013 at 8:54 pm

        • That’s what retraction is for. The article usually stays online, while bearing a “retracted” notice.

        Gabriel HMIMINA

        December 4, 2013 at 5:18 pm

  7. Seralini did not try to have journalists sign an agreement. They DID sign. The article was sent to journalists covering the environment; neither science journalists nor medical journalists had the chance to read it before publication.
    And Seralini had a deal with a weekly, which published pictures of 3 rats harbouring abdominal tumors, supposedly from the treated group. The best way to convince readers of the results!

    JD Flaysakier

    November 28, 2013 at 11:34 am

    • And he also released a book telling the whole story at the same time. This guy knows how to create buzz. Not sure that these practices show a good image of the way science works to the general public.

      I’m not judging the validity of the study, just the communication around it…

      Deillevid

      November 29, 2013 at 6:42 am

      • Seralini has published a number of paperbacks scaremongering about GMOs, so he is obviously highly motivated to convince people they are dangerous in order to increase his own royalties, yet his supporters fail to see this as a vested interest.

        g2-c706dd6676549f1e118b7970c1e9b607

        November 29, 2013 at 10:39 pm

  8. This retraction is entirely politically motivated and is an affront to the scientific tradition. FCT had no grounds, under the integrity guidelines it is a signatory to, to pull this study or the Brazilian paper also critical of GMOs…

    Funny… Dr. Goodman, a pro-biotech voice with ties to Monsanto, was appointed to the brand-new biotech assistant editor role after the Seralini paper was published… don’t be mistaken: industry apologists wasted not 24 hours before critiquing the work and calling for the journal editors’ heads. Rather than cave, the journal printed Seralini’s line-by-line rebuttal of the straw-man arguments and weak criticisms of the study, standing behind its decision to publish. Did you know Seralini’s results were so sure to cause controversy that the journal had 5, not the standard 3, panelists review the paper prior to publishing?

    It seems Seralini’s biggest fault was failing to conclude what his study was never designed to conclude: his work simply showed that toxicological effects were statistically relevant after 90 days. Period. This ought to be a launchpad for serious scholarship and scientific inquiry! Real scientists with science and not propaganda as their agenda would be curious about the work.

    Seralini has registered his raw data with a notary. He has offered to release it for full scrutiny the minute Monsanto agrees to do the same. No: if this were about science and not money, we would be having a discourse that doesn’t involve throwing every scientist brave enough to question the bought-and-paid-for status quo under the bus…

    JodiKoberinski

    November 28, 2013 at 12:46 pm

    • Why should Monsanto have to sign something? Refusing to disclose raw data when other scientists have requested it is not good.

      elnauhual

      November 28, 2013 at 1:26 pm

    • Monsanto data is routinely in the public domain – you can see all of it on the web site of Food Standards Australia New Zealand for every GM crop approved for sale in Australia and NZ. The Seralini study was retracted because it is bad science: poorly conducted, using the wrong model, over-interpreting random biological noise to suit a particular ideological bias, and slipped into FCT through highly questionable procedures. Rats are a poor substitute for an HPLC.

      If any of the tripe written about the dangers of GM were true, then conventional mutagenesis-based breeding would be many orders of magnitude more dangerous than GM. This is a political debate, and the fact that some of the political activists wear lab coats doesn’t make their position science. FCT have shown admirable ethical and scientific courage to admit their mistake and make the correct choice. We should not forget how many children have been killed or disabled because of the fanciful paper on vaccination causing autism that was allowed to remain in the Lancet for so many years, or the number of the most disadvantaged children on earth who will be hurt by the anti-golden-rice nutters.

      AB

      November 28, 2013 at 6:22 pm

      • If it is “routinely in the public domain”, I am curious why Seralini had to sue in Germany for access to the NK603 study used for the original approvals in the EU? It took 6 years. And why won’t Monsanto release its raw data to the science community? Everyone has asked Seralini for his, which sits at a notary waiting for cooperation and release… Plus Monsanto (and all GMO patent holders) have a ridiculous agreement that must normally be signed if one is to do research on their patented technology, which includes clauses restricting publication of results… so I disagree that Monsanto’s work is transparent. In 2009 Seralini published a study in which he used the Monsanto design he sued for access to as the starting point and simply repeated it. That study was published in the International Journal of Biological Sciences: de Vendômois JS, Roullier F, Cellier D, Séralini GE. A Comparison of the Effects of Three GM Corn Varieties on Mammalian Health. Int J Biol Sci 2009; 5:706-726. Available from

        http://www.biolsci.org/v05p0706.htm

        JodiKoberinski

        November 29, 2013 at 4:39 am

  9. For anyone interested in an evidence-based, reasoned discussion regarding GMOs, I invite you to GMO Skepti-Forum: https://www.facebook.com/groups/GMOSF/

    A good place to get started is our headliner threads which discuss a range of common issues of GMO discussion: https://www.facebook.com/notes/gmo-skepti-forum/links-to-headline-topic-threads/309170252555565

    Further, we have an ongoing thread discussing Seralini and collecting resources for the discussion: https://www.facebook.com/groups/280492318756692/permalink/309844945821429/

    The goal of GMO Skepti-Forum is to promote reasoned discussion of genetically modified organisms and anything that might help people discuss issues of GMOs and their roles in society. The forum is set up to answer questions, provide information, evaluate sources, and practice skepticism. Discussion should focus on facts, credible sources, and scientific literature. For a productive discussion, each person should adopt the principle of charity and help create an open atmosphere encouraging a mutual exchange of ideas. The forums are a collective puzzle solving activity rather than an arena of gladiators vying to defeat opponents. Some puzzle pieces might not fit so well, but flipping the table isn’t going to help anyone see the bigger picture.

    Knigel

    November 28, 2013 at 1:06 pm

  10. Funny how nobody has looked at the paper to see whether it is really rubbish or not! It is not my field, but the numerous letters to the Editor pointing out serious scientific weaknesses suggest to me that there are problems with this work besides the possible intervention of interested parties.

    Toby

    November 28, 2013 at 1:47 pm

    • I read it, back when it came out, at the request of a college buddy. It was really underpowered. We do need to do experiments like these, but we also need to be thorough about our experimental designs and have patience while the results accumulate. Animal work is time-consuming to perform, and must have ethical oversight as well.

      Allison (@DrStelling)

      November 28, 2013 at 1:53 pm

      • I also read it… this is the longest-running trial of its type, simply the most thorough work to date. Seralini has a little-talked-about 2009 study where, after suing Monsanto for open access to its seeds, Seralini repeated the research as designed by Monsanto and compared his results: Monsanto did not even follow its own design guidelines, which in his conclusions Seralini identifies as designed so poorly as to miss a medium-to-major impact 40 percent of the time. Despite the design’s 90-day cut-off, Seralini’s study showed impacts on all major systems and concluded the 90-day cut-off was too short to be reliable for safety. From there, he started the study in question. Just because someone says his science is poor doesn’t make it so. Just because the study is not conclusive doesn’t make it bad science. On the contrary, this work achieved its design goal, and its conclusions were that further study was required and that 90 days is inadequate to catch toxicological impacts.

        JodiKoberinski

        November 28, 2013 at 2:05 pm

        • I’m not an expert in this area (animal testing). But if I look at the statistical analysis, I can’t find a good correlation between tumors and diet. And anyone who takes a little time (like 5 minutes) to get to know Sprague-Dawley rats will find a paper (Prejean et al., 1973) to compare to Seralini’s, and find out that the tumors occur spontaneously.

          LM

          November 28, 2013 at 3:14 pm

          • SD rats have large numbers of tumors in the second year of life, but tumors in the first year of life are almost completely unheard of. Up until Seralini’s study, the specific type of tumor reported in month 4 of his study had occurred in 3 male SD rats in a historical survey of 850 male rats in various control groups in various studies. Taking the aggregate of all the experimental animals, there were 3 out of 60 male experimental animals with tumors, compared to 0 out of 10 male control animals, in the Seralini study. 3 out of 850 vs 3 out of 60. Suggestive, I think.

            saijanai

            November 28, 2013 at 4:03 pm
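
The 3-in-850 versus 3-in-60 comparison above can be put on a rough statistical footing with a one-sided Fisher exact test. A minimal sketch (the function, and the pooling of the historical survey into a single control group, are my assumptions, not part of the comment):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    probability of seeing `a` or more events in the first row under the
    null hypothesis of equal rates (hypergeometric upper tail)."""
    row1 = a + b          # size of first group
    col1 = a + c          # total events across both groups
    n = a + b + c + d     # total animals
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# treated group: 3 tumors out of 60; historical controls: 3 out of 850
p = fisher_exact_one_sided(3, 57, 3, 847)
print(p)  # well below 0.05
```

With these counts the tail probability comes out well under 0.05, which is the sense in which the numbers are "suggestive"; whether historical controls can legitimately be pooled this way is a separate question the thread goes on to argue about.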

          • And if one dug just a little deeper, one would see Monsanto used only 2 doses (11 and 33 percent), which is statistically invalid. Seralini added a 22 percent line. Your logic would also render ALL industry studies irrelevant for proving safety, as this rat strain and sample size are the absolute norm. He literally used the very same parameters accepted as standard for this kind of work. And the study was designed to test for toxicological effects post 90 days, not to “prove” cancer. The study showed a lot more than tumours: enlarged and damaged organs, hepatic system impacts, lymph, reproductive system damage…

            JodiKoberinski

            November 29, 2013 at 4:46 am

          • Few who rant on about the spontaneous tumors of SD rats acknowledge the obvious fact that the negative controls were SD rats too… as with all experiments, what counts is the relative increase after exposure! So thanks for this data!

            As others are pointing out, it stinks like a rotten fish to retract, when science normally settles questions with more communication, instead of trying to censor. Money?

            Anthony C. Tweedale

            November 29, 2013 at 10:55 am

          • That’s not correct, Anthony Tweedale.

            It is probable, not just possible, that the control group by chance happens to develop fewer tumors than the other groups. This is the nature of small groups. For example, if you throw a coin 100 times, there will likely (if I remember correctly, P is around 0.5) be a sequence in which you throw heads six times in a row. If you look at 10 throws at a time, you could thus easily see the proportion of heads vary between 40% and 60%, and a run of seven or even 8 of the same outcome in such a small group has a significant probability too. There is also variability in the group of 100 throws, but not nearly as much.

            This means that relative increase does not mean anything, unless the probabilities that such increases are by chance are taken into account. For that the study simply lacks statistical power.

            Marco

            November 29, 2013 at 11:59 am
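
Marco's coin-flip figure can be checked with a quick Monte Carlo sketch (an illustration only; the function name and parameters are mine, not from the comment):

```python
import random

def prob_run_of_heads(n_flips=100, run_len=6, trials=20000, seed=1):
    """Estimate P(at least one run of `run_len` consecutive heads
    somewhere in `n_flips` fair coin flips) by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        cur = 0  # length of the current run of heads
        for _ in range(n_flips):
            cur = cur + 1 if rng.random() < 0.5 else 0
            if cur >= run_len:
                hits += 1
                break
    return hits / trials

print(prob_run_of_heads())  # close to 0.5, as the comment estimates
```

The small-group point follows directly: with only 10 animals per group, imbalances of this size arise routinely by chance, so a deficit of tumors in a 10-rat control group carries little evidential weight on its own.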

          • Fair point on statistical significance, Marco. I know the study’s statistical analysis has been critiqued extensively… my uneducated impression is that the elevations in tumor and other pathology risks were significant. But they reported, and it has not been rebutted, that the tumors were qualitatively different from the controls’. In that case, the comparison is ‘n vs. 0’ (assuming no controls had such lesions): significant for sure, given the group sizes?! And what about S.’s claim that you can aggregate effects and exposure groups to increase power, and that the results were then even more significant?

            Anthony C. Tweedale

            November 30, 2013 at 10:56 am

          • Actually, you cannot just aggregate groups to increase significance. It’s the large uncertainty in the control group that rules it all, and you don’t change that (in fact, it is mostly unknown).

            Regarding the “qualitatively different” – I have yet to see evidence of that. It’s funny to have all those rats-with-tumor pictures of several treated groups, but no picture of the control, and no real analysis of those supposedly “qualitatively different” lesions. I see a distinct possibility for the experimenter bias here: those in the lab expect to see something, and therefore indeed see it.

            Marco

            December 2, 2013 at 1:29 am

        • What I find interesting is that none of the pro-Seralini comments that I have read thus far have mentioned the odd fact that it is claimed that both Round-Up and the transgene produced the same effects. That is like finding that fire and the water used to put it out do the same damage, and not bothering to at least scratch your head and wonder how that is possible. To me, that alone seems to point to the natural tendencies of the S-D rats rather than to the experimental conditions. And no, I have no ties to Monsanto.

          Sam Harris

          December 4, 2013 at 10:06 pm

    • Who leveled criticisms? Who did they work for? Who defended the paper? Who did they work for? (Remember, the actual editor, not the parachuted-in assistant biotech editor appointed after the journal refused to cave to the bullying, published Seralini’s line-by-line rebuttal to the critique.) One famous critique is the rat strain: not OK in Seralini’s work, although it is the very strain used in Monsanto’s research. Or the other popular dismissal, too few rats analysed… which, if the study were a 10M euro study looking at the connection between specific health outcomes and GMO/Roundup cocktails, would be legit criticism. As this study was about proving toxicological impact post 90 days, the 3M euro study was sufficient to prove that yes, toxicological impacts are observable post 90 days. The criticism is politically/financially motivated; we ignore Seralini’s work at our own peril.

      JodiKoberinski

      November 28, 2013 at 1:54 pm

      • Well put… Seralini was trying to replicate the original design of the 90-day Monsanto feeding study, but continue it for a two-year period. That’s why he chose that strain of rat and that number of animals per cell. If these are shortcomings in Seralini’s study, they are also shortcomings in the original Monsanto feeding study. Ironically, prior to this politicized “retraction”, the European Food Safety Authority (EFSA) had just validated Seralini’s experimental design and protocols (with a few minor changes) as the “standard” for long-term GMO feeding studies going forward. Here’s Seralini’s account of this action: http://gmoseralini.org/seralini-validated-by-new-efsa-guidelines-on-long-term-gmo-experiments/

        Grant Ingle

        November 28, 2013 at 4:11 pm

        • The use of this strain of rat in a 90-day study is valid. Its use in chronic studies, however, is wrong, because the strain has such a high background rate of tumours – which is not relevant for a 90-day study. In any case, whole-food studies are inherently under-powered toxicologically, are unscientific and are therefore unethical under the EC’s own animal-ethics laws. A rat is a poor substitute for an HPLC. These types of studies are essentially hypothesis-free and employ thousands of data points. Statistics 101 tells us that if you have 100 parameters and use a decision point of p<0.05 then you will expect about 5 statistically significant differences purely by random chance – and surprise, surprise, this is roughly what Seralini "achieved".
          No study in recent history has attracted so much criticism from so many peak scientific bodies for so many basic scientific and interpretative flaws. The only aspects of the retraction that are open to criticism are that it took so long to come and has been so heavily edited by the lawyers.

          AB

          November 28, 2013 at 6:37 pm
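
The multiple-comparisons arithmetic in the comment above can be illustrated with a minimal simulation (a sketch under the textbook assumption that p-values are uniform on [0, 1] when every null hypothesis is true; the function name is mine):

```python
import random

def mean_false_positives(n_params=100, alpha=0.05, trials=10000, seed=42):
    """Average number of 'significant' results among n_params independent
    comparisons when there is no real effect anywhere, i.e. each p-value
    is drawn from Uniform[0, 1] and declared significant if p < alpha."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n_params) if rng.random() < alpha)
    return total / trials

print(mean_false_positives())  # close to n_params * alpha = 5.0
```

So with 100 endpoints tested at p < 0.05, roughly 5 chance "hits" are expected in a study with no real effects at all; whether a study's tally of significant findings exceeds that baseline is the substantive question.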

      • EFSA’s statement on Séralini et al 2012: “Considering that the study as reported in the Séralini et al. (2012) publication is of inadequate design, analysis and reporting, EFSA finds that it is of insufficient scientific quality for safety assessment.” Complete document at http://www.efsa.europa.eu/en/efsajournal/pub/2910.htm.

        JAJ

        November 28, 2013 at 4:21 pm

    • I have read the paper several times and as a double Board-certified toxicologist with years of experience in running studies in rats I can confirm that the paper is absolute rubbish and should never, ever have been published in the first place.

      g2-c706dd6676549f1e118b7970c1e9b607

      November 29, 2013 at 10:41 pm

      • Can you explain the glaring difference between Seralini’s study and the Monsanto study that it was meant to replicate, other than the fact that Monsanto’s study was 13 weeks while Seralini’s study was 2 years?

        There’s a HUGE difference in the protocols used.

        saijanai

        December 4, 2013 at 6:21 am

        • The incompetence of the interpretation, and the selective and inappropriate way the data are reported. I wrote several pages on this when the paper originally came out. I see no point in explaining it to people who have a closed mind on this subject.

          g2-c706dd6676549f1e118b7970c1e9b607

          December 4, 2013 at 3:37 pm

          • >>Can you explain the glaring difference between Seralini’s study and the Monsanto study that it was meant to replicate, other than the fact that Monsanto’s study was 13 weeks while Seralini’s study was 2 years? There’s a HUGE difference in the protocols used.

            >The incompetence of the interpretation, and the selective and inappropriate way the data are reported. I wrote several pages on this when the paper originally came out.

            That’s not part of the study protocol, but part of the way the report is written.

            The change that Seralini made is very glaringly obvious.

            saijanai

            December 4, 2013 at 5:32 pm

        • Indeed. The protocol used in the Seralini study makes no sense.

          Gabriel HMIMINA

          December 4, 2013 at 5:27 pm

          • >>Can you explain the glaring difference between Seralini’s study and the Monsanto study that it was meant to replicate, other than the fact that Monsanto’s study was 13 weeks while Seralini’s study was 2 years? There’s a HUGE difference in the protocols used.

            >Indeed. The protocol used in the seralini study makes no sense.

            All these so-called experts, and not one can answer a trivially obvious question.

            saijanai

            December 4, 2013 at 5:34 pm

          • The mere idea of using the same strain in a 2-year trial with FEWER rats per group is stupid. Even if it’s “only” for toxicological evaluation.

            Moreover, even if he didn’t plan to test for carcinogenicity at first, he finally did. What he planned is irrelevant: what matters is what he did and what he claims.

            Gabriel HMIMINA

            December 4, 2013 at 6:48 pm

    • The most astonishing thing about this study is the discussion section, where hypotheses magically become facts.

      Gabriel HMIMINA

      December 4, 2013 at 5:23 pm

  11. Thank goodness. This paper has been a centerpiece of fear mongering around marginal data and sensationalized conclusions. The three lumpy rats in Figure 3 without a CONTROL are a clear indicator that this is a political statement, not science. There is no data from the tumor-laden, suffering rats. It is all shock value, especially when Table 2 shows that control rats got tumors too!

    The journal has a responsibility to referee what it publishes. If reviewers and editors fail, it is good to see that the journal and history will correct it.

    Kevin Folta

    November 28, 2013 at 1:53 pm

  12. Small sample size is not a ground for retraction – it is a challenge to the industry to fund an arm’s-length study of sufficient sample size to refute these preliminary conclusions.

    It is breathtaking that in cases of ACTUAL fraud it takes endless begging and pestering to get a journal to take action, and the journal will always try to pass the buck; yet when there is no fraud, only an embarrassed multinational, then instead of taking the scientific path and proving the paper wrong, they use their muscle to censor the offending paper.

    Sadly there are too many scientists either too dependent on patronage or too in love with their technology that cheer this abuse of process on.

    littlegreyrabbit

    November 28, 2013 at 5:06 pm

    • There was a counter-article accusing him of abusing the animal-ethics code.
      It is animal abuse to use this type of rat for a longer research period just to get a scary picture for publication.
      This type of rat should only be used for short-term research (i.e., 3 months).
      Even if he wanted to see whether a tumor formed, this should have been detectable much earlier (before 6 months, for example) by doing a biopsy showing the localized tissue changes, and the rat euthanized before the real tumor developed and caused it pain. But evidently the idea was not to show something to scientific people, but to give the less-educated public a show of how scary a swollen rat looks, despite breaching bioethics.

      Yoyo

      November 28, 2013 at 6:22 pm

      • Saying it is so doesn’t make it so.
        Firstly, it is completely wrong to say that 2-year studies in this strain of rats don’t take place – they do, and by the National Toxicology Program no less:

        http://www.ncbi.nlm.nih.gov/pubmed/21383778

        Secondly, there is no mention of animal ethics in the proposed retraction notice, so it is completely beside the point.
        Thirdly, if animal ethics is a concern, it should first be taken up with the institutional ethics committee that approved the study. They then need to determine whether their ethics guidelines were adequate and whether they were adhered to – only in the case that the guidelines were not adhered to should they consider whether the breach was of significant enough magnitude to warrant contacting the journal. The journal should only act after any ethics concerns have been referred to the institution and the institution has got back to them.

        Some people seem to be laboring under the misapprehension that the millions of scientific publications are brimming with strongly positive findings and that everything else is rejected. That, of course, is not the case. If we want to cut down on the incidence of scientific fraud, we need to publish not just the strongly positive but also the negative or equivocal data. The data that Seralini presented are the data he got; there is no reason for them to be hidden or banned from the scientific community. This is the 2nd under-powered study to suggest a correlation (not reaching statistical significance) between Round-Up Ready transgenic plants and tumour incidence; the other one was with soybeans. Seralini also presented a potentially plausible explanation – namely, that by placing the transgene under a strong constitutive promoter they uncoupled the phenolic amino acid biosynthesis pathway from biological control, resulting in perturbations in the levels of plant secondary metabolites.

        Food and Chemical Toxicology may wish to take an editorial line that they only publish strongly positive findings – but that needs to be applied BEFORE publication, not afterwards.

        littlegreyrabbit

        November 28, 2013 at 9:00 pm

        • The NTP also used Fischer 344 rats for many years despite the fact that they have a spectrum of background tumours of their own. The fact that the NTP use a given strain of rat is no indication that it is a good choice. I have personally run NTP studies and I think their study designs are generally poor and that their studies are not really GLP, although they claim they are. They won’t allow the USFDA to audit their studies to determine whether they are GLP or not. As a study director of NTP studies I found that if I observed clinical signs that were not on their limited list of permitted observations, I was not allowed to record or report the findings. NTP studies are not trustworthy.

          g2-c706dd6676549f1e118b7970c1e9b607

          November 29, 2013 at 10:44 pm

          • NTP assays untrustworthy… that’ll be a shock to many researchers! GLP = red herring. As you know, GLP is not a test method; it is only a set of lab practices to enhance reproducibility: an accountable party, transparent and detailed records, etc.

            You seem to have conflated GLP with the actual test methods of the OECD, the Test Guidelines (TGs – some countries, such as the USA, use related test methods, but all coordinate). This is natural because both are called for in many laws, regulations and agency guidances. In fact both are de rigueur for pre-market risk assessment (RA), which has the MASSIVE effect of excluding academia’s results from most RA… leaving the manufacturer’s studies as the basis for finding the safe dose!

            As the S. paper indicates, the TGs are insensitive… you can ALWAYS falsify a TG-based NOAEL with a PubMed-found LOAEL… once the agent has been studied enough by independent academics. But the tragedy is that RA sails on, oblivious of the biosphere and its sensitive organisms, due to the subtleties of biochemistry!

            Anthony C. Tweedale

            November 30, 2013 at 11:13 am

            • The NTP approach follows neither the OECD nor the FDA guidelines; they have their own. For example, they dose only on weekdays, 5 days a week, which is appropriate for industrial chemicals but completely inappropriate for the many dietary contaminants, household chemicals and alternative medicines they have studied over the years.
              My colleagues in risk assessment generally agree that there are problems with NTP studies. Having said that, I thought they were probably meaningful until I actually worked as a study director running NTP studies, at which point I realized that they are deeply flawed.

          • I’d like to hear more about the failures of the NTP assay methods – it’s not too off-topic, as the Seralini paper was based on the competing TG methods. I know NTP methods are sometimes criticized, but I have a suspicion some critiques arise because they don’t follow TG methods… which are irreducibly insensitive and so not suited for RA.

            Anthony C. Tweedale

            December 1, 2013 at 7:25 am

    • Indeed, sample size is not a ground for retraction in itself. But in this case, the incredibly small sample size and the resulting interpretation of artefacts IS an error. It’s a good ground for retraction.

      If such an article, whose conclusions are ALL false, isn’t retracted, what’s the point?

      Gabriel HMIMINA

      December 4, 2013 at 5:33 pm

      • >>Small sample size is not a ground for retraction – it is challenge to the industry to fund an arms length study of sufficient sample size to refute these preliminary conclusions. It is breathtaking that when there are cases of ACTUAL fraud the amount of begging and pestering to get the journal to take action [...]

        The control group data for Seralini’s study and Monsanto’s study using the same rats fed the same general feeds could be compared. That seems worth doing.

        saijanai

        December 4, 2013 at 5:36 pm

        • Using proper methods is the challenge of all scientists, not just the industry.

          When you say “preliminary conclusions”, what are you referring to? There’s no valid conclusion in the article.

          The Seralini control group has already been compared to historical data, and it is an artefact. See the HCB analysis.

          Gabriel HMIMINA

          December 4, 2013 at 7:17 pm

  13. So, it seems I have been doing science wrong all these years. I was under the impression that if I see a study I disagree with, I work out what I believe are the flaws in the study, and design my study to show why the authors got the original result they did – then publish the results of my study explaining that the original study got the results it did because of a, b, c etc.

    PWK

    November 28, 2013 at 9:07 pm

    • Exactly!

      DEUS ex MACHINA

      November 29, 2013 at 3:18 am

      • http://gmoseralini.org/professor-seralini-replies-to-fct-journal-over-study-retraction/#respond

        Double standards

        A factual comparative analysis of the rat feeding trial by the Séralini’s group and the Monsanto trials clearly reveals that if the Séralini experiments are considered to be insufficient to demonstrate harm, logically, it must be the same for those carried out by Monsanto to prove safety. Basically, all previous studies finding adverse effects of GE crops have been treated by regulators with the attitude: only those studies showing adverse effects receive a rigorous evaluation of their experimental and statistical methods, while those that claim proof of safety are taken at face value. All studies that reported no adverse effects were accepted as proof of safety regardless of these manifest (but deemed irrelevant) deficiencies of their methods.

        Scientists Debate New Study on GMO-Fed Pigs
        Study highlights issue of GM seed research restrictions
        By James Andrews | June 13, 2013

        http://www.foodsafetynews.com/2013/06/study-says-gmo-feed-may-harm-pigs/#.UphLn2OxE1h

        bedsidereadings

        November 29, 2013 at 3:59 am

      • Yes, so old-fashioned, your approach of building a body of science! Dr. Weaver says we used to have “evidence-based decision making”… we now have “decision-based evidence making”, and this retraction is a biting example. Such actions by journals like FCT do nothing to repair the increasingly tarnished image of “science” as nothing more than a tool of the elite and monied, and they fuel the “anti-science” flames amongst a rightly concerned public who see “science” being used to further a corporate agenda while legitimate questions about the premature release of this very new technology into the ecosystem go unanswered…

        JodiKoberinski

        November 29, 2013 at 4:14 am

    • Studies of this kind cost hundreds of thousands of dollars so unless you are bankrolled by Greenpeace, as Seralini’s CRIIGEN Institute is, you won’t be able to afford them.

      g2-c706dd6676549f1e118b7970c1e9b607

      November 29, 2013 at 10:46 pm

      • Seriously, NGOs have bigger bank accounts than for-profit multinationals? I’ve seen enough of the former and read enough of the latter – rubbish.

        Anthony C. Tweedale

        November 30, 2013 at 11:17 am

        • I am responding to PWK’s comment about running one’s own study. This is a strawman argument.

          g2-c706dd6676549f1e118b7970c1e9b607

          December 1, 2013 at 1:46 am

        • This study was funded by Carrefour, which has a bigger bank account than nearly everyone. Monsanto included.

          Gabriel HMIMINA

          December 4, 2013 at 7:18 pm

    • Not even needed in this case. The letters to the editor were enough to disprove the whole study.

      And come on: we can’t throw 3 million euro and 3 years down the gutter every time some random scientist submits such an abuse of science.

      Gabriel HMIMINA

      December 4, 2013 at 5:36 pm

  14. There seems to be an unusual amount of traffic here, and particularly unusual is the amount of thumbs up and down, mostly in favor of views supporting Monsanto-related interests. Unfortunately, some people here seem to think that different standards apply to independent studies and to studies conducted by commercial interests. It seems to be difficult for those people to grasp that once you start criticizing this study, the original studies by Monsanto also suffer, due to the same strain of rats being used, as do a large number of other studies. All research equal, some research more equal than others?

    Bystander

    November 29, 2013 at 3:57 am

    • Having now looked at the paper, I have formed the opinion that the conclusions are not supported by the evidence provided. I believe this paper should never have been accepted for publication. But, having passed the scrutiny of the reviewers and been published, I think it is time to disprove (or not) its findings with hard evidence. The case is analogous to the arsenic-eating bacteria paper in Science not long ago: probably flawed and indecently hyped by the authors themselves. It took, however, carefully conducted experiments to put the hypothesis to rest. This is what should happen with the claims of Seralini’s group. The cooperation of the industry should of course be total in terms of availability of materials, research findings etc., otherwise we might come to the conclusion that some of the people commenting here are being persecuted rather than being paranoid!

      Toby

      November 29, 2013 at 8:52 am

    • The retraction notice is very poorly worded. Based on it, I also thought there had been a miscarriage of justice. But I have now read the paper and correspondence carefully, and they justify the key sentence in the retraction: “… the raw data revealed that no definitive conclusions can be reached … regarding the role of either NK603 or glyphosate in regards to overall mortality or tumor incidence”. In other words, there are no statistically significant effects. This should have been checked by the reviewers – it was at least likely, from looking at the results as presented in the paper, that the results were not statistically significant. The reviewers should have insisted on a statistical analysis in the initial review – at which stage, I strongly suspect the paper would have been withdrawn.

      The retraction notice goes on to talk about the sample size and characteristics of the Sprague-Dawley rat. The explanation is clunky, to say the least. The structure of Seralini’s trial (size, cancer susceptibility of SD rats) was such that it was only likely to show statistical significance if the effects were very large – they weren’t, so it didn’t. But that was Seralini’s risk in doing the work that way – it’s not part of the reasons for retraction (so it would have been better left out of the retraction notice). I’m sure Seralini realises by now that he should have used larger samples, especially in the control group; had he done so, we might not be having this debate – because he would have found either incontrovertible associations, or none.
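      A quick sketch makes the power problem concrete: with groups of 10, even a 30%-vs-0% difference in tumor incidence fails Fisher’s exact test (the counts below are hypothetical, chosen only for illustration, not taken from the Seralini data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d
    # Hypergeometric probability of a table with k in the top-left cell,
    # holding the row and column totals fixed
    def p(k):
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = p(a)
    # Sum over all tables with the same margins that are no more likely
    # than the observed one
    return sum(p(k)
               for k in range(max(0, row1 + col1 - n), min(row1, col1) + 1)
               if p(k) <= p_obs + 1e-12)

# Hypothetical: 3/10 tumors in a treated group vs 0/10 in controls
print(fisher_exact_two_sided(3, 7, 0, 10))  # ~0.21, far from significance
```

      With groups that small, only a very large effect could reach p &lt; 0.05, which is exactly the point about Seralini’s sample sizes.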

      Regarding the Hammond et al paper, its claim was that there was no substantial metabolic difference between the various groups of rats at three months; that seems also to have been clearly true. Since there’s no way to statistically prove such a negative, statistical criticisms aren’t relevant. On the other hand, Seralini’s work shows that there might well have been differences if the rats had been followed longer, and hence at least casts some doubt on the relevance of the paper. In my view, there should be some kind of notice on Hammond et al. regarding this issue, noting that there is some weakly suggestive evidence of effects later in life.

      Bob M

      November 29, 2013 at 9:30 am

      • Are you familiar with the way GMO rodent toxicity studies are done, compared to the way chemical toxicity studies are done? If you have read any of the studies done by Monsanto, you should have noticed the 6 auxiliary control groups, against which results were tested in case a significant difference was found against the actual control group.

        You should also be aware that in chemical rodent toxicity studies, statistical significance is generally considered “low man on the totem pole” as far as importance in determining potential toxicity while with GMO rodent toxicity studies, ONLY statistical significance is considered.

        saijanai

        November 29, 2013 at 6:41 pm

        • What’s the issue with the multiple control groups? Indeed, they compared the treated group to multiple control groups in order to assess the variability due to different non-GM lines.
          It’s properly explained, and relevant.

          Gabriel HMIMINA

          December 4, 2013 at 7:22 pm

      • Well, this part of the article was also dubious: the observed significant differences were probably due to an artefact (an unusual control group). See the French HCB report; they redid the analysis using historical data to describe the strain characteristics.

        Gabriel HMIMINA

        December 4, 2013 at 5:42 pm

    • What about the fact that Seralini’s CRIIGEN Institute is heavily funded by the vehemently anti-GMO organisation Greenpeace?

      g2-c706dd6676549f1e118b7970c1e9b607

      November 29, 2013 at 10:47 pm

      • Interesting, where can we find more information on this?

        Bystander

        November 30, 2013 at 8:15 am

      • “There is nothing interesting about the development of mammary adenomas in Sprague Dawley rats, and there is never any excuse for inhumane treatment of laboratory animals.”

        When tumors show up in SD rats before 6 months, it is always of interest, no matter which sex, no matter what kind of tumor:

        See table 1 and table 2:

        Early Occurrence of Spontaneous Tumors in CD-1 Mice and Sprague–Dawley Rats

        http://tpx.sagepub.com/content/32/4/371.full.pdf

        saijanai

        December 4, 2013 at 6:35 am

        • What I meant was, there is nothing of interest in the way they grow that justifies keeping the rats alive to the point at which they cannot move normally and the tumors are getting abraded. That is downright inhumane. Seralini et al should not be allowed to do live animal studies if that is the way they treat rats. For that matter, why is Seralini doing live animal studies at all, when his qualifications are in molecular biology?

    • yep paid shills abound…
      “Surging Disinformation Analysts Commenting On Your Favorite Websites To Emotionalize and Antagonize
      As reported by Sustainable Food News, more than 50 of these front groups, working on behalf of food and biotechnology trade groups, have formed a brand new alliance called Alliance to Feed the Future. The stated aim of the alliance is to “balance the public dialogue on modern agriculture and large-scale food production.” The Alliance’s effort appears to be an attempt to squelch the growing consumer perception that modern food production can have a negative impact on the health of humans and the environment as espoused by the organic and sustainable food movement.”

      http://fromthetrenchesworldreport.com/surging-disinformation-analysts-commenting-on-your-favorite-websites-to-emotionalize-and-antagonize/45848

      susanstop

      December 1, 2013 at 2:34 am

  15. Everything about this retraction is quite extraordinary — not least the crass and heavy-handed manner in which it has been done. A black day for science and a black day for this particular journal, which now becomes a laughing stock. When Prof Seralini refused to withdraw the paper, the Editor put out a statement which makes the following clear:

    1. There is no fraud, no misinterpretation, and no misrepresentation in the paper. In other words, the research was conducted meticulously, and the data support the conclusions.
    2. Seralini cooperated fully in making his data sets available to the strange FCT “kangaroo court”, which in spite of inbuilt hostility found nothing amiss.
    3. There has been no new experimental evidence which brings into question the Seralini findings. If there had been, that might have constituted grounds for a retraction. But no follow-up study has been done — I wonder why? (Shades of the infamous Pusztai affair, following which Pusztai’s results still stand because everybody is scared to death of replicating his experiment…)
    4. The Editor’s criticism appears to be levelled not at Seralini and his team, but at the paper’s reviewers! It is nothing short of scandalous for a paper to be retracted because of shortcomings in the review process….
    5. In the very careful wording of the retraction statement, it is apparent that the Editor still thinks that the Seralini study was a carcinogenicity / mortality study. It was not. It was a long-term toxicology study, as Seralini has tried to point out over and again. Are these supposedly top scientists incapable of understanding the difference between the one and the other?
    6. On the surface, the reason for the retraction is that the study is not “spectacular” or “important” enough to justify publication in the first place, because the results are deemed by the Editor and his cronies to be “inconclusive.” How many papers published in this and other journals are actually “conclusive”? That’s not how science works. Papers are published all the time which suggest conclusions and which invite further research — and that is exactly what Seralini and his colleagues have done.
    7. The real reason why this research paper has now been retracted is that the Seralini results were seriously inconvenient to the biotechnology industry, and because its rottweilers have been snapping at the heels of the Editors, and making their life hell, ever since publication.

    According to the COPE guidelines there are absolutely no grounds for this retraction — and this is a black day for science. I suppose we should have seen it coming — when Monsanto and its friends in the biotechnology industry control the means of publication / communication, nothing which might threaten their commercial ambitions can be allowed to see the light of day.

    Brian John (@Angelmountain5)

    November 29, 2013 at 7:47 am

    • These claims are misguided or flatly incorrect from start to finish.

      1. “No [evidence for] fraud, no misinterpretation, and no misrepresentation” in no way implies “meticulous” or correctness in the same way that describing someone as “not obviously a murderer” fails to suggest that the target of one’s observation is an upstanding citizen.
      2. This is a basic scientific responsibility, not something that deserves special praise.
      3. New evidence is rarely cause for retraction of an existing article unless e.g. it causes a researcher to discover that a critical piece of equipment was miscalibrated. Theories and hypotheses are in dialogue with each other but articles typically live and die on their own merits.
      4. “Scandalous” is overheated. Pre-publication peer review can and does fail despite the diligent efforts of all involved. Are you suggesting wrongdoing or incompetence? Those would be strong claims.
      5. Well, then it’s odd that Seralini made such hay of discovering tumors in his rats if he wasn’t doing a tumor study. It doesn’t matter what you call it since the study wasn’t powered well for tox, either; the only thing you’d expect to be able to use this study for is generating gruesome pictures, which indeed we have in spades.
      6. I’m not sure about “spectacular” or “important”; let’s talk about “useful.” Authors that “suggest” conclusions should demonstrate that they are plausible if they hope to be taken seriously; we have not cleared even that low bar. It’s clear to me that the reason for the retraction is that the paper cannot hope to justify the claims and insinuations it makes.
      7. Given the choice between “an international conspiracy of shadowy figures wholly without ethics orchestrated a coup” and “discussion has shown that the findings were not meaningful,” I submit that Occam’s razor shows a preference.

      I’m not positive that retraction is formally the correct way to handle this work, but it seems inevitable that failing to withdraw it from the scientific record would only continue to incite and confuse those who are attempting to use it to further arguments it cannot support. Certainly I won’t shed a tear for its loss.

      Tim D. Smith (@biotimylated)

      November 29, 2013 at 9:19 pm

      • “5. Well, then it’s odd that Seralini made such hay of discovering tumors in his rats if he wasn’t doing a tumor study. It doesn’t matter what you call it since the study wasn’t powered well for tox, either; the only thing you’d expect to be able to use this study for is generating gruesome pictures, which indeed we have in spades.”

        Pictures aside, ALL GMO studies are underpowered, but the regulatory bodies accept “no significant difference” in an underpowered study as justification for giving a greenlight to selling GMO foods.

        Sauce for the goose, and all that.

        saijanai

        December 4, 2013 at 6:26 am

        • “Pictures aside, ALL GMO studies are underpowered”
          I checked the ones Seralini quotes, and sorry, but they’re not. They’re in compliance with the OECD guidelines.
          And I’ve read numerous ones which weren’t underpowered (notably, long-term studies).

          Gabriel HMIMINA

          December 4, 2013 at 7:24 pm

    • The issue is not the fact that it’s inconclusive. The issue is that Seralini used these data to conclude anyway.
      It’s true, there’s no proof of INTENTIONAL error, misrepresentation of data or misinterpretation.
      But this article is still full of misinterpretations and errors. Not even one of its conclusions is actually supported by the data.

      I think there is a misunderstanding here: retraction isn’t just some kind of punishment for dishonest scientists. It’s a way to mark papers whose methods or conclusions are FALSE. Like this one.

      Gabriel HMIMINA

      December 4, 2013 at 5:49 pm

  16. There seems to be some confusion in this comments thread between legitimate peer review, which is often anonymous, and the post-publication retrospective ‘review’ of the study. I think few scientists object to anonymous peer review, since if the reviewers reject the paper, the author hears about it in confidence and is free to revise or submit the paper elsewhere without damage to his career.

    What happened here is that the paper passed peer review and stood for over a year in print, and then was retracted after a non-transparent ‘review’ of the legitimate peer review process and the paper, performed by who knows who, with who knows what conflicts of interest. Maybe the ‘review panel’ consisted of the cleaning lady at the journal, no one knows.

    And whereas peer reviewers have to carefully justify their arguments, here there is no justification of the review panel’s beliefs about the paper and no response from them to the author’s past criticism of his paper. This isn’t science, it’s a secret tribunal.

    monkeyface

    November 29, 2013 at 7:53 am

    • sorry, penultimate line should read, “… to the author’s responses to past criticisms of his paper.”

      monkeyface

      November 29, 2013 at 7:55 am

      • Given some things I’ve seen in anonymous peer review in biology, physics, and chemistry society journals, I’d say a bit of post-publication scrutiny is warranted at this stage. Especially when it’s about our food.

        There’s a lot of interest from the public (with good reason), so things tend to get hyped to “sell the story” in the press. And, there is a lot of money on both sides of GMO research. (I take a mixed approach, since I’ve been cloning since I was a teenage girl. I think caution and careful, slow, detailed, meticulous research into crops is a good idea at this stage in our development of genetic technologies. They are the basis of the food chain, after all. Meanwhile that GMO salmon sounds delicious.)

        Allison (@DrStelling)

        November 29, 2013 at 9:12 am

  17. Funny that Dr. Seralini is accused of not understanding animal studies (the question of why his rat strain is relevant past 90 days has already been addressed in this thread), given that he was the EU’s reviewer when the first industry studies were submitted for GMO approval. That’s right: his job in the late 1990s (given that he was then a leading published geneticist) was reviewing GMO submissions. Guess how he lost that job? By refusing to approve GMO corn because the studies weren’t adequate to show safety. I am in constant question of my own bias; it is always a good reminder to check oneself. Clearly no study is perfect, and one is challenged to ever control for all factors; that is in part why we need a body of literature to build scientific consensus. Since it is, for reasons I’ve elaborated, nearly impossible to conduct, let alone publish, research critical of GMO dogma, we need many many more Seralinis and their quality science to give a fuller picture of the effects of genetic manipulation/pesticide combinations on human and ecosystem health.

    JodiKoberinski

    November 29, 2013 at 9:33 am

    • Well, Seralini’s field has nothing to do with toxicology to start with. Yes, he was part of the CGB, a regulatory board in charge of evaluating GMOs. It’s not a “job”, and he wasn’t there as a specialist in animal studies (which he’s not!).
      Finally, he wasn’t fired. He resigned because his point of view, which was not scientifically sound, was constantly ignored.

      Gabriel HMIMINA

      December 4, 2013 at 5:54 pm

  18. Well argued… except the study wasn’t designed to draw conclusions: it was designed to show that toxicological effects appear after 90 days that require further study. The hypothesis was accurately tested. The study he is being criticized for not doing was a 10M euro study, not a 3M euro study. His feed trial added a 3rd dose (22 percent) to the original 2 doses (rendering the data not statistically relevant, and therefore no decision of “safe” ought to have been reached with the original Monsanto data). Curious why he is being “tried” for his study failing to conclusively prove a connection between GMOs and cancer! That was not his agenda: besides tumours, there is a whole host of damning data in this study on all major systems…

    JodiKoberinski

    November 29, 2013 at 9:43 am

    • I would be curious if some of you, both conservative and liberal bloggers, could provide a definition for the word fraud as it pertains specifically to some of the following aspects: a) research; b) publishing; c) authors; d) publishers; e) editors; f) the science-related industry (e.g., pharmaceutical companies, Monsanto, etc.). I have seen the word used a lot on this blog (and on other blogs) to describe both sides of the publishing world (scientists vs publishers), but it is always open to a very subjective stance. I define fraud, without having consulted Wikipedia, as the “intentional manipulation of something into what it is not, with the intent of misleading the final user” (even if it is in inverted commas, it is my own quote). So, in my mind, the lack of transparency of data by a company, or the lack of detailed explanations by authors or editors, would be a form of fraud “light” (because the truth is being intentionally masked).

      It is amazing how much people fear the word, when in fact its intrinsic meaning is so crystal clear. And it is amazing how other people take advantage of that fear to try to impose new value systems upon others that are based on fear rather than on logic and rationale. I am of the belief that fraud is rampant in science, and that it is being practiced, in different forms, by all six categories of players I list above, albeit at different levels. The word “fraud” should not be a feared word, and should not be censored as the N, the S or the F word are, but it should always be explained and put into context. In this blog thread I saw the word fraud used at least once, which piqued my curiosity about how others define it (without being influenced by lawyers, wikis or anything else). I believe that the term should be used as freely as the word plagiarism, and that the euphemisms used so frequently to describe plagiarism are also being used to avoid the word “fraud”.

      So, if a scientist claims to use 50 samples when in fact they used only 20, is this fraud by the scientist, or error? If an editor claims to have conducted peer review, but then it is revealed that the scientific quality of the paper is abysmal, is this fraud by the editor? If a publisher claims to have retracted a paper based on significant similarities in text, but then does not quantify the overlap, is this fraud by the company? And if a company patents life, or any aspect of it (such as a gene), genetically manipulates that life, claims proprietary rights (through the copyright cartel) to all technologies in pure or derived form from that form of life, and imposes it on a nation or community while simultaneously claiming to be humanity’s savior, then is this also not fraud by such a company? Fraud, what is it in the context of science? (Incidentally, no, I am not interested in the text-book, artificially drafted definition by COPE.)

      JATdS

      November 29, 2013 at 12:52 pm

      • I forgot one absolutely important example: if a publisher claims to use plagiarism-detection software, then retracts a paper based on plagiarism, is this not a form of fraud by the publisher?

        JATdS

        November 29, 2013 at 12:55 pm

        • Why? You can easily have a plagiarised source that is not electronically available. I know of a case where someone copied from a book that, as far as I know, is not available online.

          Marco

          November 30, 2013 at 1:10 am

    • I can accept that the study wasn’t designed to draw conclusions. That’s a given: it was poorly designed.
      But the discussion chapter is made up of “conclusions.”
      That’s the issue: while the data show nothing, Seralini used them to support his claims anyway.

      Gabriel HMIMINA

      December 4, 2013 at 5:58 pm

  19. I could easily fake a “proof” that Coca-Cola is “carcinogenic” by feeding it to Sprague-Dawley rats. Because these rats always develop neoplasia (i.e. all of Seralini’s test animals already had cancer), the extra sugar in Coca-Cola would help tumors grow faster and bigger. That is trivially explained by the anaerobic metabolism of tumor cells. It would not mean that Coca-Cola actually caused the neoplastic transformation. A similar argument could be made about the starch in maize. Using Sprague-Dawley for this bioassay was a fundamental mistake that makes the results impossible to interpret. It is necessary to use an animal model with a normal resistance to the development of neoplastic disease. Otherwise we cannot know if the test substance was carcinogenic, or if it just improved the growth of tumors that were caused by a genetic error in the animal.

    In addition, the experiment design was also faulty with respect to group sizes, lack of controls and (if I remember correctly) insufficient blinding. One could also make the argument that if a post-mortem shows a Sprague-Dawley rat free of tumors, the pathologist did not look carefully enough. There is no need to think in terms of conspiracies to explain this retraction. It is simply a bad paper that should never have been published.

    Sturla Molden (@nedlom)

    November 29, 2013 at 1:36 pm

      • It. Was. Not. A. Carcinogens study. It was a toxicological effects study, to test for anomalies past 90 days. Tumours are but one development in the extended study. It was not “the job” of this study to prove a carcinogenic effect caused by GMOs/pesticides. It certainly raises the question we don’t know the answer to: are GMO feeds safe? Who wants to fund the 10M euro follow-up that everyone is asking for? This is the point of building a body of literature.

      JodiKoberinski

      November 29, 2013 at 1:44 pm

        • Sprague-Dawley is useful for testing acute toxicity (e.g. LD50). But for testing long-term exposure it is useless, for the reason I already explained. After a certain age (about 6 months), nearly all SD rats can be considered to suffer from some sort of neoplastic disease. You don’t need to specifically look for cancer, but that is what you will always find. That is why a rat strain like Wistar Han, which has a low spontaneous tumor rate, is preferred for testing chronic toxicity. You will also see the same effect in aging experiments, where spontaneous neoplasms in aged SD rats will totally confound any other effect.

        Sturla Molden (@nedlom)

        November 29, 2013 at 2:43 pm

          • Appreciate your perspective. As the control group was just as susceptible to spontaneous tumours, it’s interesting that the control did not have rampant tumours; and yes, testing a larger sample size would quell this controversy a little. Again, that was not the intent of the study, and it would’ve been wonderful if we could’ve found the other 7M euros needed to create the statistical depth the critique focuses on. I’m satisfied the results tell us 1) toxicological impacts are present after 90 days, 2) further study of the mechanism of events is needed, 3) presuming safety based on 90-day testing is inadequate to detect safety concerns.

          JodiKoberinski

          November 29, 2013 at 3:59 pm

          • If you already have spontaneous tumors, then anything that cells can use for glycolysis (i.e. sugars and starch) will feed the tumors. That is a problem with the animal model, regardless of sample size. Sprague-Dawley is arguably the most popular rat strain, but not a good choice for this experiment. Wistar would be better, Wistar Han would be much better. There is no valid reason for choosing Sprague-Dawley over Wistar here, so I would consider this a design flaw. They have simply used an animal model without careful thinking.

            As for the experiment design, the control group is far too small. Consider 3 spontaneous tumors in 60 male rats vs. 0 spontaneous tumors in 10 male rats: split the big group of 60 randomly into 6 groups of 10 animals. How many of those will have 0 tumors? At least half of them! That is the design problem with this experiment. They need much bigger control groups. And they need to do proper statistics. I see that you say in a previous post that 3 out of 60 vs. 0 out of 10 is “suggestive”. But it is really suggestive of nothing. If the groups were similar, you would expect to see this difference on more than half the occasions.
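            The split-into-six arithmetic checks out, and can be verified with a short simulation using the numbers stated above (3 tumor-bearing rats among 60, groups of 10):

```python
import random
from math import comb

def avg_zero_tumor_groups(n_rats=60, n_tumors=3, group_size=10,
                          trials=20000, seed=1):
    """Average number of groups (out of n_rats // group_size) containing
    no tumor-bearing rat, when the rats are split at random."""
    rng = random.Random(seed)
    rats = [1] * n_tumors + [0] * (n_rats - n_tumors)  # 1 = spontaneous tumor
    total = 0
    for _ in range(trials):
        rng.shuffle(rats)
        total += sum(1 for i in range(0, n_rats, group_size)
                     if sum(rats[i:i + group_size]) == 0)
    return total / trials

# Exact expectation, via the per-group hypergeometric probability:
# each group of 10 avoids all 3 tumor rats with probability C(57,10)/C(60,10)
expected = 6 * comb(57, 10) / comb(60, 10)  # about 3.44 of the 6 groups
print(avg_zero_tumor_groups())  # simulation lands close to the exact value
```

            The exact figure is 6 × C(57,10)/C(60,10) ≈ 3.44, so a randomly chosen group of 10 shows zero spontaneous tumors well over half the time.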

            Unfortunately, they did not do a properly designed experiment or a proper statistical analysis. They preferred to show big pictures of tumors to get media attention. Instead of doing valid research they tried to create hype. And so it got retracted. There are many bad papers that slip through peer review every day and are just forgotten. The problem with the Seralini paper was all the media attention it got, and thus retraction was more prudent.

            Sturla Molden (@nedlom)

            November 29, 2013 at 6:16 pm

        • Funny that you should claim that SD rats should never be used in carcinogenicity studies, since the EFSA guidelines for a 2-year joint toxicity/carcinogenicity study explicitly say that the same breed of rats should be used for both the preliminary 90-day studies and any 2-year studies that follow.

          As well, there are many SD carcinogenicity studies, some published in the past year or two, and there are at least 6 two-year studies that use SD rats listed in the PubMed database.

          saijanai

          November 29, 2013 at 6:46 pm

        • 6 months for SD rats to show tumors? That’s not what the research shows. 12 months is when almost all tumors start to appear in SD rats. When tumors in SD rats show up before 6 months, it is automatically taken as a reason to look more closely at the data, and tumors between 6 and 12 months are still quite rare.

          See table 1 and table 2:

          Early Occurrence of Spontaneous Tumors in CD-1 Mice and Sprague–Dawley Rats

          http://tpx.sagepub.com/content/32/4/371.full.pdf

          saijanai

          December 4, 2013 at 6:31 am

      • If it was not a carcinogens study, why did the authors make so much of the development of the mammary carcinomas, even to the point of photographing them long after the rats should have been terminated for humane reasons?

        g2-c706dd6676549f1e118b7970c1e9b607

        November 29, 2013 at 10:50 pm

        • Clearly stated post-publication (if not in the paper): they followed non-cancer test protocols to choose the group sizes.

          Granted, they were interested to find that, nevertheless, excess tumors were developing. I don’t know when they chose to sacrifice (I recall some statements about that issue), but I can readily imagine their interest in their development, cruel though it is…

          Anthony C. Tweedale

          December 1, 2013 at 2:51 pm

          • There is nothing interesting about the development of mammary adenomas in Sprague Dawley rats, and there is never any excuse for inhumane treatment of laboratory animals.

            g2-c706dd6676549f1e118b7970c1e9b607

            December 2, 2013 at 2:45 am

          • Yes, they followed a non-cancer protocol, and the consequence of that is that the study just could not become a cancer study after the fact.

            In the comments I’ve seen about the study, the remark came back a number of times that the number of tumors was not actually abnormal for a two-year study of SD rats, *especially* if the rats have been allowed to feed freely, and were not under a strictly controlled diet.

            It was also remarked that there was little difference in the number of tumors between the no-GMO and the 33% GMO group, which suggested either totally random results because of the insufficient number of rats, or that a larger amount of GMO was actually protecting the rats. Even though the study has low statistical power, there is a strange similarity appearing in the results within the explicitly 33% maize-fed rats (the 33% non-GMO group and the 33% GMO group), and within the explicitly 11% maize-fed rats (the 11% GMO and the 11% GMO + Roundup groups), as if, for example, the feeding wasn’t balanced to make sure the rats were always getting 33% maize in their food intake.

            jmdesp

            December 2, 2013 at 5:18 am

            • “In the comments I’ve seen about the study, the remark came back a number of times that the number of tumors was not actually abnormal for a two-year study of SD rats, *especially* if the rats have been allowed to feed freely, and were not under a strictly controlled diet.”

              Two points:

              1) The SD rats started showing tumors earlier in the experimental groups than in the control group, and in fact, an historical survey of 850 control rats from various studies over the years shows that the particular kind of tumor found in the male rats in Seralini’s study almost never shows up before 12 months of age. Out of 850 SD rats, only 3 ever had tumors of that type in the first 19 weeks, while in Seralini’s study, by age 16 weeks, 3 rats in the experimental groups had such tumors, and zero in the control groups.

              In a normal GMO animal study, the historical record is generally used to show “no significant difference,” but in this case, the historical record can be used to show that there IS a significant difference.

              2) The new EFSA guidelines for combined toxicity/carcinogenicity studies explicitly say to allow unlimited feeding.

              “It also was that there was little difference in the number of tumors between the no-GMO and the 33% GMO group, which was suggestive either totally random results because of the insufficient number of rats, or like if a larger amount of GMO was actually protecting the rats”

              In fact, previous (and more recent) work by Seralini and others has suggested that some ingredient or ingredients found in Roundup may act as an endocrine-disrupting compound, and by their nature, EDCs tend to show non-monotonic response curves; in fact there exist real-world examples where a small dose of an EDC appears to (or perhaps even DOES) act as a prophylactic against a deleterious effect of a larger dose.

              saijanai

              December 2, 2013 at 5:57 am

        • “Carcinogenesis is stochastic, which means that clusters happen completely by chance.”

          Yes it is, but as I said elsewhere:

          After 12 months is when almost all tumors start to appear in SD rats. When tumors in SD rats show up before 6 months, it is automatically taken as a reason to look more closely at the data, and tumors between 6 and 12 months are still quite rare.

          See table 1 and table 2:

          Early Occurrence of Spontaneous Tumors in CD-1 Mice and Sprague–Dawley Rats

          http://tpx.sagepub.com/content/32/4/371.full.pdf

          saijanai

          December 4, 2013 at 6:38 am

      • “It. Was. Not. A. Carcinogens study.”
        Then why did Seralini claim that he saw an increase in tumors due to the GM feed and the Roundup in this article?
        IT WAS a carcinogenicity study, since it claims to prove an increase in cancer due to the treatments.

        Gabriel HMIMINA

        December 4, 2013 at 6:02 pm

        • “They say to use at least 10 rats for the toxicology branch”
          >20 rats per group/sex, and study length limited to 1 year.
          >http://www.oecd-ilibrary.org/content/book/9789264071209-en
          >His study doesn’t even comply with the toxicology branch standard.

          The whole food extension for OECD TG 453 misquotes TG 453 and says to use 10 animals per sex per dose.

          http://www.gmwatch.org/index.php/news/archive/2013/14882-seralini-validated-by-new-efsa-guidelines-on-long-term-gmo-experiments

          Confusingly, Monsanto used 20 animals per sex per dose, but for many tests, only reported the results on 10 rats.

          My own opinion is that Seralini should have allowed for the known attrition rate of SD rats and used 50% more rats than his protocol required. For 10 rats, that would have meant 15 rats per sex per dose. For 20 rats, that would have meant 30 rats per sex per dose. Either way, he seriously blew it on the number of rats required for a 2-year toxicology study. Even so, that doesn’t invalidate reporting unexpected tumors in the experimental groups.

          >For combined toxicity and carcinogenicity studies (which is what Seralini did), it’s 50 rats per group/sex.
          >http://www.oecd-ilibrary.org/environment/test-no-453-combined-chronic-toxicity-carcinogenicity-studies_9789264071223-en

          >And I don’t care if Seralini aim was to test for carcinogenetic effect or not : he did test it, so he should have followed the proper protocol.

          Seralini didn’t test for carcinogenetic effects. He noted them as part of the (as yet unpublished) 2-year toxicology study. The paper that was retracted was NOT the toxicology study, but his sensationalized report on the unexpected tumors he found in the course of his 2-year toxicology study. Reporting unexpected tumors in a toxicology study is very much an accepted practice. The WAY in which he reported them is controversial.

          saijanai

          December 9, 2013 at 2:08 am

          • That’s still completely wrong.
            The point is: you can’t pretend to interpret cancer occurrence when your study isn’t properly designed and is clearly underpowered.
            As for the toxicological data, Seralini just shouldn’t have made so many treated groups. That’s common sense.

            But the whole issue isn’t the fact that the study was underpowered. It’s the fact that Seralini chose to interpret random noise, and drew conclusions anyway. There isn’t even one conclusion that can be proven using the reported data.

            By the way, you’re still referring to the 2004 study in a misleading way: the 20 rats are indeed followed, and data are reported.

            Gabriel Hmimina

            December 9, 2013 at 10:14 am

        • >That’s still completely wrong.
          >The point is : you can’t pretend to interpret cancer occurence when your study isn’t properly designed, and is clearly underpowered.
          >As for the toxicological data, Seralini just shouldn’t have made so many treated group. That’s common sense.

          EFSA toxicology study guidelines say to balance the concern of using too many animals against the concern of not having enough control groups, by keeping the ratio of control groups to experimental groups within 1:1 to 1:4.

          Seralini only used 1 control group for 9 experimental groups. A logical division would have been to use 3 control groups, one for each main arm of the study. This would have given him a ratio of 1 control group per 3 experimental groups.

          >But the whole issue isn’t the fact that the study was underpowered. It’s the fact that Seralini choosed to interpret random noise, and concluded anyway. There isn’t even one conclusion which can be proven using the reported data.

          As a board-certified toxicologist, you are well aware that exploratory toxicological studies aren’t about drawing conclusions but about finding patterns in the data suggesting that a toxic effect is present.

          >By the way, you’re still referring to the 2004 study in a misleading way : the 20 rats are indeed followed, and data are reported.

          20 rats per sex per group were used, but only 10 rats per sex per group were tested:

          http://gmoseralini.org/wp-content/uploads/2012/11/Hammond2004.pdf

          Results of a 13 week safety assurance study with rats fed grain from glyphosate tolerant corn

          2.6. Clinical Pathology

          “Blood was collected under light halothane anesthesia, via the retro-orbital plexus from 10 rats/sex/group after week 4 and again (under CO2 anesthesia from the posterior vena cava) just prior to sacrifice. Animals were fasted overnight (18–23 h) but did have access to water. When possible, blood samples were collected from the same 10 animals at both collection periods.”

          So the question arises: why weren’t ALL rats tested?

          saijanai

          December 9, 2013 at 10:51 am

    • Criticism from the very beginning is suspect, when instead of challenging him the critics refused to go beyond 90-day trials.
      We are to learn from the past: Cigarettes were once ‘physician’ tested, approved- http://www.healio.com/hematology-oncology/news/print/hematology-oncology/%7B241d62a7-fe6e-4c5b-9fed-a33cc6e4bd7c%7D/cigarettes-were-once-physician-tested-approved

      The Oiling of America: http://www.westonaprice.org/know-your-fats/the-oiling-of-america

      Everyone knows the wars of Margarine Vs. Butter! http://articles.mercola.com/sites/articles/archive/2013/11/20/trans-fats-hydrogenated-oil.aspx

      “If there is a 1 in 1,000 chance that Professor Seralini is on to something, we should be replicating and building on his studies as a matter of urgency – even a small risk of harming the health of billions of people should be explored openly,” said Pete Ritchie, the director of Nourish Scotland. http://gmoseralini.org/debunking-stale-gm-lies-over-seralini-study/

      bedsidereadings

      November 29, 2013 at 4:36 pm

      • Currently the ILSI (the industry think-tank responsible for changing the EFSA toxicity guidelines concerning GMOs) is advocating that NO “whole food” studies be done any more. That is, neither 90-day rodent toxicity studies nor longer studies such as the about-to-be-retracted Seralini study should ever be conducted, because they are useless, according to the GMO industry’s think tank.

        http://www.ncbi.nlm.nih.gov/pubmed/24164514

        saijanai

        November 29, 2013 at 6:59 pm

    • Sugar: Don’t think so–don’t tumors sustain & grow via different metabolism–Warburg effect, etc?

      Despite your aspersions, it seems somewhat clear the ‘exposed’ tumors were qualitatively & quantitatively different than the neg controls…

      Anthony C. Tweedale

      November 30, 2013 at 11:30 am

        • I’ve read the paper, and this is not inside it. There is no peer-reviewed data about the exposed tumors being qualitatively & quantitatively different from the neg controls.

        If Séralini actually has precise data about this and wishes to release it, he is welcome.

        jmdesp

        December 2, 2013 at 5:23 am

        • “I’ve read the paper, and this is not inside it. There’s is no peer reviewed data about the exposed tumors being qualitatively & quantitatively different from the neg controls.”

          The historical record on SD rats and tumors shows that the type of tumor that showed up after 16 weeks in the experimental-group male rats (and not at all in the control rats of either sex) is extremely rare: 3 out of 850 rats in an aggregate of control groups showed such tumors in the first 19 weeks of the studies. That’s an obvious quantitative difference.

          And using historical data in a GMO toxicity study is perfectly fine.

          saijanai

          December 2, 2013 at 6:02 am

          • Carcinogenesis is stochastic, which means that clusters happen completely by chance.
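
            (Editorial aside: the “clusters happen by chance” point can be made concrete with a small simulation. This is a minimal, hypothetical sketch; the group sizes and the 30% tumor rate are illustrative assumptions, not figures from any of the studies discussed here.)

```python
import random

def max_cluster(groups, n, p, seed=0):
    """Simulate `groups` identical groups of `n` rats, each rat with the
    same tumor probability `p`, and return the largest tumor count seen
    in any one group: an apparent "cluster" arising from chance alone."""
    rng = random.Random(seed)
    return max(sum(rng.random() < p for _ in range(n))
               for _ in range(groups))

# With 10 groups of 10 rats and a uniform 30% tumor rate, the
# worst-looking group usually shows noticeably more tumors than the
# expected 3 per group, purely by chance.
print(max_cluster(groups=10, n=10, p=0.3))
```

            Which group happens to look worst varies with the random seed; a cluster alone, without a dose-response pattern, is weak evidence either way.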

          • saijanai, Séralini has said in one interview that the rats were fed “ad libitum”. This is documented to increase and accelerate tumors in SD rats, see this study: http://toxsci.oxfordjournals.org/content/58/1/195.full
            Since it has been known for quite a while that you should not feed SD rats “ad libitum” in long-term studies, how many in the aggregate group you reference have done it? Using the same A04 diet?
            In Fig. 1 of the study, we can see there are actually only 2 rats that die early in the way you describe, a little after 100 days; isn’t that a bit too few to support the kind of strong claim you’re making? And in both cases it’s in the groups with not that much GMO maize, either the 11% or 22% group, not in the 33% group.

            Actually, in particular in the male case, we can see no real difference between the 33% GMO group and the control. Séralini defended that the GMO would work as an endocrine disrupter, but then we would see a saturation effect, not a lower effect at high dose. Seeing this, the way the text of the study says the rats were fed with either 11, 22, 33% GMO maize *or* 33% standard maize becomes quite suspicious. The diets ought to be balanced by complementing the GMO maize with standard maize so as to always have 33% maize in the feed, or else we are not comparing similar diets, and we could just be seeing that A04 is very bad for “ad libitum” feeding, and that adding more maize makes it less carcinogenic (probably by reducing protein intake).

            Or then, all of this is just completely random, and I’m spending too much time trying to extract anything of significance out of a study that’s just way too underpowered, given that we can also see that “GMO+Roundup” for females is already much better than “GMO only” for them, and that drinking Roundup has no effect on the males but “results” in quite earlier and stronger mortality for females, all things that don’t make any real sense.

            jmdesp

            December 2, 2013 at 5:16 pm

        • “saijanai, Séralini has said in one interview that the rat were fed “ad libitum”. This is documented to increase and accelerate tumors in SD rats, see this study : http://toxsci.oxfordjournals.org/content/58/1/195.full

          EFSA guidelines for combined long-term toxicological/carcinogenic studies say to use the animals that were used in shorter studies AND say to feed them _ad libitum_.

          These guidelines were clarified after Seralini’s study was published and are, in fact, what he did in his long-term toxicology study.

          ‘Since it’s known since quite a while that you should not feed SD rats “ad libitum” for long term studies, how many have done it in the aggregate group you reference ? Using the same A04 diet ?’

          It is immaterial how many other studies do this. Seralini was using the guidelines set up by the EFSA and clarified after he published: they say to use _ad libitum_ feeding in combined long-term toxicology/carcinogenicity studies.

          http://www.efsa.europa.eu/de/efsajournal/doc/3347.pdf

          Considerations on the applicability of OECD TG 453 to whole food/feed testing (July 2013)

          “EFSA is of the view that these recommendations of OECD TG 453 are applicable also in the case of whole food/feed. In addition in the case of whole food/feed [studies] it is **recommended that animals should be fed ad libitum**.”

          They say to use at least 10 rats for the toxicology branch, as Seralini did (though he should have used 2-3x that many to allow for attrition) and at least 50 rats in the carcinogenicity branch (which Seralini wasn’t trying to perform).

          saijanai

          December 4, 2013 at 6:52 am

        • “Actually, in particular in the male case, we can see no real difference between the 33% GMO group and the control. Séralini defended that the GMO would work as an endocrine disrupter, but then we would see a saturation effect, not a lower effect at high dose. ”

          By definition, a “non-monotonic dose response curve” doesn’t work like a normal dose-response curve.

          A “saturation effect” is a monotonic dose-response curve, as the slope of the curve never changes sign.

          Here are some common NMDR curves:

          http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365860/figure/F3/

          Notice that the curves found in A and B are considered standard, while the curves in C are considered characteristic of endocrine disruptors. Curve D is a binary response curve.

          Dose-response curves like those in C are almost impossible to detect using standard toxicological studies, and that is why there is such controversy concerning them, with the food, chemical and GMO industries (and editors like Hayes, who retracted the Seralini study) claiming that the rules for safety testing shouldn’t be changed while the Endocrine Society insists that they must be.

          If one or more ingredients or combinations of ingredients in Roundup have an endocrine-disrupting effect, it is very possible that such an effect won’t be detected in standard 90-day rodent toxicity studies.

          In fact, not only may such dose-response curves require a longer time to detect, but they may require many more data points on the time scale or dosage scale in order to detect them.

          The worst case scenario is when an NMDR curve looks like this:

          http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3365860/figure/F5/

          and, as I understand it, there are known cases of EDCs that produce curves even more complex, such as an ongoing sine wave.

          saijanai

          December 4, 2013 at 7:47 am

  20. But for sure we know this:-
    GM food off the menu in Parliament’s restaurants despite ministers telling the public to drop their opposition

    Read more: http://www.dailymail.co.uk/news/article-2345937/GM-food-menu-Parliaments-restaurant-despite-ministers-telling-public-drop-opposition.html#ixzz2m4FCw7iR

    samN

    November 29, 2013 at 3:14 pm

  21. There seems to be a lot of misunderstanding of a study’s power, including by the journal editor, Hayes. Power does not depend just on the sample size, but also on the effect size. If the effect size is large (e.g. exposed rats get many more tumors than control rats), then this can produce a “statistically significant” result even in a study with small numbers of rats. It is impossible to predict the power of a study if one doesn’t have good a priori information about the effect size. Toxicity studies such as Seralini’s generally don’t have such information.

    It is incredible that a journal would withdraw a paper post-hoc, because they decided a portion of it was under-powered. Furthermore, it is not even clear that Seralini’s mortality and tumor study is under-powered. It all depends on one’s definition of “statistically significant”, and which effects one considers important. The commonly used alpha of 0.05 is completely arbitrary.

    Hayes’ withdrawal letter has nothing to say about the other main portion of Seralini’s study, which found clear differences in biochemical indicators, including hormone levels, between the control and exposed rats. This portion of the study could not be called under-powered. The editor is “throwing out the baby with the bath water”.

    Finally, I’m surprised nobody has looked up Hayes’ employer: Spherix Consulting. Spherix serves industry, especially companies like Monsanto. Here is how they describe their services:

    “Product Regulatory Compliance

    Spherix represents clients before regulatory authorities including the FDA, EU regulatory authorities, the EPA, and CSPC; conducts assessments under California Proposition 65; and resolves FDA & FTC regulatory issues regarding the U.S. Dietary Supplement Health and Education Act (DSHEA) of 1996. The company help clients pursue GRAS (“generally recognized as safe”) determinations; and prepare and submit notifications under the FDA’s food contact notification (FCN) program. Spherix assists in the establishment of regulatory strategies for clients, including utilizing the traditional petition process or new regulatory strategies. In addition, company consultants help address recent regulatory developments; develop data waiver strategies, and perform Good Laboratory Practice (GLP) inspections.”

    How is this not a conflict of interest?

    Power depends on effect size

    November 30, 2013 at 12:04 am
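
    (Editorial aside: the commenter’s point that power depends on effect size, not just sample size, is easy to check numerically. Below is a minimal Monte Carlo sketch; the tumor rates and the group size of 10 are illustrative assumptions, not data from the Seralini study.)

```python
import math
import random

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return math.erfc(z / math.sqrt(2))  # P(|Z| >= z) for a standard normal

def power(n, p_control, p_exposed, alpha=0.05, trials=5000, seed=1):
    """Estimate power by simulation: the fraction of experiments with
    `n` animals per group that reach p < alpha for the given true rates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x1 = sum(rng.random() < p_control for _ in range(n))
        x2 = sum(rng.random() < p_exposed for _ in range(n))
        if two_prop_p(x1, n, x2, n) < alpha:
            hits += 1
    return hits / trials

# Small effect (30% vs 50% tumor rate): 10 animals per group rarely
# detect it. Large effect (10% vs 80%): the same 10 animals usually do.
print(power(10, 0.3, 0.5), power(10, 0.1, 0.8))
```

    So a fixed n of 10 is neither “powered” nor “underpowered” in the abstract; it depends entirely on the effect size one hopes to detect.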

    • I wonder if all other FCT papers are being debated in national parliaments (paid for by national taxpayers) and being given such deep scrutiny. Had the product being tested not been a Monsanto product, would it have attracted such attention? With such controversy, it’s strange that no Elsevier executive steps in here on this blog to offer a formal explanation. It’s not exactly like Hayes is the top level of power involved with this decision… this is, after all, an Elsevier journal.

      JATdS

      November 30, 2013 at 7:12 am

    • Hayes’ own book on toxicology explicitly says that patterns of changes are more important than statistical significance. It also goes on to say that the use of historical data, including data obtained by feeding animals different foods in whole food/GMO toxicology studies, may have some use, but carries the danger of diluting the findings. In a reddit.com debate on this topic, I quoted Hayes’ book and someone consulted Hayes, who basically implied that the danger of dilution “may not apply” when the data is from “concurrent” animals fed different types of food, rather than animals who lived a few months or years ago.

      In other words, Hayes refused to acknowledge what his own book explicitly said, and reinterpreted it to fit his own needs as an editor who approves GMO studies of the type that Monsanto routinely submits.

      saijanai

      December 4, 2013 at 5:50 pm

    • If you want to play the “conflict of interest” game, Seralini’s study was partly funded by an anti-GMO NGO. Seralini released a book and a movie on this study right after publication (and the movie was filmed DURING the study). Yet the article states “no conflict of interest”.
      That in itself would be enough to justify retraction.

      As for the sample size, you may have missed the point. The fact is this incredibly low sample size rendered the data completely useless. Nothing was in fact significant. None of the claims made in the discussion were proven.
      When the editor says that the data is inconclusive, what does that mean about the conclusions of the article? Well, they were just false.

      Gabriel HMIMINA

      December 4, 2013 at 6:21 pm

  22. Greenpeace and other civil society groups are critical of the release into ecosystems of new proteins and altered life forms without appropriate safety research.

    Greenpeace supported CRIIGEN’s successful, 6-year legal battle to get access to the original Monsanto data in order to do a peer review (recall that first, in 2009 in the Int. Journ. Biol. Sci., Seralini was part of a team who did what pro-biotech people ought to do with the 2012 study and repeated the study) – or we wouldn’t KNOW how it went with that approval data.

    Civil society, already reeling from the impacts of deregulation and the corporatization of the food system and of research generally, has been left to first sue then pay for the research to ask the safety questions the current (not peer-reviewed mind you) data from Monsanto et al do not ask or answer.

    What is biotech’s interest? Making money for shareholders. What is the interest of Greenpeace and other civil society organizations? The Precautionary Principle, learning from our past, and recognizing the ecological imperatives that are factual and limit our activities if we can think beyond our own current comforts. I’m not sure of the ratio of funding and for what, but Greenpeace was instrumental in getting access to data and seeds before any of this work could occur, and all scientists ought to be thankful that organizations exist that protect science from corporate manipulation and perform science as governments once did: in the public interest.

    JodiKoberinski

    November 30, 2013 at 9:59 am

    • (where there is smoke – there is fire) – The Goodman Affair: Monsanto Targets the Heart of Science:-

      The journal did not retract the study. But just a few months later, in early 2013 the FCT editorial board acquired a new “Associate Editor for biotechnology”, Richard E. Goodman. This was a new position, seemingly established especially for Goodman in the wake of the “Séralini affair”.

      Richard E. Goodman is professor at the Food Allergy Research and Resource Program, University of Nebraska. But he is also a former Monsanto employee, who worked for the company between 1997 and 2004.

      http://www.independentsciencenews.org/science-media/the-goodman-affair-monsanto-targets-the-heart-of-science/

      SamN

      November 30, 2013 at 1:40 pm

  23. Is this not the elegance of science? Seralini et al. criticized 1) for reaching conclusions that challenge the industry’s conclusions; 2) for not reaching the conclusions of a carcinogenicity study when he conducted, clearly, a toxicology study; 3) for incompetence, despite being a world-leading geneticist himself and a former reviewer of GMO applications for the EU (which, based on his review, he rejected, and was fired); and 4) for using innovative statistical methods to produce the most useful data for the task at hand (measuring toxicological effects), when he goes to great lengths to explain why his measurements and methodologies were correct for this study design.

    So as a leading geneticist, he (successfully) argued with his own advisory to pursue data collection in this way. Remember he used the old-school approach with the 2009 study: over 90 days, impacts on hormones, organ size and general inflammation were observed. So this 2-year study adopts some of his team’s critiques of the earlier work (including their own) and leaves room for critique of this study. Amazing the level of expectation we don’t see applied to the rest of the work in this area. The next step is to repeat his work with the improvements, and hold corporations to the same scrutiny…

    JodiKoberinski

    November 30, 2013 at 10:15 am

    • Even a leading geneticist can do sloppy work. This study does not adhere to the standard we expect of papers published in Food and Chemical Toxicology. It just creates media buzz, but that is not an indication of scientific quality. The safety of GMOs has been a major concern, and therefore it is important that these studies are carried out properly – regardless of the conclusion.

      Sturla Molden (@nedlom)

      November 30, 2013 at 11:08 pm

      • The precedent here is EXTREMELY dangerous. I can count the studies that I’ve read during graduate school that weren’t underpowered on my nose.

        1) Effects on a type of animal are limited to that type of animal (remember external validity, folks?). You argue that all ___ animals have tumors by age ___. Do humans? I’m covered in tumors. I imagine a careful micron-by-micron investigation of most humans would find many cells in some stage of transformation. A scientist is bound to carefully reading the literature and interpreting data in context. Everything you’ve mentioned exists in the literature and is a fair critique. However, if we are to demonstrate FOR REALSIES that treatment X causes tumors in vivo, not that it just speeds proliferation, it’s gonna be damn near impossible to ensure that most animals don’t have even a few cells in various stages of proliferation that are instead just accelerated or helped along the way.
        2) Power is irrelevant if p is less than threshold.

        Anyone who wants to post hoc edit the scientific literature because his journal may have had crappy peer review is stepping WAY over his/her bounds. If they want to have a semi-reputable journal, try doing good editorial/peer-review work prior to publication. Post-publication peer review is excellent… and clearly highlights flaws in this study, which ARE attached to it via commentary in the literature… but the day I read a flawless paper… oh wait. I’m sure that every study that you’ve ever published personally and handled as an editor had a priori and post hoc pi > 0.95?

        I see no good reason why a study that adhered to the rules (no fraud/mistakes or cheating of peer review by authors) should not be left in the literature. Any critic, and their mother, is welcome to reproduce this study identically with greater n and multiple strains/species. If we are going to start retracting studies because they lack external validity beyond a single strain (don’t get me started on people who think cats = rats = mice = mice + ABC-1 knockdown), are over-interpreted, or lack power, we’d have to retract pretty much every paper published!

        QAQ

        December 1, 2013 at 1:45 am

        • Care to show us one of those underpowered studies that has been hyped as much as Seralini’s?

          That’s where the difference is. It’s being used by interest groups to make policy demands that simply do not follow from its data. So, when you state “A scientist is bound to carefully reading the literature and interpreting data in context” you are right, but this is not done by the interest groups, and worst of all, not by the scientists who wrote the paper. If Séralini had warned not to overinterpret his paper, he’d get some slack from me. But no, he and his organization hyped the paper.

          Marco

          December 1, 2013 at 2:54 am

          • “It’s being used by interest groups to make policy demands that simply do not follow from its data.”

            Welcome to democracy, the least-worst system. The answer is not to try to shut the dissidents out, but to prove the dissidents wrong.
            You – and Monsanto – have a view that constitutive expression of a key enzyme in the aromatic amino acid biosynthesis pathway could not possibly cause any perturbations in plant biochemistry that would have deleterious impacts on health from long term exposure.

            Then prove the people who claim it might wrong.

            littlegreyrabbit

            December 1, 2013 at 3:56 am

          • Interest groups? CRIIGEN’s “interest” is public safety: what is industry’s (yes, the #1 interest group driving and paying for the bulk of the science) “interest”? Profit. Those publishing critical data are not dismissible because they are “special interest groups.” The reason Seralini began this work is he saw the original data being used to approve GMOs and found the research inadequate to assess safety. it took him ten years to both reproduce the original inadequate research (2009, Int Journ. Bio) and perform this study. Industry is a special interest group.

            JodiKoberinski

            December 1, 2013 at 11:11 am

          • littlegreyrabbit, in science you need to do good work. The Seralini paper isn’t. That’s fair enough, but if it is then used by the authors themselves to make exaggerated claims, we, as scientists, should be upset.

            The view I may have on what the constitutive expression of a key enzyme in the aromatic amino acid biosynthesis pathway may or may not cause is irrelevant. Bad science is bad science. I reject papers that do bad science, regardless of whether their outcome fits my scientific views or not.

            Marco

            December 1, 2013 at 12:01 pm

          • So now we’re in the business of retracting papers because of hype and over interpretation?

            Interests groups, by definition, have interest. They latch on and over interpret data *all the time.* The best studies and the worst studies are constantly taken out of context. Maybe if we didn’t have idiot policy makers elected by idiots who could discern a study that raises concern from a carefully crafted piece of BS, we wouldn’t have to worry about this! But who to blame? Perhaps the scientists, all of us, who do a poor job of educating the public.

            QAQ

            December 1, 2013 at 12:08 pm

          • Yes, CRIIGEN is an activist or interest group. It makes this quite clear (fortunately) on its website:
            “To achieve its objectives, i.e. to protect the environment, biodiversity and health, the Association can go to court and/or associate in a court action with the public prosecutor.”

            An organisation that states it will initiate court cases is an activist organisation. Nothing wrong with that as such, but we should not be hypocritical in recognizing this vested interest.

            I myself have no connection whatsoever with companies that produce GMO foods, nor with organisations that promote such foods. Care to openly declare here any of your own vested interests in this specific matter?

            Marco

            December 1, 2013 at 12:18 pm

            • No vested interest whatsoever on my part. However I have spent years running properly-designed toxicology studies in rats and other experimental animals, and it frankly appals and angers me when people like Seralini waste the lives of experimental animals on poorly-designed, badly-executed and inhumane studies. There is enough opposition to live animal testing already without people like him providing fuel to the opponents of live animal testing. I believe live animal testing is still essential, but in my opinion we owe it to the subjects to ensure that scientific value is derived from their lives and deaths. Wasting rats’ lives on stupid studies is disgraceful, and making females drag around huge mammary adenomas long after they should have been humanely terminated is abhorrent.

              g2-c706dd6676549f1e118b7970c1e9b607

              December 2, 2013 at 2:48 am

          • Marco says: “Bad science is bad science.” Wrong, it differs. Some bad science is rejected, some is published. Innumerable poor studies are not retracted, including many in FCT, some of them by Monsanto.

            Anthony C. Tweedale

            December 1, 2013 at 3:01 pm

            • Thanks so much, I am glad you concur with me on “clear reasons for a retraction”.
              COPE guidelines state that the only grounds for a journal to retract a paper are:

              Clear evidence that the findings are unreliable due to misconduct (eg data fabrication) or honest error
              Plagiarism or redundant publication
              Unethical research.

              Prof Séralini’s paper does not meet any of these criteria and Hayes admits as much. In his letter informing Prof Séralini of his decision [link here], Hayes concedes that an examination of Prof Séralini’s raw data showed “no evidence of fraud or intentional misrepresentation of the data” and nothing “incorrect” about the data.

              So the question is this -“Why would anyone support double standards when it comes to Prof Séralini’s paper?”

              samN

              December 1, 2013 at 3:49 pm

          • “Care to show us one of those underpowered studies that has been hyped as much as Seralini’s?”

            ALL industry-performed GMO toxicity studies are “underpowered” and are used to justify allowing GMOs to be sold in the EU. How’s that for hype?

            In the case of the specific “Roundup Ready” Soy study that Monsanto presented to the EFSA, which was used to justify the sale of Roundup Ready Soy in Europe, it turns out that while Monsanto states that they used 20 animals per sex per group, THEY ONLY REPORTED DATA ON 10 ANIMALS per sex per group.

            Care to show a study in any other field other than “GMO safety” where researchers can neglect to report the test results for 50% of the animals and not get challenged?

            saijanai

            December 4, 2013 at 6:56 am

          • Monsanto’s original 90day rodent toxicity study used 20 animals per sex per group but only reported data on 10 animals per sex per group. This paper WAS used by the EU to justify the policy of allowing Roundup Ready Soy on the market in the EU. Care to explain how Monsanto’s paper is better powered than Seralini’s paper?

            saijanai

            December 4, 2013 at 7:19 am

        • Are there tons of studies which conclude that some food doubles rat mortality based on 4 rats versus 2 rats?
          Yep, Seralini did that.

          Gabriel HMIMINA

          December 4, 2013 at 6:24 pm

      • “This study does not adhere to the standard we expect of papers published in Food and Chemical Toxicology.”

        Monsanto’s own 90-day rodent toxicity paper on Roundup Ready Soy used 20 animals per sex per group, but only reported data on 10 animals per sex per group.

        Is this the kind of standard you are alluding to?

        Why attack Seralini for sensationalism while ignoring completely bogus practices from Monsanto in research on the same topic using the same kind of rats?

        saijanai

        December 4, 2013 at 7:11 am

        • “Monsanto’s own 90-day rodent toxicity paper on Roundup Ready Soy used 20 animals per sex per group, but only reported data on 10 animals per sex per group.”

          Not true. In Figs. 1 and 2, and in Tables 6 and 7, I can see that N=20.

          I’ve read this particular study. And sorry to digress, but it’s quite interesting compared to Seralini’s BS.
          You should also check out Haryu et al. 2009 for reference.

          Gabriel HMIMINA

          December 4, 2013 at 6:30 pm

          • Actually, I was wrong. It was the NK603 study on maize that reported data from only 10 rats.

            saijanai

            December 4, 2013 at 10:16 pm

          • “Actually, I was wrong. It was the NK603 study on maize that reported data from only 10 rats.”

            This is the one I was talking about:

            http://www.sciencedirect.com/science/article/pii/S0278691504000547

            And no : the 20 rats/group/sex are indeed used.

            Gabriel HMIMINA

            December 7, 2013 at 9:42 am

            • 20 rats per sex per group were used, but only 10 rats per sex per group were tested:

              2.6. Clinical Pathology

              “Blood was collected under light halothane anesthesia, via the retro-orbital plexus from 10 rats/sex/group after week 4 and again (under CO2 anesthesia from the posterior vena cava) just prior to sacrifice.”

              So the question arises: why weren’t ALL rats tested? Did they test a random sample, alternate the rats tested, choose the healthiest-looking rats, or actually test all of them and include only the data from rats that fit the outcome they desired to find?

              saijanai

              December 7, 2013 at 10:38 am

          • That’s false. See Figs. 1 and 2, and the last two tables. N=20.

            The bloodwork was done on 10 rats/group/sex, which isn’t surprising (those tests are indeed expensive). But the whole group was followed and tested.

            Gabriel Hmimina

            December 9, 2013 at 10:20 am

    • 1) Seralini wasn’t fired from the CGB. He decided to quit after seeing that he wasn’t able to get his opinion through in the reports.
      2) You’re telling us a nice story about the leading geneticist (which he’s not!) adapting his protocol. But it’s completely made up. The 2009 study was entirely based on Monsanto data. That’s why it was a 90-day study.

      It’s not that he raised his level of expectation. He simply tried to do something better than the Monsanto tests he used before, but he completely screwed up.
      Why did he screw up? Because he’s completely clueless when it comes to trial design and statistics.
      He just didn’t know there are standard requirements on the N needed to perform a valid comparison.
      He didn’t know that this particular strain of rats is prone to tumors.
      He didn’t even know that sample size must increase with trial duration.

      Seralini is an old-school molecular biologist. He has no training in toxicology, and probably no training in animal studies or statistics.

      Gabriel HMIMINA

      December 4, 2013 at 7:41 pm

      • >Statistical significance is a prerequisite. No one concludes based on statistical significance only (except, in fact, Seralini).

        In fact, ALL GMO toxicity studies are based on statistical significance only. The auxiliary control groups are used in such a way that if a significant difference is found between the experimental and normal control groups, the difference isn’t counted unless it is also outside the range of ±2 SD of the auxiliary control groups. This conceals any suggestive patterns that might arise where variables related to a specific bodily function or organ show important changes as a group, but all fall within the norm established by the auxiliary control groups. Hayes’ book discusses this issue also.

        >But no honest scientist would conclude based on non-significant differences !

        Quoting from Hayes’ own book again:

        _If used, statistical tests should be viewed simply as a tool to help identify differences between groups and not as the principal justification for decisions concerning potential test material effects_

        >That’s what Seralini did. When Seralini says that a two-fold increase of mortality was seen in the treated groups, it means “4 dead rats divided by 2 dead rats = 2”. This kind of thing should NEVER be published anywhere.

        Eh, Seralini should have used the “best practices” paper concerning unexpected tumors found in a toxicology study to guide his own writing, I agree, but you appear to be suggesting that Seralini shouldn’t have reported the unexpected tumors at all.

        >You should indeed read this book. But don’t expect to find something like “it’s ok to interpret non-significant differences as you like”.

        That’s not what I said, but as Hayes’ book says,

        _**If used**, statistical tests should be viewed simply as a tool to help identify differences between groups and not as the principal justification for decisions concerning potential test material effects_

        What is important in an exploratory toxicology study is the *pattern* of test results, combined with the pattern of other measures, NOT whether or not some measure or measures differ significantly from the control group.

        saijanai

        December 9, 2013 at 11:05 am
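
The disputed “two-fold increase in mortality” (4 dead rats vs. 2 dead rats out of 10 per group) can be checked directly. Below is a minimal sketch in Python using only the standard library; the two-sided Fisher exact test is a generic illustration of the small-sample point being argued in this thread, not the analysis actually performed in either paper:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    def p_of(k):  # hypergeometric probability of k "successes" in row 1
        return comb(row1, k) * comb(n - row1, col1 - k) / denom
    p_obs = p_of(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # sum over all tables at least as extreme (as improbable) as the observed one
    return sum(p_of(k) for k in range(lo, hi + 1) if p_of(k) <= p_obs + 1e-12)

# Treated: 4 dead of 10; control: 2 dead of 10 ("a two-fold increase")
p = fisher_two_sided(4, 6, 2, 8)
print(round(p, 2))  # -> 0.63
```

A ratio of 2 sounds dramatic, but with groups of 10 the two-sided p-value is about 0.63, so a difference of this size is entirely compatible with chance.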

  24. The criteria used by Elsevier/FCT to retract the Seralini study are flat-out bogus! Monsanto used the SAME STRAIN OF RATS AND THE SAME NUMBER OF RATS in their 90-day study!!!! Is that being retracted?

    “EUROPEAN NETWORK OF SCIENTISTS LASHES OUT AT RETRACTION OF THE SERALINI STUDY. The arguments of the journal’s editor for the retraction, however, violate not only the criteria for retraction to which the journal itself subscribes, but any standards of good science. Worse, the names of the reviewers who came to the conclusion that the paper should be retracted have not been published. Since the retraction is a wish of many people with links to the GM industry, the suspicion arises that it is a bow of science to industry. ENSSER points out, therefore, that this retraction is a severe blow to the credibility and independence of science, indeed a travesty of science.”

    We should be able to see the “thorough and time-consuming analysis of the published article” and the people involved…

    Journal’s Retraction of Rat Feeding Paper is a Travesty of Science and Looks Like a Bow to Industry:

    http://www.ensser.org/fileadmin/user_upload/ENSSERcommentsretraction_final.pdf

    Journal retraction of Séralini study is illicit, unscientific, and unethical *

    http://gmoseralini.org/journal-retraction-of-seralini-study-is-illicit-unscientific-and-unethical/

    The Goodman Affair: Monsanto targets the heart of science:

    http://independentsciencenews.org/science-media/the-goodman-affair-monsanto-targets-the-heart-of-science/

    susanstop

    December 1, 2013 at 1:51 am

    • Perhaps the question that should be asked at this moment, now that you have shown the clearly biased nature of this Elsevier journal’s (FCT) editorial and/or reviewer policy and/or decision, is: is there any link between Monsanto and Elsevier or its parent company, Reed Elsevier? We are talking about two global, power-wielding corporate giants; at some point, do their corporate interests intersect? If yes, then science (and publications) are just a side-show and a distraction tool, used to prove something only when it is convenient (and useful) to do so, while retractions are simply deflectors.

      As I see the number of retractions increasing, especially those related to hot debates or questionable methodology, I ask: is it right to retract such papers, since they form nothing but building blocks in the evolution of a scientific paradigm? Papers with clear cases of significant (p<0.05) plagiarism, or those with duplicated tables or figures or severe methodological or data fraud, should be retracted (no discussion). But those that have stoked fiery debate, such as the Seralini paper (the French King’s head, etc.), I claim should stay in the literature, as they actually constitute important historical documents that will fuel fiery debate long into the future.

      What this latest fashionable wave of retractions is doing is causing irreparable damage to science. In some cases, we may be doing irreparable damage to humanity (through the scarring of science). Revenge, cover-ups, legal challenges and silencing, disagreements, political maneuverings, bias and corporate influence have now become part and parcel of science publishing, but these need to be stamped out. Most of us scientists who just want to do our science, and who see this massive power play playing out at levels way above where we stand, are enraged that we are being treated as the pawns on the chess-board. Not only should the fraud committed by scientists be exposed, but so too should the rot induced by greedy and unethical publishers, corporations and biased editors who seem to be ignorant of the political agenda they serve and of their own ignorant and biased “editorial” decisions.

      JATdS

      December 1, 2013 at 3:11 pm

  25. Not sure if this has been mentioned… but I wonder what the effect of the retraction will actually be…

    Any editor who thinks this is a bogus retraction can still allow a scientist to cite this paper. Nothing illegal about that.

    I believe that the copyright of the study still falls to Elsevier, so can they, if they so choose, prevent the authors from republishing this work in a different journal?

    Finally, will this retraction have any effect whatsoever on government? Peer-reviewed publication has no direct mandate for policy. People who believe the study will just claim there was a politically motivated retraction… politicians certainly don’t have the know-how to discern…

    qaq

    December 3, 2013 at 4:52 pm

  26. Anyone who reads the paper will immediately know how poorly it was done. Simple standards were not followed in reporting data. I expected to see p-values and/or hazard ratios for the kind of work being done/claims being made, and instead got confusingly-drawn plots that lump disparate data types together (eg “tumors over 25% body weight, more than 25% weight loss, hemorrhagic bleeding, etc.” in Fig 1). You would also never see Fig. 3 in a respectable paper. It is very clearly done for sensationalist reasons. The paper was not retracted merely for being bad science, but for being unethically written and presented.

    John

    December 3, 2013 at 11:09 pm

    • “Anyone who reads the paper will immediately know how poorly it was done. Simple standards were not followed in reporting data. I expected to see p-values and/or hazard ratios for the kind of work being done/claims being made, and instead got confusingly-drawn plots that lump disparate data types together (eg “tumors over 25% body weight, more than 25% weight loss, hemorrhagic bleeding, etc.” in Fig 1). You would also never see Fig. 3 in a respectable paper. It is very clearly done for sensationalist reasons. The paper was not retracted merely for being bad science, but for being unethically written ”

      In fact, in toxicology studies, statistical significance is considered a very poor tool to use to evaluate potential toxicological effects.

      In the book on toxicology edited by Hayes, the editor who has now retracted the Seralini study, the chapter on evaluating toxicology research explicitly says “IF” statistical analysis is used:

      [Google books search to same section in 5th edition](http://books.google.com/books?id=vgHXTId8rnYC&pg=PR23&dq=Principles+and+Methods+of+Toxicology&hl=en&sa=X&ei=7JVzUqe8JYG2iwKozYDADw&ved=0CDoQ6AEwAA#v=onepage&q=%22statistical%20analysis%20of%20clinical%22&f=false)

      Chapter 21
      Principles of Clinical Pathology for Toxicology Studies
      Robert L. Hall

      _Principles and Methods of Toxicology_, Fourth Edition,

      edited by A. Wallace Hayes.

      [...]

      >**Statistical Comparisons**

      >Statistical analysis of clinical pathology data is commonly performed in toxicology studies, and it often results in identification of several statistically significant differences between control and treated groups. However, all effects caused by a test material need not be statistically significant, and all statistically significant differences do not necessarily represent true or toxicologically significant effects. _If used, statistical tests should be viewed simply as a tool to help identify differences between groups and not as the principal justification for decisions concerning potential test material effects_ (20, 26). It is important to remember that the power of a statistical test is affected by the number of animals per group. Fewer test subjects increases the likelihood that statistical tests will fail to identify a true effect. Since the number of animals per group is usually quite small for studies with dogs or monkeys (e.g., 4/sex/group or less), it is imperative that the data for each animal at the different test intervals be examined to look for patterns of change over time among the treated animals that are absent among the control animals. As the number of animals per group increases, the frequency of identifying statistically significant differences of very small magnitude increases. In rat studies with 15 or more animals per sex per group, it is common to observe statistically significant differences that have little or no effect on the health of the animals and are not toxicologically relevant.

      saijanai

      December 4, 2013 at 7:06 am
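
The power-vs-group-size point in the quoted passage can be made concrete. A rough normal-approximation sketch of a two-sided two-proportion z-test follows; the 20% and 40% mortality rates are illustrative assumptions, not figures from either study:

```python
from math import sqrt
from statistics import NormalDist

def power_two_prop(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test, n animals per group."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    pbar = (p1 + p2) / 2
    se_null = sqrt(2 * pbar * (1 - pbar) / n)          # SE under the null
    se_alt = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under the alternative
    delta = abs(p2 - p1)
    return 1 - NormalDist().cdf((z * se_null - delta) / se_alt)

# Chance of detecting a true doubling of mortality (20% -> 40%):
print(round(power_two_prop(0.2, 0.4, 10), 2))  # n=10 per group: roughly 0.16
print(round(power_two_prop(0.2, 0.4, 50), 2))  # n=50 per group: roughly 0.59
```

With 10 animals per group, even a genuine doubling of mortality would be detected less than one time in five, which is exactly why the quoted chapter stresses looking at per-animal patterns rather than leaning on significance tests alone.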

      • Where does the book say that it’s OK to conclude based on a simple ratio?

        Gabriel HMIMINA

        December 4, 2013 at 6:36 pm

        • Where in the book does it say it’s OK to conclude based merely on statistical significance? The book explicitly warns against doing that.

          saijanai

          December 9, 2013 at 1:44 am

          • Statistical significance is a prerequisite. No one concludes based on statistical significance only (except, in fact, Seralini).
            But no honest scientist would conclude based on non-significant differences!
            That’s what Seralini did. When Seralini says that a two-fold increase of mortality was seen in the treated groups, it means “4 dead rats divided by 2 dead rats = 2”. This kind of thing should NEVER be published anywhere.

            You should indeed read this book. But don’t expect to find something like “it’s ok to interpret non-significant differences as you like”.

            Gabriel Hmimina

            December 9, 2013 at 10:23 am

    • “You would also never see Fig. 3 in a respectable paper. It is very clearly done for sensationalist reasons.”

      How about this figure?

      http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2682588/figure/F1/

      That image has been reproduced myriad times including in presentations to the EFSA concerning endocrine disruptors (http://www.efsa.europa.eu/en/events/documents/120614l-p07.pdf), as well as many, many news outlets, without objection.

      Letting a mouse grow so large without terminating it… surely that is inhumane…

      The objection really is because it is a negative study about GMOs, not because the animals were abused by researchers who failed to terminate them in a timely manner.

      saijanai

      December 4, 2013 at 8:25 am

      • There is nothing inhumane about allowing a mouse to grow that large as long as it can move, eat and excrete normally. The size gain is distributed over the whole body rather than in massive tumors that get in the way of the legs. The huge mammary adenomas in the Seralini study, on the other hand, are clearly of a size that would impede normal movement and are clearly becoming abraded by being dragged around as the rats try to move. Furthermore once the mammary adenomas of Sprague Dawleys get to that size they can become necrotic because they are crushing their own blood supply.

        • This retraction is outrageous. It is indisputably a result of naked corporate pressure, and its ramifications are huge, huge, huge. Here is a leading scientific journal saying that its scientific standards are for hire by the highest corporate bidder. If the editors were deliberately looking for a way to discredit their own journal and science in general in the eyes of the public, they could have done no better job. Bravo.

          Academic Daylight

          December 5, 2013 at 10:38 am

          • How would letting such a biased study stand, published without any notice, be better?

            As far as “discredit” is concerned, the wrong was done when they decided to publish Seralini’s article in the first place. This retraction is just a poorly worded correction.

            Gabriel HMIMINA

            December 7, 2013 at 9:45 am

  27. For a thoughtful perspective on this issue from a respected scientist and founder of University of Guelph’s Organic Agriculture Program: http://www.cban.ca/Resources/Topics/Human-Health-Risks/Open-Letter-to-Canadian-Consumers

    JodiKoberinski

    December 8, 2013 at 3:58 pm

    • I’m not so sure that a respected scientist would refer to that Mezzomo paper as supposedly being critical of GMOs. There’s a good chance that she, and/or many of her current colleagues in the organic farming business, use the same Bt-based toxins as tested in that withdrawn paper. In GMOs you only have the pure protein; these poor mice were fed some poorly characterized mixture of protein and “other stuff” (culture media, perhaps).

      Marco

      December 9, 2013 at 11:10 am

