Yesterday we reported that Cell was looking into problematic images in a recent paper on human embryonic stem cell cloning. We’ve now heard from the journal about the nature of the inquiry.
Mary Beth O’Leary, a spokeswoman for Cell Press — an Elsevier title — tells us that:
Based on our own initial in-house assessment of the issues raised in PubPeer and in initial discussions with the authors, it seems that there were some minor errors made by the authors when preparing the figures for initial submission. While we are continuing discussions with the authors, we do not believe these errors impact the scientific findings of the paper in any way.
O’Leary also dismissed the notion that rushing the article into print compromised the review and editorial processes:
A number of comments about these errors in articles and blogs have drawn connections to the speed of the peer review process for this paper. Given the broad interest, importance, anticipated scrutiny of the claims of the paper and the preeminence of the reviewers, we have no reason to doubt the thoroughness or rigor of the review process. The comparatively rapid turnaround for this paper can be attributed to the fact that the reviewers graciously agreed to prioritize attention to reviewing this paper in a timely way. It is a misrepresentation to equate slow peer review with thoroughness or rigor or to use timely peer review as a justification for sloppiness in manuscript preparation.
“…no reason to doubt the thoroughness or rigor of the review process…”
This phrase is as good as pleading guilty.
If you ever find yourself typing something similar, STOP! Open a new document and write your letter of resignation instead. You’ll thank me in the end…
Completely agree with this. I think the lame-stream and retraction-frenzied media whipped this up into something that it just wasn’t. Why so many apparent scientists are so bloodthirsty is beyond me, and these crying-wolf episodes only serve to discredit many anonymous blogs. RW did a decent job here, though, as the original post was very cautious, and it is nice to see that lessons have been learned on this site.
OK – the basic problem in this paper: if the same figure panel is shown within two different figures, with two different label tags, and with different cropping of the same picture in the two figures, then this cannot be attributed to “some minor errors made by the authors when preparing the figures for initial submission”. Given these so-called minor errors, how can one trust that errors were not also made during experimentation and data collection? And, per ORI’s definitions and Cell’s instructions to authors on image manipulation and data presentation in figures, this case clearly and completely falls within the boundary of “RESEARCH MISCONDUCT”.
Regarding the peer-review process and the explanation from Cell Press – it is a very clear case of insufficient rigor in the review of this manuscript. However, it is not surprising, because most of these big papers and breakthrough manuscripts are reviewed by the so-called experienced and famous senior scientists in the field, who actually are symbiotic friends.
If Cell Press is so eager to block the retraction of this paper, they need to make a deal with the PI/group leader to share all the reagents and methods with a completely independent group (no senior established investigators, please!) to reproduce the results, and let all the experimentation, data collection, and analysis be done with continuous videotaping. If the results are reproduced (as published), then they should not retract this paper, and they should publish the repeat work as well. This would be a powerful way to validate this breakthrough research.
iloveresearch – I hope you understand the seriousness of this case now.
Totally agree with your posting!
“…are reviewed by the so-called experienced and famous senior scientists in the field, who actually are symbiotic friends.” I’m all for fraud busting, but this smells of professional jealousy to me.
“…are reviewed by the so-called experienced and famous senior scientists in the field, who actually are symbiotic friends.”
As unfortunate as it is, this happens all the time. It is unclear how to prevent this, as part of a scientist’s job is to network at conferences, so the “you scratch my back, I’ll scratch yours” impulse will definitely be there.
Opop wrote “Unclear how to prevent this …”
Double-blind peer review.
(Won’t stop it, but will definitely help. Double blinding helped for clinical trials.)
“Why so many apparent scientists are so blood-thirsty is beyond me”. It’s not that difficult to grasp, really. There are essentially two ways to feel good about oneself: 1) be or become better than others, and 2) put others down any way one can. With actual talent running short, option 2) is what is left.
Average PI, I am disappointed you see this as “..put others down any way one can”.
Do you have any comments on the science article itself? For example, are there any image problems in the paper?
I think Average PI meant it in a general way, really, and his comment seems sound enough. Yet I do feel the challenged paper deserved at least an Expression of Concern, which would indeed reflect the feelings of most of the scientific community about it. I do not know why we insist on expecting paid publishers to say what scientists want to say amid the open communication era.
I’m a woman, actually 🙂
Average PI, I could not agree more with you. This may be one of the most important publications of the last 20 years, and the senior author made the cells available for anyone to confirm their findings.
I also think that Cell did the right thing in accelerating the review process and the publication in this case. These findings are important enough that the expedited review was fully justified.
nenson, does anyone have the cells yet? Are the results reproducible?
For one, I have been promised many a cell line and model, only to be told at the last minute that there was an unfortunate liquid nitrogen crash and the cells are no longer available, or that the animal model stopped reproducing and the embryos are no longer viable.
If the results have been reproduced, please post a link.
I am guessing the cells will be forthcoming to other labs – and if they are forthcoming, I think it is very likely they will pass the DNA-based tests.
I would be interested in whether the gene expression microarrays replicate – particularly Figure 7, the hESO-NT1 versus hESO-7 comparison, where they show an r value of 0.996, which is identical to the level of correlation seen for technical/biological replicates within cell types. This is despite the two lines having totally different genotypes.
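For concreteness, this is the kind of check being described – a minimal sketch in Python, where the toy data and variable names are made up and merely stand in for real log2 microarray values:

import numpy as np

# Toy stand-ins for log2 expression values from two microarray samples;
# a real array would have tens of thousands of probes.
rng = np.random.default_rng(0)
profile_a = rng.normal(8.0, 2.0, size=20000)              # hypothetical NT-derived line
profile_b = profile_a + rng.normal(0.0, 0.1, size=20000)  # near-duplicate profile

# Pearson correlation between the two expression profiles.
r = np.corrcoef(profile_a, profile_b)[0, 1]
print(f"r = {r:.3f}")  # comes out near 0.996+ because the added noise is tiny

# The point above: r of about 0.996 is what one expects for replicates of
# the same sample; lines with genuinely different genotypes should correlate less.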
Littlegreyrabbit
Let’s wait and see what the results actually turn out to be, if any, before saying the tests will be passed.
I’ve seen this exact thing before, and I expect the tests to show the opposite of what you expect.
So, let’s wait and see.
@stewart: they may get a perfect 10 out of 10, if they go to the right hands… you know what I mean!! You never know.
I understand that human dermal fetal fibroblasts are fairly straightforward to turn into iPS cells – would it be possible to induce them to pluripotency and then transfer the nucleus?
Not sure that this paper is all that significant at all. The human ES field is extremely over-hyped and has delivered very little in terms of therapeutics or advancing basic research. Also, SCNT reprogramming has been around for many years already, so getting it to work in humans is little more than a technical advance rather than a real advance in our understanding of developmental biology.
I wish these ES related fields would fizzle out already so we could get back to doing real science. Just my opinion.
It’s not “crying wolf”; it’s “peer review”.
One might be measured in choosing words or one might not be. That is not really important. Cell, Elsevier, and all other journals and publishers have the same rules. This paper broke them. This raises a question asked here and elsewhere many times: how is it possible for a ‘scientist’ to make such “errors”, and for a top-notch peer review process at a prestigious journal not to detect a single one of them? There is a whiff of paradox here: after all, the bar for journals such as Cell is apparently incredibly high.
Moreover, this is not an isolated incident. It is frustration with double standards that leads people to vent their anger. Fix the problem and the associated blood-lust goes away. Accept the lame responses from publishers, authors, and institutions, and it will get worse.
Exactly which “rules” did this paper break?
In response to your question:
Photographic images of cells are primary data in cell biology. If you show me a picture of some cells and tell me these are cells A after treatment B, and then a second later, you show me the same picture and tell me these are cells C after treatment D, you are falsifying data.
An author needs to know about every picture taken in the lab, every blot made in the lab, everything in the lab: when it was made, how it was made. The correlation between the picture and the story behind it is obviously vital. We have to trust the author that the correlation between picture and story is true, because we have no real way of proving it. Now, reusing the same picture with different stories automatically destroys this trust, because it makes it obvious that at least one story behind the picture is not true.
Therefore, it is inexcusable. You don’t mix up pictures. You can accidentally write “x200 magnification” instead of “x100 magnification”, but you do not mix up pictures.
In most of these reuse cases, I don’t even think it was a sloppy mistake, because I just cannot imagine how it could happen. In the end you have some, what, 30 or so core images that you want to show. If you did the experiments, you know them by heart. You don’t mix them up, and you know the story behind every image. And you surely don’t reuse the same image twice by accident. This last statement is just a general view of things and not an accusation of fraud in this specific case, as I certainly don’t know the details of this case…
Genetics, you are correct that researchers know the story behind each image; the images represent many months of work and years of planning, and together form the basis for the graphs and statistical tests used to establish whether there is an effect of cell B or drug D.
It is the very basis of all science.
As you may know, most modern imaging systems have inherent labelling unique to each image – it is impossible to mix up the images with those systems.
Well, now I am very confused. Here you are saying that every modern imaging system uniquely labels each image.
I’m trying to square it with the statement in the Nature interview:
“..the decision was made because of the limited number of available photographs that had a measure bar on the image.”
Indeed, the Figure 6D “h-NT1 Ph” image proudly displays this highly desired measure bar.
As has been pointed out, it has a close relative in Figure S5, with several “minor errors” (Cell’s words) such as a different crop, lower resolution, and the conflicting label “hESO-7”. What hasn’t been discussed is that the relative also lacks a measure bar in the place where the Fig. 6D image has one. Reuse of images has been admitted in the interview, but not the covering of undesired measure bars.
Do you see why I am confused, and can you or anyone else help? Under what circumstances would measure bars be so undesirable as to need eradication? Alternatively, might there be valid scientific explanations, such that the data could be made available to Cell Press to unambiguously confirm that the image has not been manipulated to remove inconvenient elements before publication?
Scrutineer
It’s not that the systems uniquely label the images; it’s more that the systems retain the imaging parameters from when the image was captured. These can include brightness, contrast, magnification, exposure, etc.
Some systems allow you to remove the measure bars. For example, if you take a picture of cells in flasks with a system, say one made by company X, you may then export the image (with or without a measure bar) as any number of file types (tif, jpeg, etc.) – easily transferable to PowerPoint and the like – but once the image has been exported, it loses the information from company X’s system.
The original image files may, of course, be opened in the imaging software used to capture them, and all the information, such as lens magnification, brightness, and contrast, is visible.
In the case here, it may be that the authors exported the images and lost track of the parameters – but, in my opinion, that is unlikely.
The authors could easily send the original images to the journal, which could open them in the original capture software and assess their relative merits. It would be a matter of a few minutes.
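To illustrate, here is a minimal sketch of that kind of check in Python, assuming the journal received original TIFF files; the file name is hypothetical, and a proprietary vendor format would instead need the vendor’s own software or a reader such as Bio-Formats:

from PIL import Image
from PIL.TiffTags import TAGS

# Hypothetical original capture file supplied by the authors.
with Image.open("fig6d_h-NT1_original.tif") as img:
    # TIFF files carry a tag directory holding capture/processing metadata;
    # images exported to JPEG and pasted into PowerPoint typically lose it.
    for tag_id, value in img.tag_v2.items():
        name = TAGS.get(tag_id, f"unknown tag {tag_id}")
        print(f"{name}: {value}")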
Thanks. It is good to know that there is a straightforward way to show that the measure bar wasn’t rubbed out. There is a hint of edging where the bar would be, but it might just be a JPEG compression artefact.
Other people would have university committees investigating them, but everyone is trying to gloss over what are obviously serious scientific issues just because the topic is a media darling and totally in-your-face about what it can accomplish.
Full scale investigation called for in this case.
I see two problems. First, even if the suspect images and data do not affect major conclusions, how can we trust such sloppy science? Why were the final data not vetted by the senior author prior to publication, and how can we be sure that there are not hidden problems with primary data records due to lax oversight? Second, the rushed review process clearly failed as these errors sailed right past the reviewers.
Because of sloppiness and haste, science has taken another hit. It’s deplorable. As scientists, how can we reason with those eager to cut budgets for the sciences if we can’t keep our house in good order, which includes maintaining the highest standards in peer review and editorial practices?
Yes, this is another case that diminishes non-scientists’ trust in research, especially in stem cell research. I can’t understand how this “mistake” could have happened. I think everybody here has written papers before and knows how long it takes to design figures in the format the journal wants. So you spend several hours on them and don’t notice that you have used the same figures, wrong labels, etc.?
What’s wrong with science is the pressure researchers face over funding and money – it’s that bad.
Reuters on this case:
http://www.reuters.com/article/2013/05/23/science-clone-stem-cells-idUSL2N0E41D120130523
Publish the story as sexy as possible, with ‘image misconduct’, and then try the actual experiments upon request!
So this is a good example of what is wrong with science. We have a false reward system, based on the name of the journal and the amount of grant income. The consequence is that some people will believe that the reward system is science. It isn’t; it is simply a very poor proxy for the allocation of resources and the measurement of performance.
Stewart is, of course, absolutely correct. Modern instruments time-stamp data, and this is always linked to metadata relating to the experiment. In those cases when raw data are eventually released for inspection by the community, such metadata are critical in ascertaining the path between the raw data and the figures in papers. It isn’t always easy; see the current travails of a group of people concerned about some two dozen papers that rely on STM imaging of nanoparticles: you can access the data here.
So there are only two conclusions possible.
1. Between the original image data and the organisation of these data for publication, the link to metadata was lost. This equates to very sloppy research, and therefore one cannot have any degree of certainty in the conclusions drawn from the work: they could be right or wrong; toss a coin.
2. Images were deliberately chosen to misrepresent experimental observations. This is fabrication. Here we can be more certain: the data cannot support the conclusions.
In both cases the conclusions may very well be correct. That is not the point. Conclusions drawn in discussions at the bar can also be correct, but they need hard evidence.
Some quality bullshit there from Cell.
My belief in the validity of most articles is inversely correlated with the impact factor of the journal they are published in.
I said it yesterday and I’ll say it again: for the love of Darwin, didn’t ANYBODY triple-check the figures before or during submission? At least twice?
I mean, we’re all in a rush to publish, but still…
There are 23 authors listed on the manuscript, 1 senior editor, 4 reviewers = at least 28 scientists. How is it possible that nobody raised a red flag on these ‘errors’? (or someone did, but after it was accepted for publication…)
Given the haste with which we now know the publication was prepared, it is quite likely that many of the 23 never got a chance to read it until it appeared on the Cell website 🙁
There are also discrepancies pointed out on PubPeer, like the duplication of microarray images in the supplementary figures, and in one of the main genotyping figures (Fig. 6B), where the oocyte donor looks identical to hESO-NT2, and hESO-NT1 and hESO-NT3 show hints of modifications to the background signal. It smells very, very fishy… Perhaps one might mislabel once in the rush to publish, but this happens multiple times throughout the paper. And what about error bars? There were none in the entire main manuscript. And why was there an experimental condition with n = 1 in Fig. 5B? Is there scientific rigour? You be the judge.