What do you do when it turns out the materials you used in your successful experiment weren’t actually the materials you thought they were?
If you’re Peter Zammit, of King’s College London, and colleagues, you retract a 2008 paper in the Journal of Cell Science. Here’s the notice, for “β-catenin promotes self-renewal of skeletal-muscle satellite cells:”
The authors wish to retract this paper after it has come to their attention that the two constructs that they thought were encoding for β-catenin or stabilized β-catenin are, in fact, expressing an unrelated protein. This was discovered when both constructs were used as part of another study, and it became clear that they had been confused with an unrelated construct from another project that does not express β-catenin or stabilized β-catenin.
The authors apologize for not having detected the error prior to publication.
The paper has been cited 37 times, according to Thomson Scientific’s Web of Knowledge.
We salute the authors, just as we did the team that realized they’d ordered the wrong mice and the one that became aware they’d used chemicals from a mislabeled bottle.
They could have gone all the way and disclosed what was in those plasmids…
Reminds me of this one too:
Nature 426, 100 (6 November 2003) | doi:10.1038/nature02141
Retraction: Hes1 is a target of microRNA-23 during retinoic-acid-induced neuronal differentiation of NT2 cells
http://www.nature.com/nature/journal/v426/n6962/full/nature02141.html
So, they used the wrong construct and got the right answer? It sounds like one of those “lucky accidents” you hear about. Perhaps the incorrect experiment bears further scrutiny.
There are a lot of factors that will drive or inhibit myoblast differentiation in vitro, so it is not completely ridiculous that they would get the expected result, but it is still rather odd. It is always possible that someone cloned a protein from the same pathway, etc., but without more detail we will never know.
A freshly made construct should always be sequenced, particularly when you have such a complex bicistronic plasmid that you basically hand-made. In all fairness, this is not really an honest mistake; it is poor bench work and should never happen. I mean, you would expect it to happen from time to time with students, but you would catch it immediately upon sequencing. Where was the supervision?
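The most basic form of that sequencing sanity check can even be scripted: once you have a read back from the sequencing facility, confirm that the expected insert actually appears in it, on either strand. This is only a minimal sketch in plain Python with toy sequences (the names and sequences below are hypothetical, not the actual β-catenin constructs); real verification would align full reads against the whole plasmid map.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def insert_present(read, insert):
    """True if the expected insert appears in the sequencing read
    on either the forward or the reverse strand."""
    read = read.upper()
    return insert.upper() in read or revcomp(insert) in read

# Toy example: the read either contains the expected ORF start,
# or an unrelated sequence has ended up in its place.
insert = "ATGGCTACTCAAGCT"            # hypothetical expected insert
good_read = "TTGACG" + insert + "GGCCTA"
bad_read = "TTGACGATGCCCCCCGGCCTA"    # unrelated construct

insert_present(good_read, insert)  # True
insert_present(bad_read, insert)   # False
```

A check like this takes seconds, which is rather the commenter’s point: the mix-up was catchable the moment sequencing results came back.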
This is actually a very common occurrence. I know of several colleagues who have attributed results to constructs that were later found to be incorrect. I remember one student who had done most of her PhD thesis work using a plasmid which later turned out to be empty!!! She just couldn’t be bothered to do a restriction digest to check it each time she prepped it. Nightmare.
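The restriction digest the commenter describes is itself easy to reason about in silico: cut a circular plasmid at every occurrence of an enzyme’s recognition site and compare the fragment sizes you expect against the bands on the gel. An empty vector gives a different pattern from one carrying the insert. The sketch below uses toy sequences and the EcoRI site (GAATTC) purely for illustration; it is not real molecular-biology tooling.

```python
def digest_fragments(plasmid, site):
    """Sizes of the fragments produced by cutting a circular plasmid
    at every occurrence of a restriction site."""
    seq = plasmid.upper()
    n = len(seq)
    doubled = seq + seq  # so sites spanning the origin are found too
    cuts = sorted({i for i in range(n) if doubled[i:i + len(site)] == site})
    if not cuts:
        return [n]  # enzyme does not cut at all
    sizes = []
    for k, c in enumerate(cuts):
        nxt = cuts[(k + 1) % len(cuts)]
        sizes.append((nxt - c) % n or n)  # full length if only one cut
    return sorted(sizes)

# Toy plasmids (hypothetical): an insert flanked by EcoRI sites
# versus the empty vector that lost it.
ECORI = "GAATTC"
empty_vector = ECORI + "C" * 30
with_insert = ECORI + "T" * 15 + ECORI + "C" * 30

digest_fragments(empty_vector, ECORI)  # [36] - single linearized band
digest_fragments(with_insert, ECORI)   # [21, 36] - insert drops out
```

The empty plasmid in the anecdote would have shown the single-band pattern on every prep; one quick digest per miniprep would have exposed it years earlier.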
Did she get a PhD?
Did she get her PhD?
Eventually, yes. But she had to do somewhat of a U-turn and start all over. It cost her a couple of years of her life.
I find that the hardest thing to teach recalcitrant students is the extreme care needed to do a reliable experiment and the number of times things have to be repeated. On the other hand, the more dedicated the student, the easier it is to convey that message. Elaine Newman
I agree! I have seen some PhD students spend seven years in a lab and still produce garbage data, as if all the hours spent training them went down the drain. People don’t double-check, for example. How many times do you find glitches even after having checked once? Almost always! 🙂
Totally agree with you both. I am becoming increasingly frustrated at students’ demands for seemingly instant results of glamour-mag status. It is hard to instil an attitude of scepticism about one’s own results (which I think is essential), as I find it is something that needs to come from within. I have said before that as techniques become more technical and perilous, these kinds of retractions will increase in number. At the end of the day, the PhD should focus much more on the training aspect of doing research, rather than papers, papers, papers.
After I made a mistake in the lab out of laziness, my PhD advisor once said to me that it is fine to cut corners, but that you must first learn where and when you can cut them. Advice that I live by today.
Good idea in theory, but basically what you’re asking a student to do is something more tedious that will slow down their work, mean they publish less frequently, and therefore leave them relatively uncompetitive for a professional job. In the end, we are evaluated on the number of publications in high-impact journals and how famous our post-doc advisor was.
That is not what I am asking for at all, actually! Since when did performing experiments correctly and with the right checks and controls become “tedious”? It is our job and these things are routine.
I think this is exactly the view that has to be changed. Science depends on deep thought and scrutiny of data and ideas. It takes a long time; that’s a fact. Forcing it into a business-model line of production will generate a lot of mistakes and destroy its reliability in the long run. And the problem is, we have been running on this track for quite a while now.
I’m all for doing careful, cautious work, even if it means having a poor publication record. In fact, that is what I do and what has happened to me. However, I think a lot of the failure of work to meet the required level of caution comes at the level of the advisor/PI. If advisors are not satisfied with the care and pace of the research in their lab, then it is up to them to help: go into the lab and do follow-up experiments with the grad student or post-doc, carefully study and analyze the results, and think about the work. I’ve worked in enough labs to know that this never happens, and advisors expect their minions to work almost at their level. These days, you have to be smarter than the advisor to get into academic/industrial research. There’s not enough burden on the advisor.
That’s a seriously naive statement. Learning to design, control and interpret the results of an experiment is fundamental to the scientific process. The role of the mentor is to teach this process in a real-world (thesis project) context, and the role of the oversight committee is to determine if the Ph.D. candidate is learning and applying the lessons to their research. Puppetry is for theatre, not for graduate students, who should be making an effort to demonstrate their progress towards intellectual independence, academic rigour and their capacity to innovate as soon as they establish themselves in the lab. TL;DR: troubleshoot!
That said, nobody ever invests as much care or as much brain power into a project as the primary researcher performing the work. If that happens to be you, then the degree of success you observe in the near term (and long term) will be proportional to the extent to which you absorbed and applied these lessons! If you aren’t willing to “own” your failures, then the likelihood that you will end up owning success is lowered.
“That said, nobody ever invests as much care or as much brain power into a project as the primary researcher performing the work.”
A laughable statement. Rarely does a PI think about the work that is going on in the lab more than the post-docs running the experiments and deciding what to do next. Most labs are post-doc driven: the PI takes the experiments and ideas from the grad students and postdocs and assembles them into a grant. He/she may get additional ideas by listening to colleagues.
Oops. I agree with TeaHag on this point, that the primary researcher (post doc, grad student) does most, if not all, of the thinking for the project.
Which I think supports my point that the PI needs to support the work in the lab more and help prevent poor work from occurring.