Here’s a nice example of how science should work.
A team of Swiss microbiologists has retracted their 2012 paper in PLoS One on the genetics of the TB mycobacterium after learning that the fusion protein they thought they’d used in their study was in fact a different molecule.
Here’s the retraction notice for the article, “A β-lactamase based reporter system for ESX dependent protein translocation in mycobacteria,” which has been cited once, according to Thomson Scientific’s Web of Knowledge:
The authors request the retraction of the publication “A β-lactamase based reporter system for ESX dependent protein translocation in mycobacteria” because the main conclusions of the publication are unreliable.
The main finding from the publication was the development of an ESX (precisely ESX-3) dependent protein translocation reporter based on an ampicillin resistant phenotype. Subsequent investigations by the authors have demonstrated that the reporter construct “pIV-blaTEM”, which was supposed to express the “Bla-TEM-EsxG” fusion protein, was accidentally replaced with a vector expressing a fusion protein carrying a twin-arginine-transporter (tat) secretion signal (not referenced in the publication). Re-testing of a newly constructed “true” BlaTEM-EsxG reporter strain indicated that the vector pIV-blaTEM does not confer an ampicillin-resistance phenotype. The second ESX reporter construct (pVI-blaTEM) expressing BlaTEM-EsxB does confer an ampicillin-resistance phenotype, but this phenotype does not depend on a functional ESX-1 secretion system as demonstrated by characterization of a subsequently generated eccD1 (MSMEG_0068) deletion mutant expressing BlaTEM-EsxB.
That’s about as straightforward an explanation as readers might wish for in such cases. Kudos to the researchers for their transparent handling of the problem, and for correcting the scientific record swiftly.
Confirming the identity of DNA constructs before proceeding with a study would be a nicer way to make science work.
That’s a “nice example of how science should work”?!
Design an experiment, carry it out so poorly that you fail to establish categorically that the fusion protein is the one you think it is, write the work up and send it to a journal that accepts pretty much anything so long as it seems to have been carried out properly, and then retract it.
I don’t think the authors deserve any particular kudos, although they did the obvious thing. Anyone finding themselves in that mess would wish to retract the paper – better to sort out your own mess than have someone else do it. I don’t see that scientists should be praised for doing the obvious…it perpetuates the myth that the standard approach to science lies near, or even over, the border between careful experimentation and analysis and dodgy practice, so that anyone doing something as straightforward as retracting inadvertent junk deserves special praise.
I’ve several times found myself in the mortifying state of realizing that I/we’ve cocked something up quite severely and that what we thought we’d done isn’t true at all, followed by embarrassing confessions to colleagues and collaborators. But putting together a study for publication induces a state approaching neurosis that involves checking and checking and checking. Having a journal like PLoS One around means that one can lower one’s standards just a little more.
Personally speaking, it’s more than once crossed my mind that some stuff we’d really like to publish, but that isn’t quite ready, could just be written up for PLoS One. In my opinion there’s a major dichotomy between the view that open access and PLoS One’s “publish anything” model is simply wonderful and the view that “isn’t current science awful because people cheat and experiments aren’t reproducible” – views which quite often seem to be held by the same individuals. I suspect we’re going to get even more of the latter the more we embrace the “easy to publish, just cough up the cash” model….
Kudos to the researcher, because he identified his own error, even though he wasted public money. In a similar case, had somebody else identified the error a few minutes before him, it would be called research misconduct; institutions, funding agencies and the ORI would then waste more public money and time on an investigation and wreck the scientist’s career. And who is paying for this? The public! So I’m confused whether the kudos should go to the public, the researcher, the institutions, the funding agencies or the ORI. I forgot to ask what quality measures the journal took when it published this paper in the first place. Well, kudos anyway, even though I don’t know whom to praise.
Something I haven’t quite got to grips with:
Ten years ago there was no PLOS, nor the hundreds of new electronic journals that now provide around double the number of “publication slots” we had previously. No doubt the number of scientists has increased somewhat, especially to accommodate the rising economic status and pretensions of China and India. But is there really that much more stuff that needs to be published? Were scientists simply not publishing fast enough 10 and more years ago?
Personally I don’t think so. There are lots of reasons for the massive expansion of journals and publications, and not many of them are positive reasons in my opinion, even if they do accord with the imperatives of our age (easy money-making; CV-puffing in an attempt to raise status, etc.). It’s not surprising in that environment that retractions have increased.
PLOS provides some useful insight. PLOS One accepts pretty much anything and not surprisingly has quite a few retractions (but maybe not that many as a proportion of its massive publication numbers??). There are some very good PLoS journals, and what distinguishes these is their high bar for acceptance; i.e. they aim for very high quality in relation to novelty and perceived importance and reject a high proportion of submissions out of hand.
That seems the bottom line to me. We seem to demand extremely high quality in science from the sidelines, but put up with an expansion of publication sources that pretty much ensures a litany of retractions associated with poorly constructed experiments and analyses, misdemeanours of varying seriousness, and general rubbish. Maybe we’re in some sort of transition between old-style and new-age publishing practices, but unless we embrace very high standards of quality that permeate the whole system – from scientific practice up through publishing and down through mentoring and teaching – we may as well just order in the popcorn and enjoy the car-crash delights of retractions brought to you by a blog near you…that would be quite modern too.
These guys appear to have made a mistake. Mistakes get made from time to time. They corrected it, as far as I can tell without outside pressure. Is there more?
1. It seems retractions happen all over the place and not just in open access journals. In any case retractions are probably only a small subset of the work that wouldn’t reproduce if performed blinded.
2. There is no point in funding labs to do research that can’t be published. There has been an explosion of research, research funding and countries joining the research community, hence there are places for that research to appear.
3. The practice of subscribing to a journal and sitting down and reading its table of contents is not so common these days, maybe with one or two peak journals. Otherwise I think people just do what I did: run PubMed searches regularly or set up PubMed alerts and read the papers in their field as soon as they come out, regardless of the journal. I think it is pretty much a given in at least some areas of modern science that you have to take what you read with liberal tablespoons of salt. Artificially choking off the outlets for publication will simply increase the already substantial incentives for fraud.
In defense of PLoS One, sometimes it is the only place good work can be published. Our group has experienced some very destructive reviewing from the world leaders in our field. Our markers are simply better than their markers, making their patents effectively worthless. So we have been told, by people who would know, that these leaders have been actively either writing destructive reviews or contacting editors to have our work blocked.
The first time was with a 25+ impact factor journal. We received feedback from the reviewers and positive feedback from the editor, did all of the requested work and resubmitted, only to have the editors reply with one sentence: “we are no longer interested in this work”. We found out that a very important person had contacted the editor and had it blocked. After that, no journal with an IF above 10 would even review it. We finally got it into the review process with another good journal, around 9 IF. The reviewers came back with “too good to be true”; rejected.
We finally said, “scr*w it”, broke the paper up and submitted the first part to PLoS One.
The work is significant enough to receive millions in funding to bring it towards the clinic. So we know it is relevant.
We just can’t get it published.
And before you suggest that I or anyone else report this to the ORI….please realize that the people I am talking about are too big to fail. They are untouchable. The best revenge is to get into the clinic and start curing people. It is our driving goal, and they can’t stop or block it.
Yes, that’s horrible and disgraceful. However, I expect that you’ll “win” in the end. In my experience good work does get its due recognition wherever it’s published. You highlight the fact that there are several reasons for publishing in PLoS One, some of them not so good (e.g. that it’s relatively easy, and thus one might not take the care over experiments and data that one might otherwise take, with the greater risk of error and the dreaded spectre of non-reproducibility that haunts these pages!)…. You’ve identified a good reason for publishing a paper there.
Sadly, no one wins in these situations. You just have to keep fighting the good fight.
I believe that several months ago there was a post on this website showing the correlation between impact factor and number of retractions: the higher the IF, the greater the number of retractions. Wouldn’t this argue against some of the points above?
An alternative argument might be that, because PLoS One is more accommodating to authors, there is, er, less incentive to be inventive in order to get published. This may result in giving the journal more of the consistency of watery piss than the fine red wine offered by Nature or Science, but on the other hand, I think there is more variety in what is being reported. Some of it may be unimaginative, or just plain weird, but you can use the site statistics to filter out the good stuff.
Can I type ‘piss’ on this site?
Someone should keep an eye on PLoS One – the numbers keep going up.
I’m glad of the emergence of new journals including PLOS ONE. I spent many years getting things rejected because the editor or one of the anonymous reviewers didn’t think it was “important” enough. I’ve seen tons of bad science get published in top journals because it was sexy or politically correct and would sell copies and make the evening news. That’s hardly good science. PLOS ONE’s philosophy is to publish good science and let the readers decide what is important.
The problem here seems to be that after making the construct they wanted, they somehow had lab contamination or a mix-up and did most of their experiments with a different construct. That sucks, but it could happen to anyone.