You might be forgiven for thinking that the editors were describing a bad relationship rather than a paper gone wrong: the journal Plant and Cell Physiology is retracting a 2004 article by Korean researchers who “manipulated and repeatedly used” micrographs.
The article, “Ornithine Decarboxylase Gene (CaODC1) is Specifically Induced during TMV-mediated but Salicylate-independent Resistant Response in Hot Pepper,” which appeared as a short communication in the journal, came from the lab of Kyung-Hee Paek at Korea University.
According to the retraction notice:
The Editor-in-Chief, with consent of the corresponding author (Kyung-Hee Paek), has retracted this paper because it has been brought to our attention that there are problems with the way in which Figures 2 and 3 were compiled and presented.
In Figures 2A and 2C, and Figures 3A, 3B, 3D and 3E, rRNA photomicrographs were manipulated and repeatedly used. The erroneous figures have been attributed to human error and the authors have not been able to locate the original data or provide sufficient information, such as lab notes, to prove accuracy and authenticity of published data. As stated in the instructions to authors, Plant and Cell Physiology does not tolerate any form of author misconduct, including manipulation of data or duplication of previously published data without the necessary permissions to reproduce copyright material, and include an acknowledgement of the source in their manuscript.
We would like to apologize for any inconvenience this incident may cause to readers of the journal.
The paper has been cited 10 times, according to Thomson Scientific’s Web of Knowledge.
We aren’t sure “human error” is quite the right phrase here. Booting a routine grounder to short is an “error.” “Manipulated and repeatedly used” doesn’t really sound like an error, does it?
Well, at least they were honest enough when caught to know the game was up, rather than pull out some images from a pile on the floor and send those in as the “correct” data. So rather better than the “corrections” that appear when image data are manipulated or re-used and the authors are able, very rapidly, to miraculously locate the “right” figures. In the numerous corrections of this type that appear regularly in journals, one has to ask how, if the lab’s control of data documentation was so poor that the “errors” occurred in the first place, the authors were then able to come up with the correct data. Such corrections also seem to lead to institutional exoneration of the perpetrators of any charge of misconduct or fraud. So well done the editors at PCP, and perhaps some recognition to the authors for sort of putting their hands up, though a more robust mea culpa would have been better.
Plant and Cell Physiology says it does not tolerate manipulation of data, but clearly it does, UNTIL SUCH TIME AS SOMEONE POINTS IT OUT. Who reviewed this paper? Does the journal even keep track of who can be trusted to review well, and exclude those who can’t? The craze to publish quickly is ridiculous and unnecessary. Give people time to review and pay reviewers. That is also a way to put a dent in predatory publishing (perhaps).
Elaine, alas, another peer who appears to have understood that a lot of the rot taking place in this field of science is related to incompetent peers (or pseudo-peers) and equally incompetent editors. By direct analogy, the publishers would also be equally incompetent for continuing to hire incompetent editors. One of the problems is that the electronic submission systems ask authors to suggest reviewers. Many a time, the editors rely heavily on the opinion and judgement of the “peers”, serving themselves only to make a final decision, in a demi-god-like fashion. So, I praise you for calling out not only the authors, who were caught red-handed, but also the peers and editors (and ultimately the publisher). A vicious cycle we are in.
Elaine, can you explain why you, apparently, expect good reviewers to find image manipulation?
Also, do you really think giving reviewers more time will improve the process? My personal experience (and, based on a brief survey of colleagues, it is not unique) is that more time just means the review goes on my high-priority, long-deadline list rather than my high-priority, short-deadline list. With all my other obligations, I really don’t see how a later deadline gives me more time to review. Whether it is due tomorrow or in three weeks, it still takes the same amount of time to review, time that cannot be spent on other things.
I also have my doubts about the effectiveness of paying reviewers. It would be nice to get that kind of acknowledgment for time spent, but does it really improve the review process? There is absolutely zero evidence that it does. Perhaps it would work with professional reviewers, people who are specifically hired to review papers for, say, 50% of their time.
Marco, that was an interesting observation. In the publishing process, all parties have responsibilities to ensure academic integrity: authors, editors, peers, journals. Whenever there is a flaw at any one of these levels, there are bound to be problems down the road. I agree with you that it is impossible for even the best of peers to detect all problems, such as plagiarism or figure manipulation. So, in that case, the publisher should hire, at a fee, a professional who is capable of scouring the literature in search of such fraud. I think that after a few more months of RW and many more retractions, publishers may start to think about this solution.

In addition to having editors (actually, what the heck do some of the editors do except reap the laurels and put a stamp on the acceptance/rejection button?) who should fully verify scientific integrity, there should always be no fewer than three TRUE peers, not some pseudo-peers picked up from a Google search in some predatory OA journal (www.scholarlyoa.com). In addition to these two layers of quality control, publishers seriously need to start thinking about having a figure and plagiarism specialist on their paid staff who would vet all papers across disciplines; larger publishers that churn out hundreds of papers a day could invest in a team. I think it would be worth the investment to save the administrative trouble and embarrassment later on.

Think about it: if you were a publisher and after 5 years you had 20 papers retracted from a journal that you publish, what message would that send to the peers? That journal and publisher would rapidly lose credibility, authorship and ultimately what they want, profits. I think that one of the goals behind this blog is not only to highlight the retractions, but also to find solutions to the root problems. I think that my suggestion is a very realistic, “doable” solution with clear benefits for all parties. Of pertinent interest: http://www.globalsciencebooks.info/JournalsSup/images/2013/AAJPSB_7(SI1)/AAJPSB_7(SI1)6-15o.pdf
I am less pessimistic than Marco but much less optimistic than JATdS. It is true that there are responsibilities at all levels of the process. Starting with the publishers: we all read papers from journals we trust, and ignore those we don’t. Some years ago one would try to cover everything written on a particular subject; now I doubt that one would do so. Over time, the predatory journals will assume the position they merit. JATdS suggests that publishers should do a preliminary screen; it’s a pity to have two or three reviewers each doing a plagiarism search, e.g. searching for similar titles, checking the authors’ papers for prior publications, and likewise checking references for similar publications. If this were done in the publisher’s office, the results could be given to the reviewers to check their validity.
I am an avid fan of RW but I am disappointed at the continuing discussion of image manipulation. Is this really the only fraud one can recognize? Let there be a publisher’s image screener who knows how to do it.
I am interested in reviewers being able to spot that growth conditions which are supposed to be aerobic are actually not, or that a strain does or does not have the gene(s) claimed; i.e., reviewers who can figure out whether the experiments were done correctly and can be interpreted as the authors propose.
Reviewing is a community responsibility, but there are so many more papers than there used to be, and so many less-well-trained scientists, that it really is difficult to maintain standards. I think that senior grad students could do the basic work of reviewing (consulting similar papers, suggesting pros and cons) and get paid for it (the money would make a difference to them), with the PI or another senior member of the lab making the final assessment.
It is true that one tends to let things wait if there is time. However, reviewing also requires time to think; I usually need 5-10 days to mull over what I have read and whether it all hangs together. Sometimes I know it doesn’t, but it takes me a while to figure out why. In any case, when one reviews a paper, one is considering another person’s life work. It seems to me that this deserves more than a quick once-over.
In recent years I have received useful reviews for which I am grateful, and mean-spirited or cursory ones.
A recent review suggested that we should have done two experiments, each of which formed the basis of a section of the results. The reviewer had not even read closely enough to see them. It took me two letters to get the editor to admit this was not tolerable. And this was a good journal!
Elaine, I have nothing but praise for your critique. In your field, it might be easier to flag and flog the editors for incompetence. In my field, the elitism of those who run the editorial boards of the top 100 journals is sickening. Not all of them are rotten and not all of them are incompetent. But I have seen so much serious ignorance and incompetence by these academics in their ivory towers (non-racist intonation) that it is impossible to rid the system of the rot. When I complain to them, with clear proof, they look down on me as a “trouble maker”. However, if we don’t complain, then how do we purge the system? One solution, for I have nothing to lose, is to publish a compilation of the editorial blunders in journals in my field of study. Not the low-life journals that publish just about anything; I am referring to those pompous IF-wielding journals that are supposed to represent class and quality. In some ways, we all have the responsibility of avidly defending science and advancing it. On the other hand, we have the deep responsibility of hotly defending its values and calling out anyone, even the status-quo elitists, who think their judgement is always right.
PS: I guess this is what we are looking for as a result of incompetent editors: http://online.liebertpub.com/doi/pdfplus/10.1089/ars.2012.4519