Do editors like talking about journals’ mistakes? Nature takes on retractions

One of the themes we’ve hit hard here at Retraction Watch is that there is tremendous variation in how journals deal with retractions. Some make notices crystal clear, while others seem to want to make them as opaque as possible. Some editors go out of their way to publicize withdrawals, while others bury them and won’t talk about them when they appear. 

In a Nature feature out today on retractions, Richard van Noorden highlights those disparities. He also highlights the fact that there are more retractions to talk about: As a graphic accompanying the piece makes clear, retractions have risen 10-fold in the last decade, even as the number of papers published has grown by less than fifty percent.

But even with that growth, the number of retractions — we’re on track for 400 this year, according to Thomson Reuters — is a vanishingly small percentage of the 700,000 papers published annually. Still, science prides itself on transparency — or should, anyway. Van Noorden gives Ivan the chance to offer some advice to those scientists and editors who are reluctant to acknowledge there’s ever any dirty laundry in science:

“I think that what we’re advocating is part of a much larger phenomenon in public life and on the Web right now,” he says. “What scientists should be doing is saying, ‘In the course of what we do are errors, and among us are also people that commit misconduct or fraud. Look how small that number is! And here’s what we’re doing to root that out.'”

You can read the whole piece here. We’ll also be doing a live chat with Nature on retractions on Tuesday.

20 thoughts on “Do editors like talking about journals’ mistakes? Nature takes on retractions”

  1. Questions which follow from the Nature article on retractions are:

    1. Is the editorship of Nature in at least two minds?
    2. Is Nature not sending mixed messages on people exposing scientific misconduct?

    On the 8th of September 2010 one editor of Nature wrote this piece about a “destabilizing force” in science:

    I find it quite hilarious that science can be so easily “destabilized” by a few e-mails.

    Yet only two weeks before, on the 25th of August, an editor of Nature, perhaps another one, had supported the early publication (“summary of the conclusions” below does mean early publication) of a report on the Marc Hauser case because of press attention and the need for others to get on with their lives.

    I quote:

    “the courage of the young researchers who alerted the university to their concerns over how the professor was interpreting his data”.

    “Fortunately, the silence did not last, and on Friday Harvard released a summary of the conclusions reached by its internal investigation”.

    Since that time Nature wrote more about the “destabilizing affair”. Apparently the accusations were near the mark all along.

    My suggestion is that the editors at Nature should take a course in “joined-up thinking”. By way of explanation, “joined-up writing” is British English for cursive script.

  2. Commenting here in a personal capacity, I wonder about this statement “What scientists should be doing is saying, ‘In the course of what we do are errors, and among us are also people that commit misconduct or fraud. Look how small that number is! And here’s what we’re doing to root that out.’”

    Does this reflect people’s (anyone’s) psychology? Most people I know, whether scientists or not, are more interested in doing their job to the best of their ability, rather than looking at everyone else in the world who is in the same profession and thinking about whether or not they are honest and if not how to “root” them out.

    For example, after the recent phone hacking scandal, does this hold?: “What journalists should be doing is saying, ‘In the course of what we do are errors, and among us are also people that commit misconduct or fraud. Look how small that number is! And here’s what we’re doing to root that out.’” (I haven’t seen any journalists doing that, though I’ve seen them saying that despite the hacking most journalists are honest – it’s the “rooting out” bit I haven’t seen or read.)
    Or bankers, social workers, politicians, doctors, etc? (There have been recent “bad apple” cases in all those professions.)

    Are scientists somehow different? Do they have lots of time to devote to this and other “scientific community” issues? (Most of the ones I know are focused on their own research and already feel overburdened by the amount of admin and so on that comes with the territory.)

    Journal editors/publishers who discover that a paper they have published may have flaws, and funders/employers of people whose work comes into question, have a duty to act in those cases as they have an ethical responsibility. It is when it comes to a scientist taking responsibility for other scientists, often in completely different fields, countries, employment situations, etc., that I find it hard to envisage what exactly they ought to be doing in your view. (Especially when the available data seem to be based on surveys or on specific disciplines, so may not be accurate and may not translate between disciplines – or between different organisational cultures, types of employer, etc.)

    1. Thanks for the thoughtful comment and question, Maxine.

      As far as your particular question about the hacking scandal, I think it’s quite clear that it has been the work of another journalism outlet, The Guardian, led by Nick Davies, that has rooted out that misconduct.

      I’m not arguing for all scientists to take responsibility for all scientists. That seems an extreme extrapolation of my quote. I’m arguing for scientists to take real responsibility for transparency. By “here’s what we’re doing to root that out” I mean scientists should be able to point to cases in which they’ve brought problematic behavior and findings to others’ attention, for example. Journal editors are scientists, of course, and need to take more responsibility than they currently do.

      Does this reflect anyone’s psychology? Well, yes. It reflects the psychology of people who step forward. And if the reason scientists aren’t taking the time to do that is that they “don’t like” particular aspects of their responsibilities, that’s their prerogative. But if that’s the choice they and their employers have made — and I think we’d agree that organizational cultures should make it easier to blow the whistle — it’s a statement that transparency is far down the priority list. I’d rather trust the people who’ve made transparency a major goal.

  3. Please, please, please hammer on Nature (no offense to Maxine) for their apparent policy of stifling “communications arising” (as noted by many commenters on this blog, editors appear to delay these and asymmetrically favor the authors responding to scientists raising important concerns). Note that PNAS appears to have a much more open policy for the discussion of papers in their “comment on” feature (or they publish a lot more of them). Nature has been presented with massive, overwhelming evidence of a fraudulent crystal structure in their pages (for example), and this paper is still not retracted, nor is there an “Editorial Expression of Concern” – there was a “Brief Communications Arising” allowed that was itself relatively neutered (yet it still states that the structure in question does not obey physical laws). This paper is from 2006. I generally like and appreciate many Nature titles, but they appear to be very obstinate in one area where they could do a lot of good. Nature titles are not alone in their reluctance to support the greater use of “Brief Communications Arising” by streamlining the process or reducing obstacles to it, but they are the most visible.

    Please see fraudulent structure here:

    1. Seconded! It’s even worse than your comment indicates – UAB investigated this structure and others published by the same P.I. during his time there and concluded that nearly all were fraudulent. Their report was released in December 2009, and a number of these have since been retracted by the journals in which they appeared. Yet Nature has continued to ignore the evidence. Perhaps when the NIH ORI makes its ruling the editors will finally act – but I’m not holding my breath.

    1. Also, Dan Vergano @ USA Today writes: More Wikipedia copying from climate critics.

      This may be early for Retraction Watch. On the other hand, Wiley knew about the 3 issues (Wegman & Said (2011), Said & Wegman (2009), and Said’s false affiliation …) by April, i.e., 5+ months ago.

      So this isn’t exactly an editor issue, but a publisher issue.

  4. Nature says:
    “When the UK-based Committee on Publication Ethics (COPE) surveyed editors’ attitudes to retraction two years ago, it found huge inconsistencies in policies and practices between journals, says Elizabeth Wager, a medical writer in Princes Risborough, UK, who is chair of COPE.”
    But that appears to be the goal of Wager & COPE: to obscure rules, to introduce a personal factor into the editor’s decision, to ease the making of arbitrary decisions. A COPE Council member, Irene Hames, in a recent comment to “‘Ill communication’ leads to retraction of tissue paper (sorry) for authorship issues” on this site, writes:
    “It’s up to editors to evaluate cases individually, using common sense and recognising when actions may have occurred not from any intention to deceive but, for example, out of ignorance of generally accepted publication and ethical norms. I have dealt with many authorship cases – it is one of the most common areas of concern and dispute for journal editors – and they can be very complex and often involve ‘ill-communication’ rather than deceit.”
    She, the COPE guidelines and the COPE chief, E. Wager, are working hard to enable editors to make arbitrary decisions. They are approaching the moment when plagiarism and every other fraud can be called either intentional or unintentional, well, depending on some personal circumstances, on who the accused person is, who the accuser is, etc.
    And E. Wager doesn’t ever hide her personal choices about who is to be guilty and who is not: she tried to prove that women do not lie, men do. She put this on the Brit.Med.J. blog.
    That prompted a comment on the blog: “Liz, how do we know you’re not lying?”

    Well, I could have answered this question. The matter was about the retraction of the second paper of these:
    1. “Cell Patterns Associated with Normal and Mutant Morphogenesis in Silver Stained Drosophila Imaginal Discs”, Michael Pyshnov and Ellen Larsen.
    This was a MS withdrawn upon my complaint that the second author had no right to be the author.
    2. “Cell Patterns Associated with Normal and Mutant Morphogenesis in Silver-impregnated Imaginal Discs of Drosophila”, Ellen Larsen and Aaron Zorn.
    (In the same journal.)
    In her letter to the editor, Larsen wrote: “I regret that owing to circumstances I shall outline below, I must withdraw this manuscript from consideration. I intend, however, to submit the results of a similar study (performed by myself and an undergraduate) in the very near future…” and she wrote: “The first author of MS 483-87, Mr. M. Pyshnov was a graduate student under my supervision for some five years. In my opinion he is a very creative scientist with great technical flair. Unfortunately, after discovering disc specific cell arrangements and their modification in a homeotic mutant he became unable to do more research. A year after he produced his last preparations (those found in the MS) his graduate student status was changed to “lapsed student”, ie, one who is free to return to complete requirements but who is no longer officially registered.”, and she concluded: “In the new paper I shall try to incorporate both the reviewers’ and your excellent stylistic suggestions so that these efforts will not have been entirely wasted.” (She speaks about the reviewers on the manuscript!)
    See documents at , also, see the academic decision to terminate my PhD program – because my scholarship ended! Not for any inability to do more research. (In fact, the experiments were finished and I was given a place to write the thesis.)

    COPE refused to advise the editor to withdraw the article, because:
    1. They said that there cannot be plagiarism of unpublished work.
    2. They said this was an authorship dispute, not plagiarism. (Above, see Larsen’s admission of my authorship of the two discoveries. No authorship dispute!)
    3. They said they never would give the editor the advice because he had seen this case before he became a COPE member.
    All three points were stupidly concocted new rules and a blatantly falsified fact.

    Liz Wager later said: “We have been in touch with Michael Pyshnov but were unable to respond to his complaint because it occurred many years before the journal in question became a COPE member (and, obviously, we cannot apply our codes retrospectively).” (Times Higher Education, 28 August, 2009.) This was a lie; in fact, only the three points above were the issues raised by COPE. Also, only about five months (not many years) had passed between my complaint to the editor and his membership in COPE. I asked COPE to act since my complaint remained valid – the plagiarised paper is still there – but they concocted rule #2 and sabotaged their own investigation. Why?

    How long will Nature refuse to see what COPE is doing?

  5. A nice and overdue piece by Nature on the current rise in retractions/research misconduct and how some institutions and journals fail to deal with it in an appropriate manner. However, in my opinion the article fails to discuss one important reason why researchers resort to manipulating/fabricating/plagiarizing data. Cuts to research and education are leading to fewer grants and, consequently, fewer permanent positions for an ever-increasing number of PhD students and postdocs to fight over. Early career researchers are aware that they have only a limited number of years to achieve a maximum of high-impact publications. Qualifying for permanent positions, nowadays even more than in the past, is a numbers game: impact factors, Hirsch numbers and publication records. Thus, there is little margin for error in a world seemingly accepting that productivity equals scientific excellence. The reality of everyday lab life, with its failing experiments and negative results, is of course unsuited to such a results-only approach to evaluating young researchers. It can therefore be no surprise that more scientists are now willing to tweak or fabricate results in order to save their careers (plus modern technology makes it easier). I think this is also reflected by the increasing retractions in high-impact journals – if you risk so much by cheating, it should at least be worth it. I don’t condone misconduct at all, quite the contrary, but I am confident that we will see a further rise in retractions due to misconduct, and convinced that the way pressure is being put on early career researchers is partly to blame.

  6. @Ivan, there are around 1.4 million articles published every year in Web of Science, not 700,000 as you’ve written. Makes the number of retractions even smaller!

    @GB: You raise an important point, that pressure on scientists today likely leads to more manipulation, plagiarism, fabrication, or at the least cutting corners and sloppy mistakes. But as I try to make clear in the article, I don’t think you can yet tie the rising numbers of retractions to this problem, because we all know that real levels of misconduct, plagiarism and sloppiness are much higher than ever gets recorded in retractions. (See Fanelli, PLoS One, 2009 for evidence). In other words: we are seeing more of the tip of the iceberg, but that doesn’t tell us too much about the rate at which the iceberg itself is growing.

    What you can say is that we are slowly seeing more willingness to retract, on the part of journal editors. This at least is bringing the problem to everyone’s attention!

    One further point: retractions in high-impact journals such as Nature and Science have always been higher than in lower-impact journals. Perhaps, yes, because of the pressure to publish in such journals; perhaps also because of the attention paid to these journals, as people will be keen to write in to correct errors that they see. But, from the statistics provided by Thomson Reuters from the Web of Science database, we are not actually seeing more retraction notices in Nature and Science in the past five years than we saw in the first five years of the 2000s. As I found out, the recent upsurge has come mainly from lower-impact journals, many of which issued their very first retractions in recent years.

    Your concern is still something we should all be worried about – I would just caution using the retraction numbers to prove any points on this.

    1. Thanks for your reply. I agree with you that awareness of misconduct and the willingness of editors to retract are on the rise. But the majority of retracted papers are from the recent past. If fraud was always at the same level, why don’t we see more of the older publications being pulled? They have been around for much longer, which means they (could) have been scrutinized for longer. Still fewer of them are retracted. Maybe it is as you say and it’s the current coverage that is misleading me, but I have the feeling that misconduct is indeed on the rise.

      With regard to high impact: please don’t take it personally, but for me journals other than Nature and Science also count in this group. Blood, PNAS and J. Immunol. all have attractive impact factors. And as your own graph shows, all those journals had a dramatic increase in retractions. Furthermore, as has already been pointed out on this blog before, the Nature and Science numbers from 2001-2005 are skewed. Take away the gargantuan numbers of retractions which were due to the Schoen affair (7 for Nature, 9 for Science) and you will end up with approx. 50% lower numbers in that bracket. Although I have not checked this, I bet you have more labs/institutions retracting papers in the span 2006-2010 than in the 2001-2005 period for those journals. So, although you are right that retractions are springing up across all journals, I think your numbers also confirm the point I made about the rise of retractions in high-impact journals.

    2. I feel that the bar to get an editor to look at scientific concerns about a paper in Nature is too high.
      You have to write a “matters arising” document containing new data.

      I mention a paper I am concerned about, but do not identify the paper, as an example.

      In this paper all the control mice, 17 in total, live longer than the median lifespan for the strain of mice used in the experiment. I believe this is very unlikely. I could get some of the very same mouse strain and look after them for 25 months, the age they all lived to in the Nature paper, and see what happened. Even if I did, it would still not be the same as seeing the primary data, which the authors will not show me. Surely it is up to the authors to prove their experiment, not for others to disprove it (reverse burden of proof)?

  7. @GB – good points re Blood, PNAS, J. Immunol. And the Schoen affair is also a good point. Thanks!

    As for why older papers don’t get pulled now – I don’t know about past fraud, but it’s in general true that older invalid papers don’t get retracted, often because people feel that there is no point retracting papers whose conclusions are no longer vital to the science in that field. Take Svante Paabo’s 1985 paper where he cloned nuclear DNA from an ancient mummy [Pääbo, S. Nature 314, 644-645 (1985)]. We realise now that it’s wrong, because of contamination. (See for the details). No-one is asking for it to be retracted, since today DNA extraction techniques have moved on apace and the old methods are not relevant.

    When it comes to this problem of ‘attention’ on older papers, I think a great example is the case of Homer Jacobson’s 2007 retraction of his 1955 Science paper, after he found that creationists had cited his work and he spotted errors in his old paper. It would never have been retracted if it hadn’t been seized on by creationists. The literature must be strewn with thousands of erroneous old papers that don’t need to be retracted or corrected because they no longer hold scientific or wider attention. Science as a culture tends to clean up the dirt from only the most recently published literature, if it bothers to do any revision at all. (And this system seems to work quite well.)

    Might there be a similar shrug of the shoulders when it comes to older papers that [unlike Paabo’s and Jacobson’s, I should make clear] might in fact contain fraudulent or plagiarised material – but are no longer the bedrock of their field; better to leave old literature ‘contaminated’ than scrub it up, when no-one is looking at it anyway?

    It comes back again to this dual function of retractions: cleaning up the literature versus pointing the finger at those who have committed misconduct. Cleaning up old unexamined literature is simply not a priority for scientists, institutions, or editors; and pointing the finger back at past suspected fraudsters is extremely burdensome and problematic – especially for editors with limited time to spare if institutional investigations do not back them up.

  8. This is a very interesting subject. I think Richard is right that the fact that old papers are not retracted is exactly because no-one is really looking at these in detail any more. The imperative to address flawed papers relates to the real life problems they cause (i.e. very serious issues relating to dangerous effects of misdiagnosis or false attribution of therapeutic potential and so on arising from flawed drug trials; issues of authorship, unfair self-promotion/career-enhancement, and time and money-wasting that arises from other labs attempts to build on flawed results). All of this stuff fades with time – the flawed data in a 15- or 20-year old paper has very little potential for real-life damage.

    On the other hand I can’t help feeling that there was simply less scientific fraud even 15-20 years ago, and perhaps this relates to the ease of fabrication and plagiarism in the electronic era. Even 10 years ago the issue of plagiarism hardly existed in my recollection; as a researcher but also a University teacher, plagiarism has become an ever-present bogeyman and this is surely because it is simply so easy to cut ‘n paste stuff. Likewise in the days when one took one’s protein or DNA gels to the departmental photographer for preparing publication-quality pictures, there wasn’t that much scope for manipulating images to the point of fabrication. And there wasn’t the awful imperative to churn out papers to meet promotion or research assessment exercises and so on that there is nowadays.

    Interestingly, while the ability to fabricate by plagiarism and image manipulation has become very easy, so has the ability to detect it. So I wonder whether the recent rise in retractions resulting from these types of fraud is going to be followed by a considerable slowdown as individuals of a less honest nature realize that the chances of getting caught are too high.

  9. There are still plenty of stubborn journal editors out there. If one is looking for an extreme example of such behavior by a journal in the face of a potentially faked paper, how about Wiley’s Angewandte Chemie and the 2006 La Clair synthesis of hexacyclinol (accomplished with the credited help of five anonymous technicians from Bionic Bros GmbH in Berlin, Germany, a company that apparently shares an address with a yoga studio and whose only other known product, apart from this paper, is a single video game), that I would think anyone familiar with the topic would agree has been totally discredited as a fabrication by now. Not only did the journal hide behind the “three knowledgeable referees approved it” excuse at the time, but La Clair has also just recently published another paper in, wait for it, Angewandte Chemie of course.

  10. Thanks for the interesting links Richard. I think you and chris have a good point about why older papers are less likely to be retracted. I haven’t seen an “old” paper being retracted for fraud for a long time, maybe people don’t care anymore, or it simply was less common.

    Very good point by chris about the detectability of image manipulation. In my opinion it should be relatively easy for journals to run image-analysis software over all the figures of a paper prior to publication in order to flag possible manipulations. The same goes for text-plagiarism software; every university uses it for students today, so it shouldn’t be a problem for a publisher to implement. It would probably save time, money and embarrassment in the long run. @Richard: Is Nature/Macmillan already doing anything in that direction, or planning to do so in future? If this were done I would expect fraud to decline, as chris suggested. But current image-manipulation software is just too good for some of the manipulations to be spotted by the naked eye; you again need software to detect it. Without the threat of being caught before publishing, I don’t think people will refrain from cheating.
