Weekend reads: Fallout from misconduct at Duke; does journal prestige matter?; the data on fake peer review

Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, would you consider a tax-deductible donation of $25, or a recurring donation of an amount of your choosing, to support it? 

The week at Retraction Watch featured the retraction of a paper on a “gut makeover,” a retraction following a mass resignation from an editorial board, and the resignation of a management researcher who admitted to misconduct. Here’s what was happening elsewhere:

Like Retraction Watch? You can make a tax-deductible contribution to support our growth, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at team@retractionwatch.com.

8 thoughts on “Weekend reads: Fallout from misconduct at Duke; does journal prestige matter?; the data on fake peer review”

  1. “In the end, science has to stand the test of time, not the test of what journal you happened to get it into during your lifetime as a scientist.” This is an extremely well-articulated piece and I would encourage everyone to read it. I wish a few high-profile labs with nothing to lose decided to publish their findings in low-IF society-type journals for a while. Others would likely follow suit, ultimately forcing everyone to evaluate published findings on their own merits.

    1. High profile labs have everything to lose: all that grant funding if they don’t make splashy findings. Nobody wants to be the Ivy League professor who can’t get published in a big name journal and get pushed out. Even if all the big labs coordinated and agreed to only publish in the Obscure Journal of Biology (OJB), it wouldn’t mean that papers got evaluated on their own merits. OJB would become the new Cell and be used as a proxy for quality.

      1. Having been on study sections, my sense is that continued grant support depends on productivity, measured in the number, quality, and impact of publications, much less on where the science was published. So a few publications in OJB would be unlikely to make a dent in funding.
        I don’t think OJB would turn into the new Cell. OJB does not employ editorial rejections (weeding out smaller labs in favor of big names, independent of the quality of the science), has no space constraints, and does not ask for an endless number of additional experiments. Anecdotally, I know quite a few well-known scientists who have made the conscious decision to publish only in OJB, and their papers get cited just as often as if they had been published in Cell.
        Ultimately I don’t have all the answers, but the current system is not sustainable. It puts a small number of editors in charge of making critical decisions about what type of science is worthy of attention, and what is “more suitable for a specialized journal”. This has created an absurd arms race, with exploding man hours and costs per publication, and increased incentive for misconduct.

        1. I would like to point out one more difference between Cell and OJB: archives.
          I wanted to find a paper from 1979, published in the Journal of Cyclic Nucleotide Research. After much head-scratching and web-digging I managed to establish that a successor of this journal was merged with another one, and that journal is online (even the issues from before the merger), but the paper I want apparently is not.
          A paper from Cell from 1979, by contrast, is a few clicks away.
          As much as I agree with the need to end this arms race and prestige for the sake of prestige, I do see some advantages to famous journals that stay around for a long time.

    2. In my opinion, the strongest pressure to publish in the glossies comes from search committees and fellowship award committees.
      So while the PI of a high-profile lab may have nothing to lose, the trainees have a lot to lose if their papers are published in low IF journals.

  2. As a basic scientist I would be happy to lower the significance threshold for p to < 0.005.

    I would, however, need a corresponding increase in my funding, lab space, animal ethics approvals, and workforce. I would also need some relief from the pressure to publish positive results and a reduction in the frequency of threatened unemployment from once every 2–4 years to something like once every 10–15 years.

    1. As a basic scientist my take on this is that we should move away from p values and instead report effect sizes and confidence intervals, particularly for discovery-based research.

      1. I agree with you – effect sizes and confidence intervals are more useful than a p value alone. I try to include these types of measures and provide my raw data so others can analyze it however they wish.

        But resources are still an issue. My power analysis tells me that I need more data points. My confidence intervals are always wider than I would like. If we were to change the threshold to p < 0.005 (or a 99.5% CI), I would still need a corresponding increase in statistical power, and that won't come cheaply.
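The cost the commenters are pointing at can be made concrete. A minimal sketch, using the standard normal-approximation formula for the per-group sample size of a two-sided, two-sample t-test (function name and the choice of a "medium" effect size, Cohen's d = 0.5, are illustrative, not from the thread):

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha, power=0.8):
    """Approximate per-group sample size for a two-sided two-sample t-test,
    via the normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = norm.ppf(power)          # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Medium effect (d = 0.5), 80% power:
print(n_per_group(0.5, alpha=0.05))   # conventional threshold -> 63 per group
print(n_per_group(0.5, alpha=0.005))  # proposed stricter threshold -> 107 per group
```

Under these assumptions, tightening alpha from 0.05 to 0.005 raises the required sample size by roughly 70%, which is exactly the "corresponding increase in funding, lab space, animal ethics approvals and workforce" the comment describes.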

Leave a Reply

Your email address will not be published. Required fields are marked *