Weekend reads: A U.S. gov’t memo on publishing leaves scientists in disbelief; money wasted on flawed research; an eye doctor whose research subjects were at risk

Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance.

The week at Retraction Watch featured the retraction of a paper on red wine, tea, and cancer; a look at why researchers make up co-authors’ names; and how PLOS ONE has become a “major retraction engine.” Here’s what was happening elsewhere:

Like Retraction Watch? You can make a tax-deductible contribution to support our growth, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at team@retractionwatch.com.

5 thoughts on “Weekend reads: A U.S. gov’t memo on publishing leaves scientists in disbelief; money wasted on flawed research; an eye doctor whose research subjects were at risk”

  1. In my view, the only real problem with reproducibility is the lack of appreciation that low statistical power increases the risk that a positive result is false, not just the risk of false negatives. But I also think that some people who do understand that and are shouting about irreproducibility may have standards that are too high. Many don’t want results with less than 90% or 95% post-test probability to be published, but I think the cost of missing a positive effect (which could be patient lives or the advancement of society in the long run) is greater in many cases than the cost of reporting one that doesn’t pan out when someone else tries to reproduce it. So in my view, probabilities more like 70-80% would be acceptable in many cases. But then we also have to keep in mind that the power and p value used to calculate those probabilities depend on the specific statistical test, and picking the right test gets complicated.
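[Editor’s note: the commenter’s point about power and post-test probability can be made concrete with the standard positive-predictive-value calculation. The sketch below is illustrative only; the prior probability of 0.1 (i.e., one in ten tested hypotheses is true) and the significance level of 0.05 are assumptions, not figures from the comment.]

```python
def post_test_probability(prior, power, alpha=0.05):
    """Probability that a statistically significant result reflects a true effect.

    prior: assumed pre-study probability that the hypothesis is true
    power: probability of detecting a true effect (1 - beta)
    alpha: significance threshold (false positive rate under the null)
    """
    true_positives = power * prior          # true effects that reach significance
    false_positives = alpha * (1 - prior)   # null effects that reach significance anyway
    return true_positives / (true_positives + false_positives)

# With an illustrative 10% prior, lower power means a "significant"
# finding is much more likely to be a false positive.
for power in (0.2, 0.5, 0.8):
    p = post_test_probability(prior=0.1, power=power)
    print(f"power={power:.1f} -> post-test probability {p:.2f}")
```

At 80% power the post-test probability here is about 0.64; at 20% power it drops to roughly 0.31, which is the asymmetry the commenter is pointing to.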

  2. While specialists may indeed be aware that low statistical power increases the risk of false positive results, what about the journalists who cover these sensational discoveries?

    1. Responsible journalists should note the probabilities reported by responsible scientists (and not report a story without some note about probabilities). So often what one hears in the press is very differently pitched from the original paper.
      Journalists should also never, ever simply take the word of a university press officer, whose job it is to hype results.

    2. Better media reporting seems to be an unattainable dream, but maybe authors and research ethics groups could ask institutions to quit putting out sensationalized press releases. I understand the desire to make the public see an institution’s value, but making the public dumber about science in pursuit of that end is counterproductive in the long run.

      Ridicule might be one way to push back on the media relations people who are responsible for sensationalizing studies. They would likely tone down their phrasing to avoid being added to a Wall of Shame or some such public list.

  3. Touching story about Salk… I still can’t believe this happened.
    Regarding the dislike button, yes, it is high time we had one. I don’t use the “like” button on any social media apps because they don’t have a “dislike” button… Simple.
