Weekend reads: PhD sues alma mater for alleged retaliation; an unexpected rejection; saying no to peer review requests

Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance.

The week at Retraction Watch featured a former postdoc who faked nearly 60 experiments; an apology and retraction from a cancer researcher; and three retractions from UCLA. Here’s what was happening elsewhere:

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].

7 thoughts on “Weekend reads: PhD sues alma mater for alleged retaliation; an unexpected rejection; saying no to peer review requests”

  1. If you don’t review papers in the summer, then you shouldn’t submit them then either. It’s a give-and-take honor system.

    1. Given the speed with which reviews and editorial processing are often performed, perhaps a more reasonable suggestion would be not to submit them in the spring.

    1. Those are some strange errors. They are all similar to this:
      “F(1, 23) = 5.29, p = .032 (recalculated p-value: 0.03086)”. What’s the advantage of having a 0.032 p-value instead of 0.031?
      All of the errors are in this vein: off by ~0.01, and never crossing an important threshold (e.g. a 0.06 p-value is never reported as a 0.04). On the one hand, there doesn’t seem to be a good reason for these differences (it can’t be a measurement error or something, and they’re too numerous to be typos). But on the other hand, there’s no apparent advantage to intentionally misreporting multiple p-values in this way. My guess is that the statistical software used to prepare the papers had a small bug, or perhaps used slightly different settings than statcheck. (A sketch of how such a recalculation works appears after the comments.)

        1. Thanks. I was aware of the effort, but not the origins. Unfortunately, I don’t have the expertise to critique the methods or results.
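
The comparison quoted in the comment above (“F(1, 23) = 5.29, p = .032, recalculated p-value: 0.03086”) can be reproduced in a few lines. The sketch below is a minimal illustration, assuming Python with SciPy installed; it recomputes the p-value from the reported F statistic and degrees of freedom, the same kind of check statcheck performs, though it is not statcheck’s own implementation (statcheck itself is an R package).

```python
# Minimal illustration (not statcheck itself): recompute the p-value that
# corresponds to the reported F(1, 23) = 5.29 from the comment above.
from scipy import stats

f_value = 5.29          # reported F statistic
df_num, df_den = 1, 23  # degrees of freedom from "F(1, 23)"

# The p-value is the upper tail of the F distribution at the observed statistic.
p = stats.f.sf(f_value, df_num, df_den)

print(f"recalculated p = {p:.5f}")  # ~0.03086, vs. the reported p = .032
```

Run as written, this prints a value of about 0.031, which is why a reported .032 gets flagged as a minor inconsistency rather than a decision error: the conclusion at the .05 threshold is unchanged.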
