Weekend reads: Why following up on fraud matters; how many retractions in 2017?; misleading abstracts

The week at Retraction Watch featured the world energy solution that wasn’t, a story about Elsevier and fake peer reviews, and a question from a reader about citing retracted papers. Here’s what was happening elsewhere:

5 thoughts on “Weekend reads: Why following up on fraud matters; how many retractions in 2017?; misleading abstracts”

  1. Great articles to start out 2018.

    Ivan gets it and articulates it precisely here:
    http://www.cbc.ca/news/health/second-opinion-december-30-2017-1.4468173
    “The whole point is to cut down on waste in science,” Oransky said. “Science is too important and scientific dollars are too precious to be wasting time on work that’s been shown to be fraudulent or wrong.”

    Richard Grant too:
    https://www.theguardian.com/science/occams-corner/2018/jan/04/science-fraud-research-misconduct
    “And this brings me back to where we came in, to the effect scientific malpractice has on other people. The scientific fraudster who worked briefly for us had lied and cheated their way to a prestigious position, winning grants and awards on the back of made-up research. In doing so they took resources that could have been used by someone else, someone who wasn’t a liar and a cheat.”

    “Through winning research funds and positions – deceitfully – they denied an incredible opportunity to somebody who might have actually deserved it; they prevented some other young, honest and hard-working scientist from fulfilling their potential.”

    “There is a spectrum of wrongdoing in science, from selecting the one result out of many that makes sense to you [NOTE: this is about cherry-picking data or patients], up to deliberately faking figures in papers. And until there is a change in culture such that people are encouraged to say, ‘Guys, this isn’t right’ – as I might have been a decade ago – there will continue to be high-profile cases of seriously damaging scientific fraud.”

    While I’ve lost confidence in ORI, COPE and other watchdogs, there’s hope in investigative [aka just plain good] science journalists (like those at RW, Stephanie M. Lee of BuzzFeed News, Ed Yong of The Atlantic, etc.).

  2. ‘55.2% (n = 186) of respondents.’
    Why not give the percentage with all ten digits a pocket calculator provides? They would be only marginally more irrelevant than the decimal digit of the 55.2% cited.
    Sigh!

    1. The first digit after the decimal point is relevant here, and necessary for consistency: 337 individuals completed the survey, and n = 186 of them would have considered sharing their manuscripts on an open platform. Rounding (186/337)*100 to 55% would make the figure ambiguous, since the same rounded value is obtained with n = 185: (185/337)*100 = 54.9% = 55% after rounding. The same holds for n = 187: (187/337)*100 = 55.49% = 55% after rounding. Reporting one (and only one) digit after the decimal point ensures that n = 186 can be recovered exactly (the sketch after this thread checks this).
      In contrast, a second decimal, (186/337)*100 = 55.19%, would be meaningless, since n is an integer and one decimal already pins it down. Things would be different with very large integers (say n > 100,000), but that is not the case here. Not to mention cases where n is astronomically large, for example Avogadro’s number: a yield of 55.2% for a synthesis in chemistry is a fantasy.

    2. I think Bernès misses the point a little. There is no particular need to be able to recover the exact number of respondents from the percentage. A better guide is whether the level of precision printed has any meaning given the likely accuracy of the figure. We know that if we repeated the survey, even under identical conditions, the percentage would vary by a few percentage points [sd of a percentage = 100 * sqrt(p(1-p)/N)]. So the digit after the decimal point here is effectively just a random number of no information value. Although jxj might feel that extra digits do no harm, they may mislead the casual reader by suggesting greater accuracy than really exists: it is poor scientific communication. And, at least to me, not rounding appropriately suggests a lack of understanding by the authors of what their results can tell us (see the sketch below for both calculations).
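Both replies come down to quick arithmetic that is easy to check. Below is a minimal Python sketch (not part of the original thread): the total N = 337 is taken from the replies above, and the helper consistent_counts is purely illustrative.

```python
import math

N = 337  # total respondents, per the thread above

def consistent_counts(reported, decimals, total=N):
    """All integer counts n whose percentage of `total` rounds to `reported`."""
    return [n for n in range(total + 1)
            if round(100 * n / total, decimals) == reported]

print(consistent_counts(55, 0))    # [184, 185, 186, 187]: "55%" fits four counts
print(consistent_counts(55.2, 1))  # [186]: one decimal pins n down exactly

# Standard error of the reported percentage: 100 * sqrt(p*(1-p)/N)
p = 186 / N
se = 100 * math.sqrt(p * (1 - p) / N)
print(round(se, 1))  # ~2.7 percentage points
```

The two outputs capture the disagreement: the decimal in 55.2% does identify n = 186 uniquely (a plain 55% would be consistent with any n from 184 through 187), yet with a standard error of about 2.7 percentage points that decimal carries no statistical information about the underlying proportion.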
