Weekend reads: A publisher sends the wrong message on data sharing; jail for scientific fraud; pigs fly

The week at Retraction Watch featured three new ways companies are trying to scam authors, and a look at why one journal is publishing a running tally of its retractions. Here’s what was happening elsewhere:

Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.

14 thoughts on “Weekend reads: A publisher sends the wrong message on data sharing; jail for scientific fraud; pigs fly”

  1. Lock ’em up! They take taxpayer money and then BS the public with bum reports–all to line their credentials and pockets!

  2. “Research funds are wasted on reformatting manuscripts, says Julian Budd. (Nature)”

    Reformatting is frustrating: why don’t journals adopt a universal format and style?

    1. The most pathetic part is that even LaTeX manuscripts need rewriting because of differing sectioning and figure/reference formats.

    2. This is a discipline-specific issue. Most economics journals don’t require you to follow their style on first submission. Even if you make mistakes in following their style on final submission, I usually find their copy editors take it on themselves to fix it up.
      Recently I have submitted to a couple of non-economics journals, and I really don’t understand why they want me to waste time redoing a paper’s style for a first submission that they may not accept anyway.

    3. My belief is that journals do this to set up artificial barriers to submission, to prevent machine-gun resubmission of weak papers to every single journal in the field.

  3. The March 7 update to Dr. Brian Wansink’s comment (http://foodpsychology.cornell.edu/note-brian-wansink-research) contains this statement:

    “… a master’s thesis was intentionally expanded upon through a second study which offered more data that affirmed its findings with the same language, more participants and the same results.”

    Assuming that this is a response to my blog post https://steamtraen.blogspot.fr/2017/03/some-instances-of-apparent-duplicate.html (section E), this appears to refer to these two articles:

    Wansink & Seed, 2001: http://link.springer.com/article/10.1057/palgrave.bm.2540021
    Wansink, 2003: https://www.cambridge.org/core/journals/journal-of-advertising-research/article/developing-a-cost-effective-brand-loyalty-program/2309B9BDBF47C6CA9ED0A6A0B1D06097

    Dr. Wansink’s observation that these two studies show “the same results” is something of an understatement. There are 45 measured variables in the second study of each of these articles (2001, Table 5; 2003, Table 2). Of these, 17 of the 18 numbers that were reported to one decimal place are identical, and 22 of the 27 that were reported to two decimal places are identical. I suspect that this degree of (re)measurement accuracy, and the concurrent apparent absence of sampling error, is probably unparalleled in the history of scientific research.
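
    Purely to illustrate the kind of comparison described above, here is a minimal sketch that counts how many reported values in two studies are digit-for-digit identical. The numbers are invented placeholders, not values from the 2001 or 2003 articles.

      # Count exact matches between two equal-length columns of reported statistics.
      # All values below are invented placeholders, used only for illustration.

      def count_exact_matches(study_one, study_two):
          """Return (identical, total) for two equal-length lists of reported values."""
          assert len(study_one) == len(study_two)
          identical = sum(1 for a, b in zip(study_one, study_two) if a == b)
          return identical, len(study_one)

      study_one = [3.4, 5.1, 2.8, 4.0]  # hypothetical one-decimal-place values
      study_two = [3.4, 5.1, 2.9, 4.0]  # hypothetical one-decimal-place values

      hits, total = count_exact_matches(study_one, study_two)
      print(f"{hits} of {total} reported values are identical")  # prints: 3 of 4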

  4. There has recently been a shift in focus from the problem of research misconduct to a “crisis in irreproducibility”. The shift in emphasis appears to have followed the declaration by Collins and Tabak in Nature that such a crisis exists and that, “with rare exceptions,” it is not caused by research misconduct. (Nature 505, 612-613, 2014)

    There may indeed be a crisis, but they refer to only two reports from pharmaceutical companies whose laboratories failed to replicate studies performed by others. They attribute the irreproducibility to flawed research practices, which may well be true. However, their list of such flawed practices inexplicably includes the “application of a secret sauce” to the data. (Merely a flawed practice?)

    Of course, upgraded training in research practices will be beneficial, but it should not, and need not, require that diminished attention be paid to the deleterious effects of research misconduct, which will continue unabated until a comprehensive plan to address it is initiated.

    Donald S. Kornfeld, MD
    Columbia University

    1. I don’t believe that Collins and Tabak in any way caused such a shift; they just became aware of it.

      There is further evidence for a serious problem of irreproducibility, notably the work of Ioannidis, suggesting that low power and publication (and other) biases have brought us to a situation where half of what is published is not expected to replicate, on those statistical grounds alone (a small numerical sketch follows at the end of this comment). And of course publications can have a host of weaknesses other than just the statistics.

      Why Most Published Research Findings Are False
      John P. A. Ioannidis
      PLoS Medicine
      http://dx.doi.org/10.1371/journal.pmed.0020124

      There are more direct replication studies, with psychology leading the way:
      https://osf.io/ezcuj/wiki/home/
      (39% replication rate).

      I think many working scientists have large numbers of war stories about publications whose results could not be reproduced.

      In conclusion, misconduct is certainly a problem, but there is a wider problem of irreproducibility. Luckily, to a large extent they have the same solution: full public data access and greater scrutiny will discourage both cheating and sloppy practice. Prevention is better than cure.
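
      As a minimal numerical sketch of the Ioannidis-style argument above (the alpha, power, and prior-odds values are illustrative assumptions, not figures from the paper):

        # Positive predictive value (PPV) in the spirit of Ioannidis (2005), with no bias term:
        #   PPV = power * R / (power * R + alpha)
        # where alpha is the false-positive rate, power is 1 - beta, and R is the
        # pre-study odds that a tested relationship is true.

        def ppv(alpha, power, prior_odds):
            """Expected share of statistically significant findings that are true."""
            return (power * prior_odds) / (power * prior_odds + alpha)

        # Illustrative assumptions: alpha = 0.05, one in five tested hypotheses true (R = 0.25).
        print(ppv(alpha=0.05, power=0.80, prior_odds=0.25))  # 0.8 (well-powered field)
        print(ppv(alpha=0.05, power=0.20, prior_odds=0.25))  # 0.5 (underpowered field)

      In the underpowered scenario, about half of nominally significant findings would not be expected to replicate on statistical grounds alone, which is the situation described above.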

  5. Maybe I won’t be popular with this, but I do not support early data sharing. The way the NIH threw the blood-pressure trial to the open-data contest like a piece of meat is absolutely disgusting. With all the open data mining tools out there, it will be strip-mined before the actual collectors can start writing their papers.

  6. One of the comments here led me to think the following (surely not an original idea, and maybe this has already been done): since the list of predatory publishers is now defunct, and since it’s probably very hard to keep up with the large number of predatory publishers, we need a group to do the opposite: create and curate a list of reputable journals.

    Any journal not listed can be assumed to be either “not yet vetted” or predatory. The curators of the list can be asked to review publishers who want to be on the list. Has something like this been done?

  7. Yes, DOAJ does precisely that, but based on measurable and verifiable criteria, not a curator’s subjective sense of a journal’s reputation.
