Weekend reads: California universities battle in court for research dollars; fake conferences; fake impact factors

This week at Retraction Watch featured a look at the nuances of replication efforts, aka “the replication paradox,” as well as yet another story of fake peer reviews, this time at Hindawi. Here’s what was happening elsewhere:

Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post. Click here to review our Comments Policy.

8 thoughts on “Weekend reads: California universities battle in court for research dollars; fake conferences; fake impact factors”

  1. Some might be interested in my views (feel free to comment here or at PubPeer):

    Teixeira da Silva, J.A. (2015) Negative results: negative perceptions limit their potential for increasing reproducibility. Journal of Negative Results in BioMedicine 14: 12.
    http://www.jnrbm.com/content/14/1/12
    http://www.jnrbm.com/content/pdf/s12952-015-0033-9.pdf
    DOI: 10.1186/s12952-015-0033-9

    Teixeira da Silva, J.A. (2015) COPE code of conduct clause 3.1. under the microscope: a prelude to unfair rejections. Current Science 109(1): 16-17.
    http://www.currentscience.ac.in/Volumes/109/01/0016.pdf

  2. I’m not exactly persuaded by the wrap-up of Siegfried’s article, at least in the intended sense:

    Consequently many research findings reported in the media turn out to be wrong — even though the journalists are faithfully reporting what scientists have published.

    As one prominent science journalist once articulated this situation to me, “The problem with science journalism is science.”

  3. Can someone with a background in medicine explain to me why you would want to give placebos to people with life-threatening conditions?

    1. These patients have already been treated with chemotherapy and surgery; that treatment is repeated when the cancer returns. The clinical trial tests whether niraparib can delay this recurrence. The normal approach otherwise is “no treatment,” so a placebo is the proper control.

    2. A placebo is used as a control condition when it is uncertain whether a new treatment will have any effect, good or bad, and there is no other treatment option for the patient. Only in that scenario is a placebo control ethical. Comparing patient outcomes on the treatment and placebo arms is how you determine whether the new treatment does good or does harm, and since no alternative exists for these patients, the placebo is the appropriate comparator.

      If there is good evidence that a current treatment is effective, a newer treatment is instead tested against that current treatment as the control condition.

      So when you see a placebo-controlled study of a medication in people with life-threatening conditions, it is typically because there is no good evidence yet that the medication does good or harm, and there is no other option for the patient. In that situation, a placebo-controlled trial (even in patients with life-threatening conditions) is the ethical way to sort out whether a new treatment will help, or just threaten the patient’s life even more.

  4. “Null hypothesis testing should be banished, estimating effect sizes should be emphasized.” From Tom Siegfried, “10 ways to save science from its statistical self.”

    This article about p-values is poorly researched and thus poorly written.

    When people crash cars or airplanes, we don’t ban cars and airplanes; we set out to train people more thoroughly so they don’t crash as often.

    When people misuse statistical methodologies, banning the methodologies is equally ridiculous. Tom Siegfried clearly does not understand the philosophy of statistical evaluation of data, and needs further training. Siegfried rolls out the tired trope that switching to Bayesian methodologies will save us. So, Tom, will you be an Objective Bayesian or a Subjective Bayesian? What kinds of priors will you use? (A quick numerical illustration of the p-value-versus-effect-size point appears at the end of this comment.)

    Ironically, Siegfried’s personal web page

    http://www.sciencenoise.org/

    proudly proclaims

    “I’ve been named the winner of the American Institute of Physics Science Communication Award for 2013. The prize was for an essay I wrote in Science News on the occasion of the discovery of the Higgs boson.”

    a discovery whose very methodology insisted on an extremely small p-value (the physicists’ five-sigma threshold, roughly p &lt; 3 × 10⁻⁷) before the team would proclaim any success.
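    For concreteness, here is the numerical illustration promised above: a minimal Python sketch (my own simulated data, not anything from Siegfried’s article) of why estimating effect sizes matters. With a large enough sample, even a trivially small effect yields a “significant” p-value, while the effect-size estimate and its confidence interval make the triviality plain. It assumes numpy and scipy are available.

    # Simulated data: a tiny true effect (0.05 SD) with a huge sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000
    control = rng.normal(0.00, 1.0, n)  # no effect
    treated = rng.normal(0.05, 1.0, n)  # trivially small true effect

    # Null-hypothesis significance test: "significant" despite triviality.
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"p-value: {p_value:.2e}")

    # The effect-size estimate with a 95% CI tells the more useful story.
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    print(f"mean difference: {diff:.3f} "
          f"(95% CI {diff - 1.96 * se:.3f} to {diff + 1.96 * se:.3f})")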
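    And the five-sigma point is easy to check; a one-liner, assuming scipy:

    # One-sided tail probability of a 5-sigma result under a normal null:
    from scipy.stats import norm
    print(norm.sf(5))  # ~2.87e-07: the particle physicists' discovery threshold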
