This week at Retraction Watch featured a look at the nuances of replication efforts, aka “the replication paradox,” as well as yet another story of fake peer reviews, this time at Hindawi. Here’s what was happening elsewhere:
- The University of California, San Diego is suing the University of Southern California and one of its former scientists, saying they conspired to take research funding away from UCSD to create a new Alzheimer’s research center. Here’s USC’s response.
- Fake conferences: A report from de Volkskrant.
- Here’s why officials searched a controversial autism doctor’s offices. Emily Willingham reports.
- Mehrdad Jalalian explains how he identified fake impact factors.
- A predatory journal listed a murdered doctor as its editor-in-chief, Jeffrey Beall reports.
- Want to place bets on which studies will pass replications? Now you can.
- Erasmus University in Rotterdam is rescinding a doctorate in psychology because of plagiarism, Debora Weber-Wulff reports via NRC.nl.
- Social psychologist Jens Förster, whose work has been subject to scrutiny, has been replaced as a speaker on ethics and scandal.
- In a survey, “graduate students from the United States and international graduate students studying in the US are prone to different biases.”
- “Null hypothesis testing should be banished, estimating effect sizes should be emphasized.” From Tom Siegfried, “10 ways to save science from its statistical self.”
- Academic nursery rhymes, courtesy of Academia Obscura. Also: Academics with beer.
- “Thousands of animals have been exposed to deadly pathogens, chemicals, and radiation so that scientists can develop medicines to protect Americans from weapons of mass destruction,” Peter Aldhous reports at BuzzFeed. “Was all this suffering really necessary?”
- The Bogdanov brothers, whose work in theoretical physics has been the subject of a long dispute, have lost a lawsuit they filed against the CNRS over the release of a report on their theses.
- Did “statistical deception [create] the appearance that statins are safe and effective in primary and secondary prevention of cardiovascular disease?” A new paper argues that it did.
- Yet another journal has been fooled into publishing a fake paper created using SCIgen, Jeffrey Beall reports.
- An argument that “metrics should support, not supplant, expert judgement,” from James Wilsdon.
- The University of Toronto defended a course whose viewing material included an hour-long interview with Andrew Wakefield, author of the retracted paper in The Lancet claiming to show a link between autism and vaccines, but did not say whether it would be offered again.
- In an open letter to Elsevier, Elena Dragomir says of an issue of Procedia Economics and Finance that “some of the papers published in this issue do not seem to meet the minimum requirements of a decent research paper.”
- “Broader impact statements: Are researchers thinking broadly enough?” asks Kirk Engelhardt.
- A professor in Italy faces charges that he pocketed some of his students’ fees.
- “…I will from now on only review for journals where the review is open and published or where I am free to publish the review,” writes Dave Fernig.
- “What do early-career researchers think about open access?” A survey.
- Shocking: People lie about their weight online, too!
- A clinical trial participant’s Change.org petition convinced a drug company to unblind her.
- How to name diseases: A modern guide, from Alastair Gee at The New Yorker.
- In autism research, “overhyped studies have left families clamoring for experimental treatments,” writes Rachel Zamzow.
- “Today’s retraction boom”: Ivan talks to Theral Timpson of Mendelspod.
- In an earlier version of a new PLOS ONE paper, Gilles-Éric Séralini had not disclosed a significant conflict of interest, and had included a section of the abstract that the editors later cut, the Genetic Literacy Project notes.
- “Has physics cried wolf too often, or do false alarms help build understanding?” asks Jon Butterworth.
- Want funding for research on transparency? Talk to the Berkeley Initiative for Transparency in the Social Sciences.
- “A reanalysis calls into question a year-old claim that humans can decipher at least 1 trillion different scents,” Kerry Grens reports at The Scientist.
- The retraction of a story about an incident in the operating room raises important questions about trust and anonymous sources, Ivan writes in MedPage Today.
- Here’s why citation distribution matters, and what PeerJ is doing about it.
- Could Science editor-in-chief Marcia McNutt’s climate change editorial affect how the journal reviews papers? Doubtful, says Andrew Revkin.
- In other news, McNutt is slated to become the next president of the National Academy of Sciences, so Science will need a new editor-in-chief next year.
- Here are the “six major challenges that require more attention in the ethics education of students and scientists and in the research on ethical conduct in science.”
- A neurosurgeon who “left the University of California, Davis, in 2013 after officials concluded [his] actions violated the school’s code of conduct” has been hired by Marshall University, the Associated Press reports.
- “When it comes to big data, genomics may soon take the lead in requiring the most storage space,” The Scientist reports.
- In open access journals, “the percentage of grant-funded articles increases as the associated [article processing charges] increase,” according to a new study.
Like Retraction Watch? Consider supporting our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post. Click here to review our Comments Policy.
The link to Ivan’s MedPage Today piece isn’t right. I think it should go here: http://www.medpagetoday.com/Blogs/IvanToday/52511
Indeed, fixed. Thanks!
Some might be interested in my views (feel free to comment here or at PubPeer):
Teixeira da Silva, J.A. (2015) Negative results: negative perceptions limit their potential for increasing reproducibility. Journal of Negative Results in BioMedicine 14: 12.
http://www.jnrbm.com/content/14/1/12
http://www.jnrbm.com/content/pdf/s12952-015-0033-9.pdf
DOI: 10.1186/s12952-015-0033-9
Teixeira da Silva, J.A. (2015) COPE code of conduct clause 3.1. under the microscope: a prelude to unfair rejections. Current Science 109(1): 16-17.
http://www.currentscience.ac.in/Volumes/109/01/0016.pdf
I’m not exactly persuaded by the wrap-up of Siegfried’s article, at least in the intended sense.
Can someone with a background in medicine explain to me why you would want to give placebos to people with life-threatening conditions?
These patients have already been treated with chemotherapy and surgery, and that treatment is repeated when the cancer returns. The clinical trial tests whether niraparib can delay this recurrence. The normal approach is “no treatment”, so a placebo is the proper control.
You use a placebo as a control condition when you are uncertain whether a new treatment will have any effect, good or bad, and there is no other treatment option for the patient; only in that scenario is a placebo control ethical. Comparing patient outcomes between the treatment and placebo arms is how you figure out whether the new treatment does good or does harm.
If you have evidence that some current treatment is effective, and you want to test a newer treatment, you can use the current treatment as a control condition.
So when you see a study of a placebo versus some medication in people with life-threatening conditions, it is typically because there is no good evidence that the medication does good or harm, and there is no other option for the patient. In that situation, a placebo-controlled trial is the ethical way to sort out whether a new treatment will help, or just threaten the patient’s life even more.
“Null hypothesis testing should be banished, estimating effect sizes should be emphasized.” From Tom Siegfried, “10 ways to save science from its statistical self.”
This article about p-values is poorly researched and thus poorly written.
When people crash cars or airplanes, we don’t ban cars and airplanes; we set out to train people more thoroughly so they don’t crash as often.
When people misuse statistical methodologies, banning the methodologies is equally ridiculous. Tom Siegfried clearly does not understand the philosophy of statistical evaluation of data, and needs further training. Siegfried rolls out the tired trope that switching to Bayesian methodologies will save us. So, Tom, will you be an Objective Bayesian, or a Subjective Bayesian? What kinds of priors will you use?
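For readers wondering what the “estimating effect sizes” alternative in the quote above looks like in practice, here is a minimal sketch contrasting a bare p-value with an effect-size estimate and its confidence interval; the groups, sample sizes, and numbers are invented purely for illustration:

```python
# Minimal sketch: report an effect size with an interval, not just a p-value.
# All data here are simulated; nothing is drawn from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=50)  # hypothetical control group
treated = rng.normal(loc=106.0, scale=15.0, size=50)  # hypothetical treated group

# Null hypothesis test: answers only "is there a detectable difference?"
t_stat, p_value = stats.ttest_ind(treated, control)

# Effect size (Cohen's d) with an approximate 95% confidence interval:
# answers "how large is the difference, plausibly?"
n1, n2 = len(treated), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treated.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treated.mean() - control.mean()) / pooled_sd
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # standard approximation
ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d

print(f"p = {p_value:.3f}")
print(f"Cohen's d = {d:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

The p-value alone says whether a difference was detected; the interval around Cohen’s d says how big the difference plausibly is, which is what the quoted recommendation wants emphasized.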
Ironically, Siegfried’s personal web page, http://www.sciencenoise.org/, proudly proclaims:
“I’ve been named the winner of the American Institute of Physics Science Communication Award for 2013. The prize was for an essay I wrote in Science News on the occasion of the discovery of the Higgs boson.”
a discovery whose very methodology insisted on a really, really small p-value (a five-sigma result) before the team would proclaim any success.
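For context on how small that threshold actually is: the five-sigma discovery convention corresponds to a one-sided p-value of roughly 3 × 10⁻⁷, as a quick computation shows (a minimal sketch using scipy):

```python
# The particle-physics convention: claim a discovery only when the signal is
# at least five standard deviations above the background-only expectation.
from scipy.stats import norm

p_five_sigma = norm.sf(5)  # survival function: P(Z > 5), the one-sided p-value
print(f"p-value at 5 sigma ≈ {p_five_sigma:.2e}")  # ≈ 2.87e-07
```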