The week at Retraction Watch featured revelations about what happens when researchers unwittingly use a tool without permission, and a look at why women peer review less often than men. Here’s what was happening elsewhere:
- “Medical academics like me can roll their eyes at naughty old pharma,” writes Ben Goldacre. “But there’s a problem. The evidence shows that medical academics also misbehave.” (The Times)
- “Are phase I trials ethical?” asks In The Pipeline blogger Derek Lowe.
- “It is going to be really hard to shake that culture in science: that a mistake is still a sin,” says Pamela Ronald, who has had to retract papers. (Holly Else, Times Higher Education)
- In the wake of losing Jeffrey Beall’s famed journal blacklist, we need more groups undertaking similar efforts, our co-founders argue in their latest for STAT. And Neuroskeptic explains why he’ll miss Beall’s list. (Discover)
- “The book is no longer available for sale in our shops pending a review by our historians for factual accuracy.” Falsehoods lead the Smithsonian to pull a book about President Trump. (Ian Shapira, Washington Post)
- Autism advocates and scientists are objecting to the fact that Andrew Wakefield, whose study claiming a link between vaccines and autism was retracted, will address a European Parliament-backed event next month. And a film Wakefield directed will not be shown in London, following a similar outcry. (Tom Whipple, The Times)
- “I didn’t sign a confidentiality agreement, and I was not aware that I had implicitly agreed to the journal’s policies.” A reviewer clashes with Elsevier over sharing the contents of his review. (Quirin Schiermeier, Nature)
- In advance of his February 3rd lecture in Montreal on post-publication peer review, our co-founder Ivan Oransky discusses the current state of research integrity. (McGill Reporter)
- “[I]ncreasingly scientists are realizing that if a replication fails to reproduce the original results, it doesn’t mean the original was wrong.” Vox’s Julia Belluz’s take on the recently released Reproducibility Project studies.
- “In the final analysis, it turns out that ‘excellence’ is not excellent.” A look at evaluations of scientific quality. (Palgrave Communications)
- A research infrastructure manager says data-sharing terms currently leave too much open to interpretation by lawyers and across countries, and need to be clearer. (Nature)
- A billionaire who made a fortune at Enron has stepped in to support the battle against bad science. (Sam Apple, Wired) (Disclosure: The Arnold Foundation funds The Center For Scientific Integrity, the parent organization of Retraction Watch)
- “Young researchers may feel unspoken pressure to ensure their data fit a hypothesis,” writes Miriam Shuchman. (CMAJ)
- As of January 1st, the University of Groningen requires all of its researchers to make full-text versions of their articles publicly available. (James C Coyne, Quick Thoughts blog)
- “Anyone who has ever been ‘scored’ will worry about the accuracy of the scores given; anyone who has been involved in decision-making will have their own views about the process, its validity and whether their own part left them satisfied.” In academia, says Athene Donald, there are plenty of biases in committees people don’t consider. (Occam’s Typewriter blog)
- Modeling reveals patterns such as which researchers are more likely to accept a review. (Jen Laloup, PLOS Blog)
- The University of Kentucky sued its student newspaper to avoid having to release the results of an investigation into sexual misconduct allegations, and won. (Tyler Kingkade, BuzzFeed)
- A medical journal no longer recommends a widely-used morning sickness pill, reports The Toronto Star, following a re-analysis that was part of a look at invisible and abandoned trials.
- India’s University Grants Commission includes 35 predatory journals on its list of “preferred” journals to publish in. (Thomas Manuel, The Wire)
- “I think that watching a season’s worth of episodes of Game of Thrones is more valuable than writing a paper such as ‘Eating Heavily: Men Eat More in the Company of Women.’ Views on this may differ, however.” Andrew Gelman looks at four papers that emerged from one set of questionable data. His post follows a PeerJ preprint in which researchers take a more rigorous look at the same papers.
- The T-Index: “Our proposal not only resolves the long standing concern for the fair distribution of each author’s credit depending on his/her contribution, but it will also, hopefully, discourage addition of non-contributing authors to a paper.” (Journal of Informetrics, sub req’d)
- “Open Access is in the longer run almost inevitable, because it is the optimal solution, and the best interest of all stakeholders in the process. But due to the peculiar oligopolistic nature of the publishing industry, the progress has so far been painstakingly slow.” This passage, ironically, appears in the conclusion of a paywalled article.
- “Scientists at the [U.S.] Environmental Protection Agency who want to publish or present their scientific findings likely will need to have their work reviewed on a ‘case by case basis’ before it can be disseminated,” even though “any review would directly contradict the agency’s current scientific integrity policy.” (Nathan Rott, NPR)
- How can you be sure the health news you see isn’t fake? The Guardian’s Sarah Boseley and our co-founder Ivan Oransky offer their tips. (BBC Health Check)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our new daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
Third to last link (“paywalled article”) is broken.
Fixed, thanks.
Dear Retraction Watch,
I’m not sure why you included Ben Goldacre’s “Patients are dying from lack of good medical research” at the top of your weekend reads list. The article didn’t provide any insight into either pharma-company or academic-research reasons for shutting down trials or bad data, or his euphemistic “it’s a mess.” The only light shed on the realities of the dilemma was the sentence fragment “regulators over-interpret EU red tape on research,” which was an over-simplistic partial thought. There was no actionable message or request. It was basically a gripe; I can read those on social media. There are PLENTY of articles and positions out there on the reasons that trials are abandoned (poor efficacy, unclear outcomes, PI trial violations, sloppy subject records, sanctioning, a new and better novel candidate, weak P-values, reviewer dissatisfaction, on and on). Please don’t waste my time with shallow puff pieces. I expect more from you all.