Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, we could really use your help. Would you consider a tax-deductible donation to support Weekend Reads, and our daily work? Thanks in advance.
The week at Retraction Watch featured the University of Alabama’s request for 20 retractions of papers by one of its former researchers; a sturgeon researcher who’s up to 13 retractions for fake peer review; and what happens when researchers from several high-profile institutions can’t reproduce findings. Here’s what was happening elsewhere:
- “Science should be ‘show me’, not ‘trust me’; it should be ‘help me if you can’, not ‘catch me if you can’.” (Philip Stark, Nature)
- “39.2% revealed having been pressured by a principal investigator or collaborator to produce “positive” data. 62.8% admitted that the pressure to publish influences the way they report data.” (Clinical Cancer Research)
- “Relatively few journals asked reviewers to grade specific components of a manuscript. Higher impact factor journal manuscript grading forms more frequently addressed statistical analysis, ethical considerations, and conflict of interest.” (Research Integrity and Peer Review)
- “A leading wolf researcher has agreed to leave Washington State University at the end of the spring term in return for $300,000 to settle a suit he brought over infringement of his academic freedom.” (Linda Mapes, The Seattle Times)
- “The dirty secret of philosophy is that we have insanely low acceptance rates—often well under 10%—for papers.” (Justin Weinberg, Daily Nous)
- How to review a manuscript: 10 tips from editors. (Chris Palmer, APA Monitor)
- “Apparently, there are some editors of academic journals who will readily send manuscripts out to ‘non-preferred reviewers’ — the specific people that authors specify who they don’t want to receive the paper for review. I think this is all kinds of messed up.” (Small Pond Science)
- Anne Scheel et al — including Nick Brown — find problems in a study of fetal visual perception. The authors of the original study respond. (Current Biology)
- What did a “reproducibility crisis” committee find when it looked at climate science? (Francie Diep, Pacific Standard)
- “Slow Academia is for the privileged – but then, isn’t all academia?” (Alison Edwards, The Thesis Whisperer)
- “Are funder Open Access platforms a good idea?”
- When reviews cite papers that are later retracted, that’s a problem, says Richard Gray. (Danielle Chilvers, The Wiley Network) We interviewed Gray about the issue in January.
- The U.S. EPA has extended the comment period on so-called “secret science” rules. (Federal Register)
- A former EMBO Young Investigator Award winner is up to five retractions. (Jim Daley, The Scientist)
- Brian Wansink — whose work has come under intense scrutiny, with some of it retracted — says he’s going to start publishing again soon.
- A pioneer in heart surgery has “been accused of putting his quest to make history ahead of the needs of some patients.” (Charles Ornstein, Mike Hixenbaugh, ProPublica/Houston Chronicle)
- “There is little evidence to suggest peer reviewer training programmes improve the quality of reviews,” writes Shaun Khoo. (LSE Impact Blog)
- “[I]n a world full of stories and claims about the next big transformation in medicine, did the claims made by Theranos stand out as that outrageous, unbelievable, or in need of extra scrutiny?” (Michael Joyner, STAT)
- “The U.S. Navy has uncovered further data falsification at the Hunters Point Shipyard,” reports NBC Bay Area.
- “Professor Moosa said the fact the Retraction Watch website existed was evidence of the growing problem of academic misconduct.” (Erica Servini, The Australian)
- “Given that there are more than 28 000 peer-reviewed journals, one might ask whether the world needs another.” A JAMA open access journal is born.
- Richard Lehman, whose weekly journal reviews in the BMJ are much loved, will be retiring this summer. Know anyone who might want the job?
- “Peer review varies in quality and thoroughness,” says Nikolai Slavov. “Making it publicly available could improve it.” (The Scientist) Irene Hames agrees.
- “We know what it takes for institutions and scholars to produce high-quality, high-integrity research, and yet we do not always act upon that knowledge.” (C.K. Gunsalus, Bill of Health)
- “[T]he rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome.” (Ioana Alina Cristea, John P. A. Ioannidis, PLOS ONE)
- “However, our study did not get scooped, and our paper was published in our first-choice journal without issue.” How Brian O’Roak learned to stop worrying and love preprints. (Spectrum)
- A different kind of retraction, of a political endorsement, from The Dallas Morning News.
- Journals are losing citations to preprint servers, says Phil Davis, who’s crunched the numbers. (The Scholarly Kitchen)
- Geologists admit to falsifying data, but say it’s OK because the data are taken from a stratum of the same age. (The Japan Times)
- “Our client found that when the object of her complaint was a star faculty member, those policies were not enforced.” More sexual harassment allegations at Harvard. (Shera Avi-Yonah, Angela Fu, The Harvard Crimson)
- “[M]ost researchers don’t think in terms of story when they write a journal paper,” but they should, says Anna Clemens. (LSE Impact Blog)
- “Vulnerable patients” are “easy targets for companies willing to sacrifice ethics for profits,” says Jody Lyneé Madeira. (The Hill)
Like Retraction Watch? You can make a tax-deductible contribution to support our growth, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
“Professor Moosa said the fact the Retraction Watch website existed was evidence of the growing problem of academic misconduct.” (Erica Servini, The Australian)
This illustrates why the written scientific literature, not oral debate, is the heart of science. No matter how smart you are, it is easy to say something stupid; in writing, the stupidity becomes obvious to everyone.
Those who do not see the fallacy: could you start a Moosa Watch blog? That would then be evidence that there is something wrong with Australian science.
P.S. It would be nice if links to paywall sites were marked as such.
“39.2% revealed having been pressured by a principal investigator or collaborator to produce “positive” data. 62.8% admitted that the pressure to publish influences the way they report data.”
Ahem.
Quoting such percentages to three significant digits is justified only if the study included on the order of a million participants, to overcome the sampling (shot) noise alone. But then, it is well known that 67.9273446567326232% of all statistical studies report way too many digits.
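The commenter's back-of-the-envelope claim can be checked with the standard error of a proportion, sqrt(p(1−p)/n). A minimal sketch (the function name and the ±0.05-percentage-point precision target are my assumptions, not from the survey):

```python
import math

def n_for_precision(p, se):
    # Sample size n at which the standard error of an estimated
    # proportion, sqrt(p * (1 - p) / n), equals the target `se`.
    return p * (1 - p) / se ** 2

# Reporting "39.2%" implies the last digit is meaningful, i.e.
# a precision of about ±0.05 percentage points (se ≈ 0.0005).
n = n_for_precision(0.392, 0.0005)
print(round(n))  # on the order of a million participants
```

With a realistic survey of a few hundred respondents, the standard error is closer to 2–3 percentage points, so one significant digit ("about 40%") would be the honest report.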