The week at Retraction Watch featured the revocation of a PhD, a questionable way to boost university rankings, and a look at what editors should do when a researcher known to have committed misconduct submits a new manuscript. Here’s what was happening elsewhere:
- A new scam in which callers pretend to be from the NIH and offer grants would be especially cruel for scientists whose grant applications the agency has rejected. (Cristina Miranda, Federal Trade Commission blog)
- Calls for retraction: Do they aim to correct the record? Or “to ensure the record does not challenge cherished preconceptions?” A Lancet Infectious Diseases editorial makes many of the same points our co-founders made a year ago in their STAT column.
- “It is time to consider how we can stop senior academics bullying their way on to research papers.” A Hippocratic Oath for researchers? asks Trisha Greenhalgh. (Times Higher Education)
- The U.S. Office of Human Research Protections has been pursuing far fewer cases than in years past, report Jeannie Baumann and Madi Alexander. (Bloomberg BNA)
- A fascinating history: Publishers killed attempts at a biology preprint service 50 years ago. (Jocelyn Kaiser, Science)
- “Improving scholarly publishing will not be simple or easy.” Kyle Siler on new frontiers. (LSE Impact Blog)
- What’s the “gray literature” at this point? asks the Grumpy Geophysicist.
- “We conclude that ExxonMobil contributed to advancing climate science—by way of its scientists’ academic publications—but promoted doubt about it in advertorials.” (Geoffrey Supran and Naomi Oreskes, Environmental Research Letters)
- “The handling of this case appears to us as an attempt to minimise the problem and to try to make it disappear silently.” The debate over the WHO-INTERGROWTH-21st study continues. (The Lancet)
- “In my own case, I found that I had spent about two-thirds of my effort on projects that never produced a published paper.” John Kirwan looks at “bad ideas” in science. (Nature)
- “This lack of recognition for the value of failure holds back creative risk-taking in science.” (Finn Strivens, The Panoptic)
- “Chemists are finally getting a preprint server of their own, but the idea is still contentious,” writes Rebecca Trager. (Chemistry World)
- A new preprint platform for paleontologists, paleorXiv, launched this week, posting 16 manuscripts on its first day. (Green Tea and Velociraptors blog) Researchers react to its creation. (Ivy Shih, Nature Index)
- “A bold open-access push in Germany could change the future of academic publishing,” report Gretchen Vogel and Kai Kupferschmidt. (Science)
- “Can reviewers rise above their intellectual [conflicts of interest] when performing unblinded reviews…?” Two perspectives in JAMA.
- “Science is broken, at least by any useful definition of the word. Self-correction doesn’t always happen, and science journalists mustn’t be afraid to spell that out.” (Daniel Engber, Slate)
- Backlash from the “global academic community” prompts Cambridge University Press to reinstate hundreds of papers previously removed under pressure from a Chinese government attempt to censor politically sensitive topics. (Holly Else, Times Higher Education)
- A new paper finds that many psychological research findings have weak evidential support because of a low threshold for acceptance. (PLOS ONE)
- BMC Medicine offers a way to increase the number of clinical trial results that get published. (Chris Chambers, The Guardian) See the original announcement here.
- Three journalists in South Africa are suspended for “a serious breach of ethics” after they published a wildly inaccurate story claiming the African National Congress political party was in serious debt. (Sydney Smith, iMediaEthics)
- “Debate about the value of open peer review is often clouded by confusion about exactly what traits are being discussed. Differing open peer review elements need not go together, and they potentially have very different benefits and drawbacks.” (F1000 Research blog)
- A new paper finds that even when psychology papers are successfully replicated, there may be other underlying bad habits that skew the results. (Behavioral Sciences)
- Sometimes reproducibility is simply about checking your chemistry. Monya Baker looks at some initiatives that can help scientists do that. (Nature)
- Proposals in two disciplines to ban graduate students from publishing draw a backlash. (Colleen Flaherty, Inside Higher Ed)
- “Cultural differences between industry and academia can create or increase difficulties in reproducing research findings.” (Science)
- PNAS lifts an embargo early after a news outlet breaks it.
- Four years and 100,000 worms later, a group of scientists haven’t resolved discrepancies in attempts to replicate lifespan extension research — but they’ve caught glimpses of new biology along the way. (Nature). Our co-founders highlighted their efforts in STAT six months ago.
- Should results that can’t be reproduced be published in the first place? (MedPage Today)
- “The R-factor (which stands for “reproducibility, reputation, responsibility, and robustness”) strikes [Neuroskeptic] as a flawed idea.” (Discover)
- “Our results indicate that the propensity to think analytically plays an important role in the recognition of misinformation, regardless of political valence – a finding that opens up potential avenues for fighting fake news.” (SSRN)
- A systematic review of retractions in emergency medicine found that the most common reason for retraction was misconduct by the authors. (European Journal of Emergency Medicine)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.