The week at Retraction Watch featured revelations about a case of misconduct at the University of Colorado Denver, and the case of a do-over that led to a retraction. Here’s what was happening elsewhere:
- “An eye for an eye, a tooth for a tooth — a life for a lab book?” A Chinese court recommends punishing misconduct by execution in the most serious cases involving drug approval data. Our co-founders’ latest for STAT.
- “If nobody criticizes my work, I won’t learn that I’m wrong.” Julia Rohrer argues there’s merit in criticism and danger in too much scientific cuddling. (The 100% CI blog)
- Here’s an idea to criticize: “Universities can as well replicate the policies by having annual awards to staff who publish in highly acclaimed international journals.” (Samson Rwahwire, New Vision)
- Elsevier is awarded $15 million in damages in the publisher’s case against Sci-Hub, which offers paywall-free access to countless academic papers and books. (Quirin Schiermeier, Nature)
- A look back on 47 years of Infection and Immunity by outgoing editor-in-chief Ferric C. Fang, and what has kept the journal on the road as other journals have fallen by the wayside. (Note: Fang is on the board of directors for The Center For Scientific Integrity, the parent non-profit organization of Retraction Watch.)
- A researcher at the Oklahoma City Veterans Affairs Medical Center has been fired after he was found to have double-billed for his hours. (Randy Ellis, NewsOK)
- In the middle of an investigation into allegations of misconduct, one University of Tokyo researcher launches a preemptive defense of his papers, saying that an apology and corrections will suffice. (Dennis Normile, Science)
- “Threshold levels for [Journal Citation Report] suppression are set exceedingly high, so high, that even when a journal quintuples its Impact Factor and moves from 10th to 1st place, it may not be sufficient to trigger suppression.” (Phil Davis, The Scholarly Kitchen) See our coverage of one such suspicious journal here.
- The growing trend of using online databases to evaluate academic productivity is undermined by the variation in database quality, says a new letter. (New England Journal of Medicine)
- Andrew Gelman has a candidate for best correction ever: One from Brian Wansink’s lab. (See more here.)
- A top contender for Best Retraction of the Last Century — the L.A. Times sends out an automated tweet about an earthquake from 1925. (Twitter)
- A Cooperation and Liaison between Universities and Editors (CLUE) proposal recommends keeping data for 10 years (Holly Else, Times Higher Education). We spoke with one of the guideline’s authors here.
- “The Impact Factor is neither the only way nor a particularly good way for researchers to assess journal quality.” (Sierra Williams, PeerJ Blog)
- “Replication is impossible, falsification unnecessary and truth lies in published articles.” Discuss. (Matti Heino, Data Punk blog)
- The International Agency for Research on Cancer, part of the World Health Organization, says there will be some changes to a paper following a Reuters story alleging that the agency didn’t have certain data when determining whether glyphosate is carcinogenic. See the original Reuters story here.
- The increasing transparency of peer review has allowed studies to confirm that science’s gender gap extends beyond who does the research to who is asked to review it. (Emma Stoye, Chemistry World)
- Stories of academic retractions can feed a narrative that science is broken rather than self-correcting, says Neuroskeptic (Discover). Related: Check out our co-founder Ivan Oransky’s recent presentation on the weaponization of retractions.
- The case against plagiarism detection software like Turnitin: Companies can strip mine the submitted material and sell it for profit. (Sean Michael Morris and Jesse Stommel, Digital Pedagogy Lab)
- A group of Danish researchers suggests that LIGO’s discovery of gravitational waves may have been premature, and that the signal they thought they found was nothing more than noise. (Sabine Hossenfelder, Forbes)
- China’s ministry of science and technology vows a “no tolerance” approach to academic misconduct following the largest case of faked peer reviews yet. (Yuan Yang and Archie Zhang, Financial Times, sub req’d)
- “So how do you separate bona fide solicitations for journal articles, book proposals, or conference papers from those that are not?” (Brooks Hanson and Jenny Lunn, Eos)
- “A long-debated study aimed at validating a low-cost way to screen for cervical cancer in India has come under fire again,” Charles Piller reports in STAT. Read our earlier coverage of the withdrawal of a paper on the subject.
- “Replication in science, especially for the past fifty years, has been decreasing significantly. Why?” (David Kamper, The Michigan Daily)
- How do we make data count, i.e., make it citable? A new grant supports work toward that goal. (Laura Rueda, DataCite blog)
- Daniele Fanelli has a new version of his “mathematical theory of knowledge, science, bias and pseudoscience.” (PeerJ)
- “Does it ever happen that there are malicious, unfounded accusations of misconduct? What is the best way to proceed with the person who made the accusations?” Ampersand, the PRIM&R blog, talks to an investigator for the U.S. National Science Foundation’s Office of Inspector General, Jim Kroll.
- Bias and unreliability in peer review: Dan Kahan reviews a classic study of the subject. (Cultural Cognition Project blog)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
Elsevier getting 15 million in damages is so important, it gets mentioned twice?
(fourth point, and third-to-last point)
We’ve retracted that duplicate publication — thanks!
With regard to the LIGO announcement of gravitational waves and the re-analysis by Danish researchers suggesting that the signal is really noise, does anyone know of an independent verification of the putative waves using the same raw data and the stated methodology?
In terms of the LIGO detection, one doesn’t even need statistical analysis – the result is visible by eye (the first detection was at least 50 sigma, but in fact so strong they couldn’t really decide on how to estimate it, because there was no noise at that level to actually estimate it – they had to extrapolate the noise from far away in parameter space).
But if you read the analysis of the Danish team, it’s obvious it’s garbage. For instance, they complain that once you subtract the best-fit theoretical template, the remaining signal is correlated. Well, frankly, duh. The templates aren’t perfect matches – they’re theoretical templates constructed for possible signals, but they have finite resolution, so the actual signal is a little off the template. So, after subtracting the template, there’s a bit of signal left, which correlates with the bit of signal left in the other detector. Ditto when part of the “noise” is the sea of low-power gravitational waves we can’t isolate from one another – that should have the same lag time and be correlated.
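To make that point concrete, here is a minimal sketch (toy numbers and a made-up chirp-like waveform, not LIGO’s actual pipeline, templates, or noise model) showing that when two detectors share the same signal and a slightly mismatched template is subtracted from each, the residuals still correlate, while independent noise alone does not:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)

# Toy "signal": a chirp-like waveform shared by both detectors.
signal = np.sin(2 * np.pi * (30 * t + 40 * t**2)) * np.exp(-((t - 0.5) ** 2) / 0.02)

# A slightly mismatched template (frequency off by a little),
# standing in for a finite-resolution theoretical template bank.
template = np.sin(2 * np.pi * (31 * t + 40 * t**2)) * np.exp(-((t - 0.5) ** 2) / 0.02)

# Two detectors see the same signal plus independent noise.
h1 = signal + 0.5 * rng.standard_normal(t.size)
h2 = signal + 0.5 * rng.standard_normal(t.size)

# Subtract the imperfect template from each detector's data.
r1 = h1 - template
r2 = h2 - template

def corr(a, b):
    """Normalized zero-lag cross-correlation (Pearson-style)."""
    return np.dot(a - a.mean(), b - b.mean()) / (a.std() * b.std() * a.size)

print("residual cross-correlation:  ", corr(r1, r2))   # noticeably nonzero
print("pure-noise cross-correlation:", corr(rng.standard_normal(t.size),
                                             rng.standard_normal(t.size)))  # near zero
```

This is only an illustration of the mismatched-template argument; it says nothing, one way or the other, about whether the Danish team’s specific procedure or LIGO’s noise estimates are correct.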
The correlations persist for periods longer than 40 minutes, preceding transient arrivals by >10 minutes. The problem is that subtraction of the template from the whitened data seriously overestimates the majority of the signal strength. The signals themselves are very generic broadband signals common to many non-gravitational sources at many time scales.
Just saw the LIGO noise, much higher than the signal. I mean, is that really so?
This is entirely true. LIGO signals are order-subtracted, yet these processes are not also applied to coincident magnetometers or other multiscaled geomagnetic or space physics data. Transient vacua and extended quasiperiodic coupling events may not have sophisticated exclusion protocols during LIGO data processing if detection thresholds are already inadequate or favor particular averaged windows that can obscure fine variation.