The week at Retraction Watch featured the world energy solution that wasn’t, a story about Elsevier and fake peer reviews, and a question from a reader about citing retracted papers. Here’s what was happening elsewhere:
- “It was then we found their name in the newspapers, on Retraction Watch, and in a few other places.” Why following up on fraud matters. (Richard P. Grant, The Guardian)
- “Evidence shows that research abstracts are commonly inconsistent with their corresponding full reports, and may mislead readers.” (BMC Medical Research Methodology)
- “Science is too important and scientific dollars are too precious to be wasting time on work that’s been shown to be fraudulent or wrong.” Our Ivan Oransky reveals how many retractions there were in 2017. (Kelly Crowe, CBC)
- Last year, a German court convicted a rising professor of fraud over a plagiarized thesis. Bernd Kramer says the case is closed, legally speaking. “Nevertheless, questions remain.” (Zeit Campus, in German)
- Librarians Amy Riegelman and Caitlin Bakker offer a guide to understanding retractions. (College & Research Libraries News)
- “We should not allow scientific research in Malaysia to be defamed by Western journalists who lack critical thinking and are poorly trained in science and the philosophy of science,” an anonymous scientist based in the U.S. says. (The Star)
- The UK’s Research Excellence Framework “forces academics to produce scholarship in greater quantity but of poorer quality.” (Rachel Pells, Times Higher Education)
- How long does one particular lab take to publish its findings? Stephen Royle answers. (Quantixed)
- A milestone for “one of the first preprint repositories to specialize in the work of a single country.” (Ivy Shih, Nature)
- “We’ve been told that facts have lost their power, that debunking lies only makes them stronger,” says Daniel Engber. “Don’t believe it.” (Slate)
- “The circumstances of these retractions highlight some of the challenges connected to reproducibility policies,” writes Science editor in chief Jeremy Berg in a progress report. (Science)
- “The unique character of peer review in criminology remains unknown,” says Ethan M. Higgins. (Journal of Criminal Justice Education)
- A professor’s research on fluoride and public health has garnered her “more than her share of haters.” (Tom Blackwell, National Post)
- “Integrity goes beyond avoiding misconduct, and scientific integrity has a wider domain than research integrity.” (David Shaw and Priya Satalkar, Accountability in Research)
- “Violence against scientists is rare in the United States, but occurred at least three times in 2016.” (Journal of the American Academy of Psychiatry and the Law)
- A new study says “it is not the time taken to revise papers but the actual number of revisions that leads to greater recognition for papers in terms of citation impact.” (Scientometrics)
- Two experts on reproducibility, University of Illinois’ C.K. Gunsalus and NPR’s Richard Harris, sit down to talk about…college football and family history, of course. (NPR)
- When first author credit is shared, men are more likely to be listed first than women. (Preprint, bioRxiv)
- An analysis of more than 1,000 retractions we’ve covered, broken down by country. (Scientometrics)
- “There have been two distinct responses to the replication crisis – by instituting measures like registered reports and by making data openly available. But another group continues to remain in denial.” (Shravan Vasishth, The Wire)
- Among clinical researchers, “Sharing manuscripts to a public online platform, instead of submitting to a peer-reviewed journal, would have been considered by 55.2% (n = 186) of respondents.” (Research Integrity and Peer Review)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here. If you have comments or feedback, you can reach us at [email protected].
Great articles to start out 2018.
Ivan gets it and articulates it precisely here:
http://www.cbc.ca/news/health/second-opinion-december-30-2017-1.4468173
“The whole point is to cut down on waste in science,” Oransky said. “Science is too important and scientific dollars are too precious to be wasting time on work that’s been shown to be fraudulent or wrong.”
Richard Grant too:
https://www.theguardian.com/science/occams-corner/2018/jan/04/science-fraud-research-misconduct
“And this brings me back to where we came in, to the effect scientific malpractice has on other people. The scientific fraudster who worked briefly for us had lied and cheated their way to a prestigious position, winning grants and awards on the back of made-up research. In doing so they took resources that could have been used by someone else, someone who wasn’t a liar and a cheat.”
Through winning research funds and positions – deceitfully – they denied an incredible opportunity to somebody who might have actually deserved it; they prevented some other young, honest and hard-working scientist from fulfilling their potential.
“There is a spectrum of wrongdoing in science, from selecting the one result out of many that makes sense to you [NOTE: this is about cherry-picking data or patients], up to deliberately faking figures in papers. And until there is a change in culture such that people are encouraged to say, ‘Guys, this isn’t right’ – such as I might have been a decade ago – there will continue to be high-profile cases of seriously damaging scientific fraud.”
While I’ve lost confidence in ORI, COPE and other watchdogs, there’s hope in investigative [aka just plain good] science journalists (like those of RW, Stephanie M. Lee of BuzzFeed, Ed Yong of The Atlantic, etc).
‘55.2% (n = 186) of respondents.’
Why not give the percentage with all ten digits a pocket calculator provides? They would be only marginally more irrelevant than the decimal digit already cited in 55.2%.
Sigh!
What is up with you people and your hatred of decimal percents? It’s not harming anything.
The first digit after the decimal point is relevant here, and necessary for consistency: 337 individuals completed the survey, and n = 186 of them would have considered sharing their manuscripts on a public platform. Rounded to a whole number, the figure is ambiguous: (186/337)*100 = 55.19% rounds to 55%, but so does (185/337)*100 = 54.9%, and so does (187/337)*100 = 55.49%. Reporting one (and only one) digit after the decimal point guarantees that n = 186 is the correct count.
In contrast, printing (186/337)*100 = 55.19% would be meaningless: since n is an integer and the denominator is only 337, a single decimal already identifies n uniquely, and further digits carry no information. Things would be different with large denominators (say n > 100,000), but that is not the case here. Not to mention cases where n is an astronomically large integer, such as Avogadro’s number: a yield of 55.2% for a synthesis in chemistry is a fantasy.
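To make the arithmetic concrete, here is a minimal sketch (my own illustration; the function name and structure are not from the study) that enumerates every count consistent with a reported figure at each precision:

```python
# Minimal sketch (illustrative, not from the study): with 337 respondents,
# list every count n whose percentage rounds to a given reported figure.
TOTAL = 337  # respondents who completed the survey

def consistent_counts(reported, decimals):
    """All counts n in 0..TOTAL whose percentage rounds to `reported`."""
    return [n for n in range(TOTAL + 1)
            if round(100 * n / TOTAL, decimals) == reported]

print(consistent_counts(55, 0))    # [184, 185, 186, 187]: four counts collide
print(consistent_counts(55.2, 1))  # [186]: one decimal pins down n exactly
```

At whole-percent precision, n = 184 would also round to 55%, so the ambiguity is even wider than the three cases above; with one decimal place, 55.2% maps back to n = 186 alone.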
I think Bernès misses the point a little. There is no particular need to be able to calculate the exact number of respondents from the percentage. A better guide is whether the level of precision printed has any meaning given the likely accuracy of the figure. We know that if we repeated the survey a few times, even under identical conditions, the percentage would vary by at least a few percentage points [sd of a percentage = 100 * sqrt(p(1-p)/N)]. So the digits after the decimal point here are effectively just random numbers of no information value. Although jxj might feel that extra digits do no harm, these extra figures may mislead the casual reader by suggesting greater accuracy than really exists: it is poor scientific communication. And, at least to me, not rounding appropriately suggests a lack of understanding by the authors of what their results can tell us.
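For what it’s worth, a rough check under that binomial approximation (my own sketch, not from the study or the comment thread):

```python
# Rough check (illustrative): sampling standard deviation of the reported
# percentage, using the usual binomial approximation for a proportion.
import math

N = 337
p = 186 / N                                   # observed proportion, ~0.552
sd_points = 100 * math.sqrt(p * (1 - p) / N)  # sd in percentage points
print(round(sd_points, 1))                    # 2.7
```

A standard deviation of roughly 2.7 percentage points means a repeat of the survey could easily land anywhere from about 52.5% to 57.9% within one sd, so the digit after the decimal point is well inside the noise.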