The week at Retraction Watch featured the retraction of a physics society’s press release quoting U.S. president-elect Donald Trump, and an apparent blow to clairvoyance research. Here’s what was happening elsewhere:
- The true — but hilarious — horrors of asking for someone’s data, courtesy of David Crotty in The Scholarly Kitchen.
- The risk to a researcher’s reputation doesn’t seem to be enough to stop scientific fraud. We need tougher deterrents, argues Marilyn McMahon. (The Conversation)
- “Your study has been retracted.” A light-hearted take on the problems plaguing many academic papers. (Chip Rowe, Nautilus)
- “There’s a way to spot data fakery,” our co-founders argue in STAT. “All journals should be using it.”
- Research has failed to show that peer review provides much benefit, argues Richard Smith. So what’s our best option? (BMJ Blogs; disclosure: Smith is a member of the board of directors of our parent non-profit)
- “It was the first time that data – my data – showed me that everything I thought was true was incorrect. It shattered my world view.” Some thoughts from the day after the U.S. election. (Aaron Carroll, The Incidental Economist)
- “Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions,” a new PLOS Biology study finds.
- “In my opinion, clarity breeds precision, and vice versa.” So when writing your scientific papers, follow Meenakshi Prabhune’s golden rule: keep it simple. (Nature Jobs)
- How do you know which journal is right for your paper? Patrick Dunleavy has a comprehensive guide. (The Impact Blog)
- Randomized clinical trials effectively minimize the risk of bias, but their results are still susceptible to trickery such as fudging the data or cheating on the peer review process, says Edzard Ernst. (The Spectator)
- “The World Health Organization has referred leading Oxford University researchers to the UK General Medical Council after an independent review found that they committed research ethics misconduct,” Nigel Hawkes reports. (The BMJ; sub req’d)
- Want to re-use your figures in different papers? Here’s how to do that the right way. (GSN Munich)
- The U.S. Office of Research Integrity revises its plagiarism module to address cultural and linguistic issues and predatory publications. (ORI)
- “In general, researchers tend to be ‘seduced’ by P values, often times relying too much on them at the expense of missing important interpretations of their data.” (Christian Nelson, Elizabeth Schofield, Journal of Sexual Medicine; sub req’d)
- Just days after taking office, Croatia’s new science minister is under investigation over allegations he plagiarized parts of a paper. (Nenad Jarić Dauenhauer, Chemistry World)
- South Korea’s safety minister allegedly plagiarized his dissertation from six other papers. (Lee Han-soo, The Korea Times)
- A new paper on arXiv suggests that a problem with Web of Science’s database can negatively impact researchers attempting to quantify their scientific output.
- The NIH’s new Relative Citation Ratio has quietly become the hottest metric among biomedical funders. (Gautam Naik, Nature)
- Is math a hotter field than physics, economics, and biomedicine? A new paper on arXiv says so, but Carl Bergstrom tells us what it really means.
- “Our editorial team recognizes that this is not the best clinical trial we have published in the Annals of Allergy, Asthma and Immunology. However, neither is [it] the worst.” Edzard Ernst wonders about a paper on acupuncture.
- In recent surveys, more than half of scientists expressed support for open peer review, although many still debate the details of how it should work. (Ewen Callaway, Nature) A related editorial appears in Nature Communications.
- The Council for the Development of Social Science Research in Africa announces a new citation index for the continent. (CODESRIA)
- “Although industry-sponsored studies were more likely to have conclusions favorable to industry than non–industry-sponsored studies, the difference was not significant.” (JAMA Internal Medicine)
- “The researchers concluded that articles in English received more citations than those published in other languages.” (Lilian Nassi-Calò, Scielo Blog)
- What are the rules for electronically stored information when it’s used in a misconduct investigation? Paul Thaler and Karen Karas take a look. (Cohen Seglias blog)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
“The researchers concluded that articles in English received more citations than those published in other languages.” (Lilian Nassi-Calò, Scielo Blog)
The relevant links are http://blog.scielo.org/en/2016/11/04/study-shows-that-articles-published-in-english-attract-more-citations/ and http://link.springer.com/article/10.1007%2Fs13280-016-0820-7 (the original article in Ambio, unfortunately not OA).
Fixed, thanks!
I fully agree that fraudulent use of public funds is a real and serious crime and should have real and serious consequences.
But the US already locks up too many non-violent offenders, and incarceration costs taxpayers a huge amount of money.
I would prefer to see serious fraudsters spend the rest of their lives paying back what they stole out of wages earned by flipping burgers.