This week at Retraction Watch featured the return of a notorious fraudster, and plagiarism of plagiarism. Here’s what was happening elsewhere:
- The first of four assessments into former Karolinska Institutet surgeon Paolo Macchiarini’s papers finds he committed misconduct in a paper describing esophagus implants in rats. (Gretchen Vogel, Science) Meanwhile, earlier in the week, the Karolinska was slammed for its decisions (The Local) amid a call for its chancellor and board of directors to resign. (Svenska Dagbladet) And two Nobel judges were dismissed. (The Guardian) Summary here.
- Certain data from the controversial PACE trial of chronic fatigue syndrome will be disclosed to a member of the public by Queen Mary University of London. (press release) Meanwhile, the leaders of the trial re-analyzed their data according to their original prespecified protocol – and found very different results.
- Female scientists are turning to data to show bias at conferences, reports Apoorva Mandavilli in The New York Times. And “Women are more likely to be accepted to speak at academic conferences if applications are anonymised to remove any mention of their gender,” reports Jack Grove at Times Higher Education, based on a new study.
- “In my opinion, this review is irredeemably flawed and should be retracted.” Hilda Bastian doesn’t mince words in her analysis of a paper on women in science. (PubMed Commons)
- Heard of the Indian Journal of Medical Ethics? Our co-founders take a look at why this obscure journal is getting a lot of attention. (STAT)
- Publish or perish: This PhD student has been writing erotic novels to fund their studies. (The Guardian)
- “Fraudulent academic data could cost Duke University millions:” Our Alison McCook speaks to AirTalk about our recent feature in Science.
- Scientists are raising serious questions about a study linking multiple sclerosis to a particular gene. But “like more and more journals, the one that published the paper does not run letters to the editor, making it harder for scientists to see that the claim has been hotly disputed.” (Sharon Begley, STAT)
- “Some publishers show a lack of understanding of research as an iterative process” in their scramble to publish the latest results, Alice Meadows and Karin Wulf write. (The Scholarly Kitchen)
- Academic archetypes are an anti-pattern for reproducibility, says Benjamin Laken.
- Let’s “treat harassment as we would falsifying data or plagiarism—as a type of scientific misconduct.” (Serina Diniega et al, Earth & Space Science News)
- This journal would be happy to have you on its editorial board, as long as you pay for the privilege. (Jeffrey Beall)
- “A new wave of studies that attempted to replicate the promising experiments [involving simple psychological tricks] have found discouraging results,” writes Ed Yong. (The Atlantic)
- “Molecular Biology of the Cell (MBoC) has developed a checklist for authors to help them ensure that their work can be reproduced by others,” writes Mark Leader.
- “Like accident investigations that seek to identify correctable causes and thus reduce the likelihood of future accidents, a robust and informative process for reporting the causes of retraction could provide information to minimize future errors,” write Arturo Casadevall and colleagues in a framework for improving research quality. (mBio)
- “Should psychology researchers focus more on confirming old results and less on new discoveries?” asks Jonathon Keats. (Discover)
- Controversies in the UK over statins have revealed there is a “lack of a central institution where scientists who wish to question the actions or ethics of other scientists or scientific institutions can go,” says Lancet editor Richard Horton. Larry Husten, at Cardiobrief, unpacks the arguments.
- Where are the data? Nature and 12 of its sister titles will require submitted papers to include information about how to access their underlying data.
- A new study looks at whether open access papers are cited more often. Surprise, surprise — they are, just as many other studies have found. (David Matthews, Times Higher Education)
- Are the most prestigious medical journals transparent enough? It’ll only cost you $31.50 to find out. (Trends in Pharmacological Sciences)
- It’s a long march to open data in science, and different fields face different problems, from patient confidentiality to data theft. (Sven Titz, Swiss National Science Foundation)
- “More emphasis on publications means that early-career researchers have become replaceable (and often unemployable) cogs in a paper-production machine, while the amount of unread and irreproducible research and patents has exploded.” Instead, we have to fix the perverse incentives around scientific funding, says Julia Lane. (Nature)
- “Is most published research wrong?” A video from Veritasium.
- “The descriptions of actual [animal] experiments in scientific publications and college textbooks are frequently sanitized,” a sanitizing that covers up the emotional torment the research can cause, says John P. Gluck in The New York Times.
- Why can’t more recent studies replicate the famous result that typically Black names receive fewer callbacks than typically White names? Uri Simonsohn has some ideas. (Data Colada)
- “It turns out that the problem of rejecting a paper, even if the data are false, does not prevent the data from being published in another journal.” Fraud is one of the reasons so few new cardiovascular drugs make it to the clinic, says Stephen F. Vatner. (Circulation Research)
- “Unfortunately, not every claim accorded the status of fact turns out to be true.” A paper explores how scientific claims become canonized — and what publication bias has to do with it. (arXiv)
- Pictures of scientists in the New York Times skew race, but not gender, says a new study. (Newspaper Research Journal, originally sub req’d but made freely available by SAGE after our post)
- Wellcome Trust grantees, want to have your article processing charges covered? Make sure the publisher meets these requirements. (Robert Kiley)
- “As a publishing professional acutely concerned with financial accounting, and reckoning, I concluded publishing economics was the gloomiest science of all.” Journal of Electronic Publishing editor Maria Bonn takes stock.
- “[T]he pharmaceutical publications industry seeks to legitimise ghostwriting by changing its definition while deflecting attention from wider marketing practices in academic publishing,” says Alastair Matheson.
- “Why Do Research Articles in Economics Get Desk Rejection in Reputable Journals?” ask Jonathan Emenike Ogbuabor and God’stime Osekhebhen Eigbiremolen. (IDEAS/RePEc)
- When it comes to funding research, “Some critics have been suggesting that peer review is just too much hard work and perhaps a lottery would be better,” writes Adrian Barnett. “Mind you this is a suggestion from economists, so take that any way you want.” (mBio)
- The “Relative Citation Ratio is article level and field independent and provides an alternative to the invalid practice of using journal impact factors to identify influential papers.” An idea floated in a preprint earlier this year has now been published in PLOS Biology.
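For readers unfamiliar with the metric, its core idea (roughly paraphrased here, not quoted from the paper) is a ratio of observed to expected citation rates:

$$\text{RCR} = \frac{\text{ACR}}{\text{FCR}}$$

where ACR is the article’s citations per year since publication, and FCR is the citation rate expected for the article’s field, inferred from its co-citation network and benchmarked so that NIH-funded papers average an RCR of about 1.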
- The number of authors per paper published by researchers in South Korea has been increasing. (Science Editing)
- Vanity Fair quietly removed an article about a powerful publishing family, J.K. Trotter of Jezebel reports.
- “Tracking contributorship rather than authorship reveals unconscious gender and status disparities in publishing,” argue Cassidy Sugimoto and Vincent Larivière. (C&EN)
- EMBO and Wiley have launched SmartFigures, which allows readers to follow linked data through different papers. (press release)
- “[T]he efficiency of scientific publishing is delicate and very unstable.” (Research Policy, sub req’d)
- “The most productive economists have a US PhD and work in the North. Inbreeding at the national level is associated with higher productivity in the North and lower in the South.” (Economic Notes, sub req’d)
- Pass the Public Access to Public Science bill to promote open access, says U.S. Rep. James Sensenbrenner, R-Wis. (Forbes)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
The Lancet’s Horton is critical of COPE:
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)31583-5/fulltext
The International Union of Crystallography signs the San Francisco Declaration on Research Assessment (DORA):
http://www.iucr.org/news/press-releases/iucr-signs-san-francisco-declaration
Thank you for covering the PACE trial again. I thought I would just provide some context for the figures newly released by the PACE trial’s researchers. They encourage readers to believe that their last-minute release of their own pre-specified primary outcomes supports their old claims.
Their new results show that 10% of those patients who received only SMC were classed as ‘overall improvers’, while 20% of those who received SMC & CBT were, as were 21% of those who received SMC & GET.
In a press release they had previously summarised their findings by stating that:
“In 2011, the first findings from the PACE trial showed that CBT and GET benefit around 60% of patients with CFS/ME, for whom fatigue was the main symptom.”
http://www.ox.ac.uk/news/2012-08-02-two-effective-treatments-cfsme-are-also-cost-effective
This was a non-blinded trial relying upon subjective self-report questionnaires to measure primary outcomes. Those receiving additional CBT and GET were told during their treatment that these treatments had been found to be effective. It was always likely that there would be problems with response bias, and poor results from the trial’s objective outcomes indicate that this was the case:
http://www.bmj.com/content/350/bmj.h227/rr-10
I will resist the temptation to write more.
It is a bit worrying when people start quoting letters to the editor to support their arguments.
The response I linked to cited the relevant papers for the supporting data, so rather than provide a long list of references here I thought that those interested would be able to check the references at the link. I didn’t realise some readers would be so sensitive as to find this worrying.
How are the PACE results “very different”? They are less significant, which is what you would expect going from a continuous to a categorical outcome, but most are still significant and in the same direction.
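The general statistical point (that recoding a continuous measure as a binary “improver” category discards information, typically inflating p-values without flipping the direction of the effect) is easy to see in simulation. A minimal sketch in Python follows; the group sizes, effect size, and threshold are made-up illustrative values, not parameters from PACE:

```python
# Illustrative simulation: dichotomizing a continuous outcome tends to
# weaken statistical significance because information is discarded at
# the cut-point. Hypothetical numbers throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150                              # assumed per-group sample size
control = rng.normal(0.0, 1.0, n)    # control-group outcome scores
treated = rng.normal(0.3, 1.0, n)    # treated group with a modest true effect

# Continuous analysis: two-sample t-test on the raw scores.
t_stat, p_cont = stats.ttest_ind(treated, control)

# Categorical analysis: label "improvers" above an arbitrary threshold,
# then compare the two proportions with a chi-squared test.
threshold = 1.0
table = np.array([
    [(treated > threshold).sum(), (treated <= threshold).sum()],
    [(control > threshold).sum(), (control <= threshold).sum()],
])
chi2, p_cat, _, _ = stats.chi2_contingency(table)

print(f"continuous p = {p_cont:.4f}, categorical p = {p_cat:.4f}")
# Typically p_cont < p_cat: the same effect, in the same direction,
# looks weaker once the outcome has been dichotomized.
```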