The week at Retraction Watch featured a retraction from Nature, and a discussion of what it means to be an author on a paper with thousands of them. Here’s what was happening elsewhere:
- Is PubPeer’s Brandon Stell the “vigilante of scientific publishing”? That’s what Steve Kolowich of The Chronicle of Higher Education says.
- “Our peer review system is a toothless watchdog.” Our latest in STAT.
- What’s the best predictor of a journal’s subscription price? A new study in Scientometrics (sub req’d) takes a look.
- “[S]ome academics publish a lot – and others publish at moderate rates, or not at all,” writes Marek Kwiek in Inside Higher Ed. “It has always been so.”
- “It is NOT the role of the reviewer to spot ethics issues in papers. It is the responsibility of the author to abide by the publishing ethics rules,” says Jaap van Harten, Executive Publisher at Elsevier.
- So-called “gold open access publishing” may actually make things worse than they are now, says Björn Brembs.
- Do Sokal-style hoaxes and other “stings” disrupt trust in scholarly communication? Jutta Haider and Fredrik Åström explore the question in the Journal of the Association for Information Science and Technology.
- There’s a new U.S. state-by-state ranking of per-capita research funding. Can you guess which state is #1? Its funding is also 100-fold higher than that of the state in last place.
- “Medtronic collected data on thousands of patients given its Infuse device, uncovering complications,” the Minneapolis Star-Tribune reports. “The problems went unreported for years – even as scrutiny intensified.”
- Why did a culture of preprints develop in physics, but not biology? Anna Nowogrodzki tries to answer in Undark. Meanwhile, a lot of biologists aren’t “educated yet as to what a preprint is, and what the benefits of it are,” Jason Hoyt says on the Mendelspod podcast.
- A journal has created the “Invited Reproducibility Paper” to verify experimental reproducibility. (ElsevierConnect)
- A peer review openness initiative “could backfire if editors and authors feel coerced into data-sharing and so may not be the most pragmatic way of encouraging greater openness,” according to Dorothy Bishop.
- Why are we stuck with the impact factor? Phil Davis answers the question on Scholarly Kitchen.
- “Science fairs are as flawed as my solar-powered hot dog cooker,” argues Carl Zimmer in STAT.
- Here are three things scholarly publishers should know about researchers, courtesy of Charlie Rapple at Scholarly Kitchen.
- It “may be impossible to assess the value” of the National Football League’s concussion study, Joe Elia of the Journal Watch podcast concludes based on an interview with one of the authors.
- Do Scopus and the Web of Science “correct ‘old’ omitted citations?” A new study in Scientometrics (sub req’d) tries to answer.
- A publisher called KnowledgeCuddle. Really? asks Jeffrey Beall.
- David Crotty takes a look at data citation standards, which are improving but still have a ways to go. (Scholarly Kitchen)
- Can science benefit from its version of “Craigslist,” to connect scientists to available resources? Deborah Berry explores in The Conversation.
- Want to learn more about “keeping the pool clean”? Check out this July conference about preventing retractions due to misconduct, featuring our Adam Marcus.
- So now you have your own lab, the big fight edition. (The Mole, Journal of Cell Science)
- The U.S. National Institutes of Health must take “immediate action” to ensure better reporting of clinical trial results, say patient advocates. (Charles Piller, STAT)
- eLife has created a new open-source publishing platform: eLife Continuum.
- Listen to Retraction Watch co-founder Ivan Oransky discuss fraud on the Skepticality podcast.
- A new study in the BMJ, based on decades-old records found in a dusty basement, undermines standard dietary advice, Sharon Begley at STAT reports.
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our new daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
Re preprints. The arXiv site seems to include a large number of negative results — at least in the field I’ve been following over the last few months (planetary science). Perhaps that’s a characteristic of the field, rather than the publishing model. Even so, the preprint system surely makes it easier to communicate negative results.
That’s not research funding, it’s NIH Research Project Grant funding. I suspect Maryland would be far and away the winner if it were total NIH research funding, and probably for total federal research funding as well.
The first article in The Chronicle of Higher Education needs a subscription now. :/
The last item raises some troubling questions, particularly in light of earlier RW posts on research integrity.
The STAT article doesn’t address the issue of consent, either for the original study or for the subsequent use of the data by others. The BMJ paper itself states that: “No consent forms were required because the study diets were considered to be acceptable as house diets and the testing was considered to contribute to better patient care.” and “Residents were given the opportunity to decline participation.” However, participants were residents of state mental hospitals or nursing homes, and it’s not clear whether participants had the legal capacity to consent, or whether they were actually or potentially coerced into participating.
Even more troubling for me was this line in Begley’s story: “Franz lived and breathed science, his sons recall, to the extent that he would have his wife, a nurse, draw his children’s blood so he would have more to study.” I cannot imagine any modern human research ethics committee, or researcher employer, viewing this in a positive light.
The link to the NFL study does not work, and it’s a link into retractionwatch.
Fixed, thanks.
Like ELF above, I was struck by the cavalier attitude towards “subject consent” that prevailed in the late 60s. It seems that anyone was fair game if they couldn’t get away.
The results are certainly timely, with the general pendulum of opinion swinging away from the “animal fats = heart disease” dogma. Four decades later, people are left blinking and wondering how so much dietary policy was made and advice was given… and how much shaky research was published, avoiding close scrutiny because it confirmed what people already believed. In the case of that study, it seems that the “animal fats = heart disease” crusaders were happy to abandon their own data in a basement if it contradicted their position.
After reading Ramsden et al.’s BMJ article (http://www.bmj.com/content/353/bmj.i1246) and the ethics statement in Franz’s study (http://atvb.ahajournals.org/content/9/1/129.long), I’d suggest the BMJ and NIH have also taken a rather cavalier attitude towards “subject consent”. The ethics statement in the reanalysis merely repeats statements in the original study: “No consent forms were required because the study diets were considered to be acceptable as house diets and the testing was considered to contribute to better patient care.” Yet in the details of the study, it seems residents were given samples of the food and were able to decline participation (i.e., dissent). Those who participated then had blood drawn (samples were frozen for later analysis) and an electrocardiogram every six months. There’s an additional statement in the text – but not mentioned in the ethics statement – around autopsies: “The reasons for failure to perform autopsies was almost always refusal by relatives or inability to contact relatives”.
Even if consent was deemed unnecessary for participants in the original study, the re-analysis surely involved the long-term storage of patients’ medical data (separate from their medical records, which may well have been destroyed by now) and the re-use of data for which no consent was ever obtained. There’s no suggestion that either the NIH (which funded the reanalysis) or the BMJ (which published it) even considered ethical or consent issues, or had the protocol reviewed by an ethics committee. If the study’s findings weren’t so novel, there would surely be calls for its retraction. At best, this sends a very confusing message about acceptable practices and research integrity.