The week at Retraction Watch featured more installments in the seemingly never-ending story of fake peer reviews. Here’s what was happening elsewhere:
- 179 researchers have been indicted in a plagiarism scandal in South Korea, University World News reports.
- “Thank you for your rejection of the above manuscript. Unfortunately, we cannot accept it at this time.” A brilliant response to rejection by journals, from Cath Chapman and Tim Slade in The BMJ.
- Most research institutions routinely break the law that requires them to report clinical trial results, STAT reports. And one researcher says in Science Translational Medicine (sub req’d) that one of the reasons reporting is so infrequent is the need for time-consuming manual entry.
- Elite scientists really can hold back science, says Brian Resnick in Vox.
- Stephen Heard doesn’t want a world in which “preprint servers obviate the need for pre-publication peer review or for the existence of conventional scientific journals.”
- As pressure builds on PLOS ONE to release data from a chronic fatigue syndrome study, Mary Mangan writes at the Genetic Literacy Project about another frustrating experience she’s had with the journal’s data access policy.
- “Do we need more ‘small’ science?” asks Julia Belluz in Vox.
- Gizmodo’s George Dvorsky presents his picks for the most notorious science scandals of 2015.
- Governments routinely hide scientific misdeeds, and it’s time to change that, we argue in STAT.
- Should clinical medicine move to a pre-print model? Michael Lauer, Harlan Krumholz, and Eric Topol say so. (via Cardiobrief)
- The science myths that will not die, courtesy of Megan Scudellari at Nature.
- “Why is so much reported science wrong, and what can fix that?” asks Chelsea Leu in California magazine.
- Which studies earned the most buzz in the media this year? John Bohannon takes a look in Science.
- Brian Nosek, of the Center for Open Science (with whom we partner), is one of The Chronicle of Higher Education’s top 10 people who had an impact on science this year (sub req’d). Hear from Nosek on Health News Review’s podcast, which is free, and also from “why most published research findings are false” author John Ioannidis.
- “More private dollars are bankrolling clinical trials,” Sarah Wickline Wallan reports in MedPage Today.
- Could a first-author h-index provide “a better estimate of scholarly productivity than their respective total publication counterparts?” New paper by Glenn Walters in Scientometrics. (sub req’d)
- Here’s how to write collaboratively, from Stephen Mumford and Rani Lill Anjum in The Chronicle of Higher Education. And here are three myths about authorship, in a new paper in Science & Engineering Ethics (sub req’d).
- Why do scientists like anonymity? asks Lenny Teytelman. Another look from Neuroskeptic.
- Why science needs failure to succeed: A segment from Science Friday.
- Scientists like to cite Bob Dylan in papers, says a new study in The BMJ.
- Chris Hartgerink and Stephen George have proposed a new way to spot problems in data deposited to ClinicalTrials.gov.
- Here’s the doctor who stopped the CDC’s gun control research program, courtesy of Joyce Frieden at MedPage Today.
- “Diversity in medical research is a long way off,” says a new study by Chris Gunter and colleagues.
- The AAAS is reconsidering the Fellow nomination of Patrick Harran, a researcher whose lab tech died in a lab accident in 2009, Chemjobber reports.
- Instead of a peer review, a reviewer sent a warning to a manuscript’s authors, Jeffrey Beall reports.
- “[U]nprofessional behavior in some fields and areas is so common…that it becomes the accepted norm,” writes Chris Parsons at Southern Fried Science.
- “Some Review Journals Do Not Allow Students to Author Reviews: Is this Ethical?” ask three authors in Science & Engineering Ethics (sub req’d).
- “The U.S. Department of Transportation has released a report that establishes objectives to ensure public access to publications and digital data sets arising from any of its managed research and development programs.” (via The Transportation Review Board)
- Lessons of the #overlyhonestmethods hashtag, in Science & Engineering Ethics: Do “epistemic virtues such as standardization, consistency, reportability and reproducibility need to be reevaluated?” (sub req’d)
- More erosion of the Ingelfinger Rule? A social psychologist goes straight to the New York Times with her preliminary research on iPhones and posture (via Nick Brown).
- Expensive and exploratory research biopsies are “overused in early studies of new cancer drugs,” says a new study in the Journal of Clinical Oncology.
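The first-author h-index item above lends itself to a quick illustration. Here is a minimal Python sketch of the standard h-index computation, restricted to papers on which a given researcher is first author. The paper-record format and field names are illustrative assumptions, not the actual method from Walters’s Scientometrics paper.

```python
# Illustrative sketch of a "first-author h-index": the ordinary h-index
# computed only over papers where the researcher appears as first author.
# The record structure below is an assumption for demonstration purposes.

def h_index(citation_counts):
    """Largest h such that the author has h papers cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def first_author_h_index(papers, author):
    """h-index restricted to papers where `author` is listed first."""
    first_authored = [p["citations"] for p in papers
                      if p["authors"][0] == author]
    return h_index(first_authored)

papers = [
    {"authors": ["Walters", "Smith"], "citations": 10},
    {"authors": ["Walters"],          "citations": 8},
    {"authors": ["Smith", "Walters"], "citations": 50},  # not first-authored
    {"authors": ["Walters", "Lee"],   "citations": 2},
]
print(first_author_h_index(papers, "Walters"))  # prints 2
```

Note how the heavily cited middle-author paper (50 citations) is excluded, which is exactly why a first-author variant can diverge from the total-publication h-index.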
Retractions Outside of the Scientific Literature
- Will a retraction and apology put Humpty Dumpty together again?
- A poem is retracted.
- Bill Cosby is demanding a retraction.
- A TripAdvisor user retracts a review of The Cheesecake Factory.
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, and sign up on our homepage for an email every time there’s a new post. Click here to review our Comments Policy.
As pressure builds on PLOS ONE to release data from a chronic fatigue syndrome study
The PACE team have come up with a new reason to deny Prof. Coyne access to the data he requested, i.e. he hasn’t signed the agreement to preserve patient confidentiality. This being the agreement that they haven’t given him any opportunity to sign.
I just fail to understand why scientists should give away their data for free when, in other disciplines, data are bought and sold. In my opinion, the owner of data is the entity that pays for its collection.
I think that’s the point in many of these instances. In the case of grants funded by NIH and similar US institutions, the taxpayer is “the entity that paid for its collection.”
The publication of the data, methods and code is intended to allow replication, which, as has been shown here and elsewhere recently, has been problematic. Some of the topics — healthcare, global warming, toxicity studies — have large impacts on public policy and public spending, and attempts at replication seem eminently appropriate.
Statistical manipulation to produce inappropriate results, whether accidental or intentional, will be exposed along with other problems.
Now, in areas where the data or methods are commercially valuable, the solution is to go ahead and use it commercially, publishing only what can be released … and replicated.
===|==============/ Keith DeHavelle
Under the Bayh-Dole Act in the USA, rights to intellectual property (which includes data) are not automatically assigned to the federal government. They are assigned to the grantee.
In my experience, funding contracts negotiated by research institutions clearly spell out who owns the IP generated through the work. Generally it’s the research institution, not the funder.
One possible difference is where data are patient medical records collected during routine care. Depending on the local laws, the data is owned by the patient – not the service provider and not researchers who may use the data.
It is my understanding that data cannot be copyrighted.
I think the flaw in your argument is that it isn’t their data. The data belong to the organisation that pays the scientists, and when their funding comes from the public, either in the form of fundraising or taxes, the public has a right to expect a return. The public also has the right not to be expected to fund the same research over and over again because those they fund won’t share their data. In the field of genomics it’s generally considered very poor practice not to release most data.
If your data are valuable and you do not wish to share them, don’t publish. It’s that simple.
First, science is ill served by claims that are not supported by free access to data so they can be replicated and verified. Secondly, too often the results are deliberately promoted to invoke fear and concern. You will suffer if you eat this; you will get sick if you don’t do that; the earth will fry if we don’t change; etc. We should not publish any research that seeks to alter what others do if they hide their data. As we see daily on this site, the ethics of many researchers leave much to be desired. The potential to do grievous harm to society is too great not to demand open access of data and methodology.
It seems to me that the primary reason to hide one’s data is fear that someone else will make better use of it. If that’s true, don’t publish. In fact, you are not worthy of being published.
With regard to the BMJ article on rejecting rejections, the authors state under competing interests:
“Although they have received several rejections from prestigious medical journals, at the time of writing they had not been rejected by The BMJ. Should this occur they will no doubt use the strategy outlined here.”
But how would we ever know? If they used the strategy, would The BMJ have then “accepted” their paper, or would The BMJ have rejected their own rejection based on the authors’ rejection of the rejection?
The rejecting-a-rejection paper also features a comment (http://www.bmj.com/content/351/bmj.h6326/rr-1) suggesting that the paper is quite similar to a much earlier rejection-of-a-rejection letter a student once sent to a college after it had rejected his application. The present paper does not list this source. A case for Retraction Watch? 😉
Yes that struck me too. It was Oxford and a female student: http://www.bbc.co.uk/news/uk-england-oxfordshire-16604050