The week at Retraction Watch featured three new ways companies are trying to scam authors, and a look at why one journal is publishing a running tally of its retractions. Here’s what was happening elsewhere:
- “Does the American Psychological Association really want to be the publishing equivalent of a shady used car dealer?” The latest on data-sharing from our co-founders in STAT.
- “For data fraud, most Americans support criminal penalties.” (Science and Engineering Ethics, sub req’d) Read an interview we did with Justin Pickett and Sean Patrick Roche, the authors of this paper, when it was posted as a preprint last year.
- Papers on how pigs fly and birds live in the bottom of the ocean have been accepted by a “peer-reviewed” conference. Tom Spears is at it again. (Ottawa Sun)
- A group of psychology researchers calls for quotas on scientific committees to balance gender distribution. (The Research Whisperer)
- “When you call something a ‘crisis,’ it’s easy to blame and point fingers at someone,” says Lenny Teytelman, who organized a panel on reproducibility at a recent meeting. “It’s dangerous and unproductive to panic.” (SLAS)
- “In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience,” conclude Denes Szucs and John Ioannidis. (PLOS Biology)
- “I think the incentives to do these trials will be dramatically lessened if this is going to be the expectation going forward.” An open-data contest earns criticism in spite of scientific discoveries. (Heidi Ledford, Nature)
- “As a peer reviewer, I am interested in a manuscript’s content — not its format.” Research funds are wasted on reformatting manuscripts, says Julian Budd. (Nature)
- Can a film count as research, and if so, can a journal publish it? asks Matthew Reisz. (Times Higher Education)
- Northwestern engaged in “serious violations of academic freedom” in dealing with a journal, says committee. (Peter Schmidt, Chronicle of Higher Education) Earlier coverage.
- Food researcher Brian Wansink responds to criticisms by others of his lab that we’ve highlighted on Retraction Watch.
- “How much can a single editor distort the scholarly record?” asks Phil Davis. (The Scholarly Kitchen)
- “The potential chilling effect of such lawsuits on our modern scientific discourse cannot be ignored, nor should it be tolerated.” A Nature Medicine editorial argues that legal challenges of data-driven findings are a slippery slope.
- The new Journal of Financial Reporting publishes its “playbook”: three strategies to improve its peer review process. (Andrew Gelman)
- “In my view, the investigation should focus on those actually involved in preparing the questionable figures and those directly involved in supervising their production.” British geneticist David Latchman offers his defense to a lengthy misconduct investigation. (Daniel Cressey, Nature) See our previous coverage here.
- Philippine president Rodrigo Duterte appoints a self-confessed plagiarizer — his former professor — to be his consultant on education. (Pia Ranada, Rappler)
- “Despite their moderate advances, women still published fewer articles than men, and were much less likely to be listed as first or last authors on a paper.” (Erin Ross, Nature)
- The authors of a white paper call for the democratization of journal publishing by making the knowledge and resources to do so freely available.
- “So pointing out why a study is not perfect is not enough: good criticism takes into account that research always involves a trade-off between validity and practicality.” (J.P. de Ruiter, Rolf Zwaan’s blog)
- Are mega-journals the future of scholarly communication? asks George Lăzăroiu. (Educational Philosophy and Theory)
- Moving forward, The BMJ will declare all revenues from industry, editor Fiona Godlee reports. (The BMJ)
- “[O]n one hand the annual number of Web of Science publications has increased by 317% between 2000 and 2015, economists distribute their works across a wider range of journals than before, they are more cited and the weighted average of impact factors of all journals where they publish has risen by 228%,” according to a new study of publishing patterns by economists in central and eastern Europe. “On the other hand, however, a number of economists have chosen an opposite strategy and publish mostly in local or ‘predatory’ journals.” (Science and Engineering Ethics, sub req’d)
- Is a paper urging a cure for unreliable studies (and dinged for plagiarism) itself irreproducible? asks Daniel Himmelstein. (Satoshi Village blog)
- “One-third of journals take more than 2 weeks for an immediate (desk) rejection and one sixth even more than 4 weeks,” according to a new paper. (Scientometrics)
- Recruitment of reviewers is becoming harder at some journals, but likely not because of reviewer fatigue, according to a new study. (Research Integrity and Peer Review)
- “I urge journals to start creating databases of qualified and paid reviewers, who will take care of their submissions, by providing timely, high-quality and impartial reviews,” says Eleftherios P. Diamandis. (Clinical Biochemistry, sub req’d)
- “Australia’s main science funder is not taking an evidence-based approach in reforms to its system of funding allocation,” says Adrian Barnett. (Nature Index)
- Preprints on CVs: Dorothy Bishop weighs in. (Bishop Blog)
- The way that publishers insist authors refer to at least one eponymous condition means others can’t find all of what they’re looking for in indices, says Lenny Teytelman. (protocols.io)
- “JAMA Dermatology will aspire to report the demographic characteristics of participants in every study that has a sufficient sample size to assure participant anonymity when race/ethnicity is reported.” A new initiative.
- “As the interest around reproducibility and replicability has built to a fever pitch in the scientific community it has morphed into a glossy scientific field in its own right.” (Jeff Leek, Simply Statistics blog)
- A new paper argues that Gottfried Leibniz plagiarized his invention of the binary system from Thomas Harriot and Juan Caramuel de Lobkowitz. (Science and Engineering Ethics, sub req’d)
- A tool called Everware aims to make it easier for researchers to share their code, one of the biggest problems when it comes to reproducing results. (Jordan Pearson, Motherboard)
- American Chemical Society journals have tightened up screening standards, Derek Lowe reports. (In the Pipeline)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
Lock ’em up! They take taxpayer money and then BS the public with bum reports, all to line their credentials and pockets!
“Research funds are wasted on reformatting manuscripts, says Julian Budd. (Nature)”
Reformatting is frustrating: why don’t journals adopt a universal format and style?
The most pathetic part is that even LaTeX submissions need rewriting because of different sectioning and figure/reference formats.
This is a discipline-specific issue. Most economics journals don’t require you to follow their style on first submission. Even if you make mistakes in following their style on final submission, I usually find their copy editors take it upon themselves to fix things up.
Recently I have submitted to a couple of non-economics journals, and I really don’t understand why they want me to waste time redoing the paper’s style for a first submission they may not accept anyway.
Exactly. Eloquently put. Thank you!
My belief is that journals do this to set up artificial barriers to submission, to prevent machine-gun resubmission of weak papers to every single journal in the field.
OK, then they can ask submitters to reformat once the article is accepted.
The March 7 update to Dr. Brian Wansink’s comment (http://foodpsychology.cornell.edu/note-brian-wansink-research) contains this statement:
“… a master’s thesis was intentionally expanded upon through a second study which offered more data that affirmed its findings with the same language, more participants and the same results.”
Assuming that this is a response to my blog post https://steamtraen.blogspot.fr/2017/03/some-instances-of-apparent-duplicate.html (section E), this appears to refer to these two articles:
Wansink & Seed, 2001: http://link.springer.com/article/10.1057/palgrave.bm.2540021
Wansink, 2003: https://www.cambridge.org/core/journals/journal-of-advertising-research/article/developing-a-cost-effective-brand-loyalty-program/2309B9BDBF47C6CA9ED0A6A0B1D06097
Dr. Wansink’s observation that these two studies show “the same results” is somewhat of an understatement. There are 45 measured variables in the second study of each of these articles (2001, Table 5; 2003, Table 2). Of these, 17 of the 18 numbers that were reported to one decimal place are identical, and 22 of 27 that were reported to two decimal places are identical. I suspect that this degree of (re)measurement accuracy and concurrent apparent absence of sampling errors is probably unparalleled in the history of scientific research.
There has recently been a shift in focus from the problem of research misconduct to a “crisis in irreproducibility.” The shift in emphasis appears to have followed the declaration by Collins and Tabak in Nature that such a crisis exists and that, “with rare exceptions,” it is not caused by research misconduct. (Nature 505, 612-613, 2014)
There may indeed be a crisis, but they refer to only two reports from pharmaceutical companies whose laboratories failed to replicate studies performed by others. They attribute the irreproducibility to flawed research practices, which may well be true. However, their list of such flawed practices inexplicably includes the “application of a secret sauce” to the data. (Merely a flawed practice?)
Of course, upgraded training in research practices will be beneficial, but it should not, and need not, require that diminished attention be paid to the deleterious effects of research misconduct, which will continue unabated until a comprehensive plan to address it is initiated.
Donald S. Kornfeld, MD
Columbia University
I don’t believe that Collins and Tabak in any way caused such a shift; they just became aware of it.
There is further evidence for a serious problem of irreproducibility, notably the work of Ioannidis suggesting that low power and publication (and other) biases have brought us to a situation where half of what is published is not expected to replicate, on statistical grounds alone (a rough sketch of that argument follows the citation below). And of course publications can have a host of other weaknesses beyond the statistics.
Why Most Published Research Findings Are False. John P. A. Ioannidis, PLOS Medicine. http://dx.doi.org/10.1371/journal.pmed.0020124
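For a rough sense of where that “half” figure can come from, here is a back-of-the-envelope sketch using the positive predictive value (PPV) formula from the paper above, without its bias term; the power and prior-odds numbers below are illustrative assumptions, not values taken from the paper:

```python
# Sketch of the PPV calculation from Ioannidis (2005), omitting the bias term.
# The specific power and prior-odds values are illustrative assumptions.

def ppv(power, prior_odds, alpha=0.05):
    """Share of statistically significant findings that reflect true effects.

    PPV = (1 - beta) * R / (R - beta * R + alpha),
    which simplifies to power * R / (power * R + alpha),
    where R is the prior odds that a tested relationship is true.
    """
    return power * prior_odds / (power * prior_odds + alpha)

# Well-powered study of a plausible hypothesis: most significant results hold up.
print(ppv(power=0.80, prior_odds=1.0))   # ~0.94

# Underpowered study of a long-shot hypothesis: only about half hold up,
# even before publication bias or flexible analysis makes things worse.
print(ppv(power=0.20, prior_odds=0.25))  # 0.50
```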
There are more direct replication studies, with psychology leading the way:
https://osf.io/ezcuj/wiki/home/
(39% replication rate).
I think many working scientists have large numbers of war stories about publications whose results could not be reproduced.
In conclusion, misconduct is certainly a problem, but there is a wider problem of irreproducibility. Luckily, to a large extent they have the same solution: full public data access and greater scrutiny will discourage both cheating and sloppy practice. Prevention is better than cure.
The fact that, due to staffing and internal turmoil, ORI isn’t making many findings may be having an impact.
Maybe I won’t be popular for this, but I do not support early data sharing. The way the NIH threw the blood-pressure trial into the open-data contest like a piece of meat is absolutely disgusting. With all the open data-mining tools out there, it will be strip-mined before the actual collectors can start writing their papers.
One of the comments here led me to think the following (surely not an original idea, and maybe this has already been done): since the list of predatory publishers is now defunct, and since it’s probably very hard to keep up with the large number of predatory publishers, we need a group to do the opposite: create and curate a list of reputable journals.
Any journal not listed can be assumed to be either “not yet vetted” or predatory. The curators of the list can be asked to review publishers who want to be on the list. Has something like this been done?
Yes, DOAJ does precisely that, but based on measurable and verifiable criteria, not a curator’s subjective sense of a journal’s reputation.