The week at Retraction Watch featured an economist being asked to review his own paper, and a new member of our leaderboard. Here’s what was happening elsewhere:
- When it comes to peer review for certain journals, Dave Fernig is on strike.
- “Science should be more boring,” says Valentin Amrhein.
- “The pressure to publish pushes down quality,” argues Daniel Sarewitz in Nature.
- A survey of members of the American Society of Blood and Marrow Transplantation shows 9% “did not believe in the peer review process,” reports a new paper in Bone Marrow Transplantation. (sub req’d)
- “[F]emale researchers engage in more scientific collaborations,” says a new analysis in Scientometrics. (sub req’d)
- “Why scientists should learn to fail, fast and often,” write our co-founders Ivan Oransky and Adam Marcus in STAT.
- The National Library of Medicine has a new director: Patricia Flatley Brennan.
- “In the long term, it should be irrelevant where a researcher publishes their findings,” according to Vitek Tracz and Rebecca Lawrence, writing in F1000 Research.
- The New England Journal of Medicine has published a new Perspective piece on data-sharing. It does not use the term “research parasites,” nor refer to the Journal’s editorial that did.
- “In science,” write Paul D. Thacker and Curt Furberg in the Los Angeles Times, “follow the money — if you can.”
- “Kentucky judge orders release of secret OxyContin records sought by STAT,” reports David Armstrong for the outlet.
- “For cholesterol study volunteer, an unsettling discovery,” reports Jennifer Couzin-Frankel in Science. The volunteer in question: herself.
- “We apologize for the delay in the reply to Dr Allison’s letter of November 2014, this was probably due to the fact that it was inadvertently discarded.” A group of researchers responds to critiques from David Allison and colleagues in The American Journal of Hypertension. (sub req’d)
- “A big-time neuroscientist threatened to sue when I asked about his side business,” reports Jesse Singal for New York Magazine.
- The Directory of Open Access Journals has removed 3300 journals for not submitting “a valid reapplication before the communicated deadline,” a requirement after the DOAJ introduced more stringent criteria for indexing.
- “What constitutes appropriate peer review for interdisciplinary research?” asks Gabriele Bammer in Palgrave Communications.
- “Can a good tree bring forth evil fruit?” Benjamin Capps explores industry funding of medical research in the British Medical Bulletin. (sub req’d)
- What’s not responsible for the reproducibility crisis in social psychology: “Badly specified theories,” write D. Trafimow and B. Earp in an in-press paper for Theory & Psychology.
- John Oliver dedicates a segment on his show to why many news stories about science should be ignored.
- “‘Predatory conferences’ stalk Japan’s groves of academia,” writes James McCrostie in The Japan Times.
- How often do researchers pay out of their own pockets for lab supplies? asks Lenny Teytelman.
- Altmetrics scores “should never be used to measure the merit of scientific publications,” writes David Wardle in Ideas in Ecology and Evolution, while noting that the company behind the scores “cautions that one should not read too much into these scores without digging ‘deeper into the numbers and looking at the qualitative data underneath.’” Another study in Scientometrics (sub req’d) raises similar questions.
- “We can’t pretend that all papers are anything close to equal in terms of scientific productivity,” writes Steve Shea. The comment thread includes a lot of thoughtful responses.
- If you want to solve the replication crisis in psychology, you need to keep the limitations of questionnaires in mind, says Jana Uher.
- OMICS Group may have a new name, but it’s the “same horrible business,” says Jeffrey Beall.
- A new study of 250 “indie” open access journals founded prior to 2002 “showed that 51% of these journals were still in operation in 2014 and that the median number of articles published per year had risen from 11 to 18 among the survivors.” (PeerJ)
- Scientists need to do more to confront stem cell hype, say Timothy Caulfield and colleagues in Science.
- French-speaking readers: Hear and read our Ivan Oransky talk about retractions on Radio Télévision Suisse and Le Temps.
- “[R]esearchers in arts and humanities and social science should use less adverbs in academic writing,” argues Lei Lei in Scientometrics. (sub req’d)
- Yikes — a New York Times story misreported a Muslim leader’s Snapchat handle as “Pimpin4Paradise786”; it has since been corrected.
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our new daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
Academic papers would also be easier to read if they used internally consistent English – e.g. “use fewer adverbs,” not “use less adverbs” (“fewer” goes with countable nouns, “less” with mass nouns – e.g. fewer pebbles vs. less gravel).
Re Dave Fernig strike:
There has been more than one report of a reviewer stealing an article and claiming credit for it. How can that possibility be prevented if a data-sharing policy is implemented? How are authors’ rights protected during manuscript processing?
There may be a misunderstanding here of what is meant by “data sharing.” In this context, Fernig is talking about the framework for ensuring that the data underlying a paper’s findings are openly available *after* the paper is published. It doesn’t involve public disclosure of the manuscript’s contents or raw data prior to review or publication.
Speaking to the unrelated problem of unethical peer reviewers stealing manuscripts and taking credit for them, it’s something that’s fundamentally impossible to block by technical means: you have to give a peer reviewer the manuscript if you want them to be able to review it. On the bright side, the manuscript tracking systems used by most journals today make it much easier to verify who submitted what text, data, and figures, and when, which makes it relatively straightforward to confirm who the original authors are in the (fortunately rare) event of reviewer misconduct.
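To make that concrete – and this is just an illustrative sketch under my own assumptions, not how any particular journal’s tracking system actually works – provenance of this kind can be established by logging a cryptographic hash of each submitted file together with the submitter’s identity and a timestamp:

```python
# Illustrative sketch of submission provenance logging (hypothetical;
# not the implementation of any real manuscript tracking system).
# Assumes files are hashed at upload time and the log is append-only.
import hashlib
import json
from datetime import datetime, timezone

def record_submission(log_path, submitter, manuscript_path):
    """Append a (submitter, content hash, timestamp) entry to the log."""
    with open(manuscript_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "submitter": submitter,
        "sha256": digest,
        "received_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# In a later dispute, hashing a claimed copy of the manuscript and
# matching it against the earliest logged entry shows who submitted
# that exact content first.
```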