The week at Retraction Watch featured news of a publisher hack, and a story about a Nature Cell Biology paper likely headed for retraction. Here’s what was happening elsewhere:
- Our peer review system is like Godzilla: It shouldn’t be able to stand under its own weight, yet somehow it does. The latest from our co-founders in STAT. Related: “An estimated 63.4 million hours were devoted to peer review in 2015, among which 18.9 million hours were provided by the top 5% contributing reviewers.” How much time the world’s scientists spend on peer review. (PLOS ONE)
- “Beyond the time it takes to actually get the science done, peer review has become the slowest step in the process of sharing studies.” But that just means it should be fixed, not abolished, argues Tricia Serio. (The Conversation)
- Wondering when to submit your new paper? “[W]eekend days (Saturday and Sunday) are not the best days to finalize and submit manuscripts.” (Physica A: Statistical Mechanics and its Applications; sub req’d)
- What if we looked at fraudulent studies as viruses? A new paper in Minerva proposes new ways to stop their spread.
- Why research integrity isn’t just “somebody else’s problem”: A presentation by Elizabeth Wager. (Wager is a member of the board of directors of our parent non-profit organization.)
- “Do the academics of the Internet age still communicate as stiffly as their colleagues did at the time of the Apollo programme?” Yes, it turns out, except for the explosion of first-person pronouns. (Nature)
- Wellcome Open Research, a new platform, “will significantly speed up the process of sharing new findings and enable researchers to publish any outputs from the funding they have received without having to persuade journal editors that the work is worthy of publication in a particular journal,” says Rebecca Lawrence of publishing partner F1000.
- “There’s trouble in the Crescent City: questions are being raised about just exactly how an editorial based on embargoed content from the New England Journal of Medicine ended up with editors from a competing journal days before the NEJM publication.” (Peggy Peck, MedPage Today)
- “It is really disappointing that science has not been able to put an end to this.” A disgraced stem cell entrepreneur convicted of administering unproven therapies may be practicing again. (Alison Abbott, Nature)
- “[It] should be noted that the greatest danger to the prestige of the journal is the claim of an unexplainable relationship between the journal and the industry.” Lessons from a retraction, courtesy of Zeki Öngen in The Anatolian Journal of Cardiology.
- Peer reviewer scores of a paper do not predict the impact that paper will have in the future, according to a new paper in Scientometrics.
- “Don’t we already know that science’s reward schemes encourage ‘safe’ research and encourage corner-cutting?” asks Philip Ball of two recent studies. (Nature)
- “[O]nly a minority of biomedical journals require data sharing,” finds a new preprint in PeerJ.
- “[W]hy you should invest time reviewing, how to write a constructive review, and how to respond effectively to reviews of your own work.” (Journal of Consumer Research; sub req’d)
- Five ways supervisors can promote research integrity, and other infographics, from the U.S. Office of Research Integrity.
- “Exploration is important […] But exploration, like anything else, can be done well or it can be done poorly (or anywhere in between).” How to do exploratory studies properly, from Andrew Gelman.
- “Lucky bastard manages to reap psychological benefit from sugar pill.” The Onion’s take on clinical trials.
- “I remember when I was a postdoc. You’re usually looking for one good figure to put in a publication…You may maybe repeat it once, but it’s a whole different level of scrutiny and rigor when you’re trying to develop a therapeutic.” The dirty little secret of biotech. (Damien Garde & Meghana Keshavan, STAT)
- The peer review process can be an anxious time for the author. Biswapriya Misra’s tongue-in-cheek peer review hymn may help calm the nerves. (Nature blog)
- “Indeed, while Garfield had intended the measure to help scientists search for bibliographic references, impact factor (IF) was quickly adopted to assess the influence of particular journals and, not long after, of individual scientists.” Some ways to rethink the metric. (Yan Wang, Haoyang Li, and Shibo Jiang, The Scientist)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
https://popehat.com/2016/11/10/popehat-signal-dutch-blogger-sued-in-florida-for-criticism-of-junk-science/
“It is really disappointing that science has not been able to put an end to this.”
It is not obvious to me why it is the collective responsibility of scientists to prevent medical fraud, nor how “science” is supposed to “put an end” to a guy shifting his scam from one country to another.
The first graf on the Nature language piece would have been a touch better were they able to actually figure out what a split infinitive is.
to actually figure out what a split infinitive is
I see what you do there.
Regarding the PLoS One peer review study, I feel the authors have made a fundamental mistake. They model various scenarios of the potential reviewer pool. For example, in scenario 1, the potential available pool is defined as “researchers who co-authored at least one paper that year.” However, these authors are not all equally available, as papers normally give only ONE contact email. Indeed, the authors’ own paper has four authors and one contact email (for the first author). The other scenarios are thus largely equally flawed. While it could be argued that I (as an editor) could spend extra time tracking down the email address for, say, Philippe Ravaud, in practice I think most editors would not. The lack of equal availability of all the paper’s authors as potential reviewers suggests that the available reviewer pool is significantly smaller than the study proposes.
Thanks for your comment.
When defining the author scenarios, we didn’t try to answer the question of how many reviewers are available to an editor at a given time. If that were the case, then indeed our author scenarios wouldn’t be an appropriate answer. We would also have needed a totally different methodology to address such a complex question. It is something that we plan to do in the future, but we didn’t address it in this paper.
The question we tried to address was rather who qualifies to be a reviewer in general.
Clearly, researchers who are first authors of papers qualify to be reviewers, since they have conducted research on a given topic. Researchers who are last authors also qualify, since they are the supervisors of a given project and/or the directors of the lab. Thus the minimum potential supply of reviewers in a given year comprises those who were first or last authors of at least one publication.
Please note that we use the term potential supply, not real supply, of reviewers. The reason is exactly that not all of those qualified to be reviewers are available to editors at all times. In my personal opinion this is a main driver of the extreme imbalance in effort that we observed: those who do the fewest reviews may be people that editors don’t know (such as young researchers) or people who frequently decline to review.
The reason it is important to know whether enough reviewers exist is that there have been many claims in the literature that there is a shortage of reviewers and that the system will eventually collapse. However, these claims are mostly anecdotal and not backed by data, so we decided to use data to examine whether they are true. If there really were a shortage of reviewers, the only solutions would be to find ways to lower the demand for reviews or to convince reviewers to perform more reviews each year. However, since we showed that there is no shortage of reviewers, a third option is open to editors: to find ways to expand their databases and identify people who were previously unknown to them, because it seems that a lot of them exist.
I hope this answer clarified the issue. I would be happy to answer anything else that might not be that clear in our paper.
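For readers who want to see the supply-versus-demand comparison above in concrete terms, here is a minimal sketch, not the authors’ actual code: the tiny dataset, the field names, and the 2.5 reviews-per-submission figure are all invented for illustration. It simply counts the “first or last author of at least one paper that year” pool and sets it against an assumed review demand.

```python
# Hypothetical sketch of the "potential supply vs. demand" comparison.
# `papers` stands in for a bibliographic dataset; all values are invented.
papers = [
    {"year": 2015, "authors": ["A. Smith", "B. Jones", "C. Lee"]},
    {"year": 2015, "authors": ["D. Chen", "A. Smith"]},
    {"year": 2015, "authors": ["E. Okafor"]},
]

def potential_reviewers(papers, year):
    """Scenario: anyone who was first or last author of >= 1 paper that year."""
    pool = set()
    for p in papers:
        if p["year"] == year and p["authors"]:
            pool.add(p["authors"][0])    # first author
            pool.add(p["authors"][-1])   # last author
    return pool

def review_demand(n_submissions, reviews_per_submission=2.5):
    """Total reviews needed; 2.5 reviews per submission is an assumed average."""
    return n_submissions * reviews_per_submission

year = 2015
supply = potential_reviewers(papers, year)
demand = review_demand(len([p for p in papers if p["year"] == year]))
print(f"{year}: potential reviewers = {len(supply)}, reviews needed = {demand:.0f}")
```

The point of the sketch is only the shape of the argument: the pool is defined by authorship roles, not by who an editor can currently reach, which is exactly the distinction between potential and real supply discussed above.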
Potential reviewers could certainly come to the attention of editors and enter the general pool because they have been first or last authors on at least one paper, but there is also the contributor nomination route, where someone suggests a few names as reviewers of their manuscript. The editor may not use those names immediately, but they end up saved for future reference.
It would be interesting to hear from a few editors. What proportion of their reviewer pool did they find through publication authorship, and what proportion were originally nominated by contributors?