The week at Retraction Watch featured news of a fine for a doctor who took part in a controversial fake trial, and a likely unprecedented call for retraction by the U.S. FDA commissioner. Here’s what was happening elsewhere:
- Meet the world’s most prolific peer reviewer: 661 reviews in less than a year. (Our co-founders, STAT)
- Prompted by a much-discussed piece by Susan Fiske, Andrew Gelman set a history of the reproducibility backlash to the musical stylings of Randy Newman. Read Fiske’s original article here.
- The PACE trial misled millions with chronic fatigue syndrome until independent scientists analyzed the data, writes Julie Rehmeyer. (STAT) You can read the findings here. (Vincent Racaniello, Virology Blog) The Lancet has no plans to correct the PACE paper it published, David Tuller reports.
- “[O]rder of authorship was determined by rock, paper, scissors.” Meghan Duffy looks at the fun ways researchers have worked out a sometimes vexing problem. (Dynamic Ecology)
- As late as the 1980s, in peer reviews, some women at the UK’s Royal Society “were cautioned against being ‘too ambitious’, or for using ‘emotional’ language.” (Camilla Mørk Røstvik, Royal Society Publishing Blog)
- Sometimes peer reviewers end up contributing immensely to a published article. What if we included them as authors? (Florian Naudet and Daniele Fanelli, Stanford University METRICS blog)
- “[W]hile perceived as important, peer review is often regarded as a secondary activity to publication by decision makers across the Higher Education and funding sectors,” says Catherine Cotton, marking the start of Peer Review Week at the Oxford University Press blog.
- Did the University of Washington cover up research misconduct? It sure looks like they did, says Roger Pielke, Jr. (The Least Thing)
- Meet the new Editor-in-Chief of PLOS ONE, Joerg Heber, who says he’s passionate about open access science. (PLOS ONE)
- Whistle-blower Morteza Shirkhanzadeh is fired after an 11-year saga, even as Queen’s University is found to have not properly investigated the case. (Morgan Dodson, Queen’s University Journal)
- Six research groups at the University of Tokyo are under investigation after allegations that they falsified data. (Dennis Normile, Science)
- “As a researcher who gets such severe criticism, you have to go through the 5 stages of grief.” But sometimes scientific criticism has to hurt, says Daniel Lakens. (The 20% Statistician)
- In defense of developmental peer review: Clearing up some misunderstandings of a widely used review system. (American Sociological Association)
- BMC Psychology will experiment with a results-free peer review process in which reviewers won’t see the results until the end. (Bob Grant, The Scientist) See our Q&A with another proponent of results-free reviewing here.
- “We at the Journal are committed to making the sharing of clinical trial data an effective, efficient, and sustainable part of biomedical research,” says the New England Journal of Medicine, introducing four Perspective articles on the importance of the practice, and editor-in-chief Jeffrey Drazen gives his take in their podcast.
- “Journalists have ceded the power to the scientific establishment.” The FDA and other institutions are using that power to manipulate the media, says Charles Seife. (Scientific American)
- A new paper argues that scientific incentives have become perverse, and institutions need to incentivize ethical outcomes and de-emphasize output. (Marc Edwards and Siddhartha Roy, Environmental Engineering Science)
- Four cases for and against industry-funded research. (New York Times) Andrew Brown says what matters isn’t who paid for the research, but the methods. (Slate)
- “I write this article in the midst of a controversy among Spanish scientists regarding an alleged fraud in the works of one of them.” Javier Tejada Palacios reflects on scholarly publishing. (Mètode)
- “I think major journals should discourage and eventually prevent the use of professional medical writers, and I think investigators should be required to write their own papers,” says Ian Tannock, who, along with colleagues, recently published a study on the frequency of honorary and ghost authorship. (Jim Daley, Cancer Therapy Advisor)
- How to review a paper: Elizabeth Pain gathers advice from a number of researchers. (Science) More advice from Alaina Levine, also in Science.
- How to get your research published: Advice from the International Journal of Nursing Studies. (sub req’d)
- “Why isn’t science better?” asks Paul Smaldino. “Look at career incentives.” (The Conversation) Here’s our co-founders’ take from June on the paper on the subject by Smaldino and a colleague. (STAT)
- Reviewers and associate editors are the “invisible hands” of research. (Journal of Business Logistics, sub req’d)
- “Somesh Kumar Mathur, an associate professor of economics at the Indian Institute of Technology, Kanpur, has been found to have plagiarised extensively in his academic writings.” (The Wire)
- “Kurukshetra University has failed to initiate action in a case of alleged plagiarism against an associate professor of the commerce department even after nine years.” (Vishal Joshi, The Tribune)
- “21 Brutal, Honest And Relatable Things That Happened In Academic Publishing.” Kelly Oakes, at BuzzFeed, comes up with a list, some of which will be familiar to Retraction Watch readers.
- The number of hybrid open access papers published per year grew more than 20-fold from 2007 to 2013, to about 14,000. (Journal of Informetrics)
- In biotech and pharma, “Whereas successful university spinoffs tend to emphasize the scientific value of their knowledge and gain reputation through their high-quality publications, other successful firms tend to emphasize the commercial value of their knowledge and gain reputation through high-quality patents.” (Zeynep Erden, Drug Discovery Today, sub req’d)
- “[A] lower percentage of published studies than unpublished studies contain information on side effects of treatments.” (PLOS Medicine)
- “The crucial point here is that none of these important and critical replication papers were published in the journals which published the original papers, and nor did these journals actively encourage submission of other replication studies.” (Solmaz Filiz Karabag and Christian Berggren, The Replication Network)
- A member of the U.S. Congress has a new proposal to fight sexual harassment in academia: Tie grant funding to university investigations. (Azeen Ghorayshi, BuzzFeed)
- The world is full of bibliometric indices, but apparently one group of researchers thought we needed a new one they call the K-index. (arXiv)
- What is the word “precious” doing in the title of a journal that has nothing to do with precious metals? asks Jeffrey Beall.
- Watch out for emails from predatory publishers, say the authors of a new paper in The American Journal of Medicine: You may become a victim of a cyber-attack. (sub req’d)
- Springer Nature will donate one water filter to a non-profit organization for every review completed in one of their journals in 2017. (press release)
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
661 reviews at 38 cents apiece: peer reviewing is officially the world’s worst-paid job.
I am now assessing Publons. I can appreciate that things need to evolve, and Publons is a good way forward. But I sense that there are some problems, starting with the lack of independent verification of what is entered. Has anyone checked the validity and content of the 661 “peer” reviews? Maybe the peer review reports need to have their own post-publication peer review!
Trust me: I started using Publons yesterday to test exactly how one could achieve so much “fame” for potentially doing so little, or at least doing it so fast. In half an hour, I had already “scored” 30 “merit points” for adding post-publication comments. This is going to be the next abused metric, like the JIF. Worse, rather than “compensating” scientists for their peer review work, it has the potential to create an unhealthy competitive environment among scientists, especially competitors. I’m all for recognizing reviewers’ efforts, but Publons does not seem the way to go. Referring to scientists as “sentinels”, or trying to lure them with catchphrases like “Show the world you’re a Sentinel of Science”, “the unsung heroes of peer review”, “Top overall contributors to peer review in science and research”, “the highest achievers in peer review across the world’s journals”, “champions of recognition”, etc., is going to stimulate unhealthy competition, further biasing publishing.
Looking at the following list, I observed some worrying trends:
http://prw.publons.com/sentinels-of-science-recipients-2016
a) In the top 5 “reviewing countries”, many of the reviewers do not appear to be local scientists; rather, many seem to be from Asia or SE Asia. This suggests that country-level peer review metrics may in fact be incorrect or misleading.
b) The categories “Most contributions as a handling editor” and “Editors most committed to ensuring reviewers are recognised” could induce an unhealthy race to handle and complete as many manuscripts as possible, which could be dangerous in a compensatory-type pyramid structure like the one used by Frontiers:
https://forbetterscience.wordpress.com/?s=frontiers
TL, I was not aware that peer reviews are being remunerated, even badly. Can you indicate the URL where financial remuneration is being offered, please? This could be the game changer (and I don’t mean that in a positive way at all).
The “world’s most prolific peer reviewer” received a “prize” of 250 USD for completing 661 peer reviews in less than a year.
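For the record, that works out to the figure quoted at the top of this thread; the arithmetic is simple:

```latex
\frac{\$250}{661\ \text{reviews}} \approx \$0.38\ \text{per review}
```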
AFAIK, in the US, for-profit corporations can get in trouble for using “volunteers” to work for them without paying a salary when compensation is provided through some other financial means (tax issues). Such a prolific peer reviewer surely comes very close to actually being an employee of the journal publishers, and should be paid an actual salary for performing such a service.
TL, thank you for this clarification about the payment/remuneration. Your observation is fascinating. However, the exact source of this cash prize is absolutely unclear. Was it paid by Publons, paid by the publishers, or paid by publishers to Publons, which then draws from a cash pot? Please see the Publons “About us” page:
https://publons.com/about/
It indicates that Publons “is a limited liability company registered in New Zealand and in the United Kingdom”:
https://publons.com/about/company/
On that page you can also see the staff, team members, and advisors.
Finally, your comment that US for-profit corporations “can get in trouble for using ‘volunteers’ to work for them without paying a salary when compensation is provided through some other financial means” deserves greater exploration. For example, some publishers, like Elsevier and Taylor & Francis, publish journals in the US but use peer reviewers for free (i.e., there is no financial remuneration, or compensation of any sort). So I am curious as to how the law works for such publishers. Any insight, or links to US laws on the use of “volunteers” by for-profit companies, would be very welcome. Scientists need to be better educated about such issues.
Publishers, in my view, have been unjustly enriched at the expense of authors, reviewers and academic institutions. I wonder if they will ever be asked to make restitution to the world: i.e. at least make all published research open access.
The relevant law is called the ‘Fair Labor Standards Act’; there are lots of interesting analyses on the web of what kinds of unpaid volunteering are legal under it.
The link for the Springer Nature water-filter peer review initiative seems to be down:
http://www.springernature.com/us/group/media/press-releases/springer-nature-and-journal-editors-team-up-in-project-for-developing-countries-/10726190
I use Publons to show my record as a reviewer because it is hard to track by myself, even if I save the reviews on my hard drive. I am glad I did it: I know exactly how many reviews I have done and how many of those manuscripts have been published. This kind of information is very valuable to me, and to me only. People say it may generate competition and that one could submit fake reviews, but I do not see why anyone would do that with no monetary gain at all, and who cares about it beyond yourself? I would worry more about a meteor falling on my head.
Would that be:
https://publons.com/author/517857/tian-xia#profile
OR:
https://publons.com/author/519463/tian-xia#profile
Like JATdS, I’m skeptical about the figures reported by Publons, for one simple reason: reviewing 661 papers in less than a year, working in the morning, evening and on Saturdays is … impossible.
A key point is that the RW headline mentions “661 reviews” while the STAT article says “661 papers”. The two figures would be consistent only for papers reviewed in a single round of queries by the referee; I estimate that only 10-30% of papers fall into this category. The “661 reviewed papers” figure also assumes that none of the papers was rejected or accepted without comments (I mean the 5-word reports like “all is wrong, don’t publish” or “excellent, publish with no changes”).
Roughly speaking, I think nobody is able to seriously review more than 100 papers per year (I mean normal papers: 5-20 pages, including new results, a discussion, figures, references, etc.).
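A rough back-of-the-envelope check supports this. Assuming, for illustration, 250 working days in a year and a conservative two hours per serious review (both figures are assumptions, not data from Publons):

```latex
\frac{661\ \text{reviews}}{250\ \text{working days}} \approx 2.6\ \text{reviews per day},
\qquad 2.6 \times 2\ \text{h} \approx 5.3\ \text{hours per day}
```

That is more than half of every working day spent on refereeing alone, before any actual research gets done.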
He is only doing statistical reviewing. You can do that quickly provided the study is fairly simple. I find that the journals I review for only ask for a statistical reviewer when a study is complex, so it is usually a reasonable amount of work.
It would become incredibly boring.
Sylvaine, there is another serious issue which is not often discussed (or maybe not discussed at all): when exactly do these “peers” conduct peer review? I observed a chart at Publons (available through my experimental profile) showing most reviewing being done on Mondays, followed by Tuesdays, and then tapering down to Friday, with the least work done on weekends. This implies (or suggests) that these scientists are doing review work during working hours (maybe the really honest peers do the work after hours). Unfortunately, the Publons figures do not show the actual time of day, only the day of the week. I can appreciate that the work is mostly (I assume) done for free (I know of very few paid-for models), but I would argue that work done for free for these for-profit publishers in fact has real costs, either to private institutes, which pay researchers privately, or to taxpayers, who support scientists in national institutes. Nothing in this world is free, and if peers are in fact doing “free” peer review during their regular office hours (assuming a 9-5 job), does this fall within their contractual terms?
There are other issues, and in this sense Publons could prove to be a valuable post-publication tool: to assess who allowed imperfect science to be published, and why and how, and to identify those individuals who “peer reviewed” and approved bad science for publication. So Publons could actually damage the images of some rotten apples as they seek to promote themselves. For example, Publons currently requires voluntary registration (in principle, not unlike ORCID). Yet at what point will it become compulsory (if at all)? As more scientists join, as more publishers join, and as the numbers increase, there will likely be a tipping point between voluntary and compulsory (ditto ORCID). This has good and bad points, of course. However, if and when it becomes compulsory, several issues arise:
a) It may induce a very unhealthy state of competition, even though, as suggested by Tian Xia above, some simply want to use Publons to “organize” their peer review efforts (scientists, I caution, should not be so naive about such new efforts to globalize “recognition”); recognition, like the Journal Impact Factor, is used and abused by publishers and by scientists (despite the claims by many in both groups that it is an innocent metric).
b) What does the openness index used by Publons (see the openness column scores here) actually reflect? Is it equated with transparency? Should peers who have conducted open peer review be awarded the same score as those who have hidden behind anonymity in traditional single- or double-blind peer review?
https://publons.com/institution/?order_by=num_reviewers
c) How many peers are conducting “peer work” simply to advance their knowledge about their competitors? How will the scientific community ever know, if the peer reports are hidden from public scrutiny?
d) In recent times one often reads about problems at PLOS ONE or Scientific Reports, for example, most likely because they are currently vying for top place in OA publishing volume. Could this volume and output/productivity be related to peer review work? Is such work being conducted superficially? Why are such open access journals charging so much in article processing fees when their reviewers, who appear to be quite prolific (according to Publons), receive no money?
https://publons.com/journal/?order_by=reviews
e) As I alluded to above, how does voluntary “peer work” relate to editorial remuneration, as observed at Frontiers? See, for example, the Frontiers journal ranked No. 83:
https://publons.com/journal/?order_by=reviews
f) Publons could be useful for whistle-blowers and critics. For example, what could one say if individuals on the Retraction Watch leaderboard were to appear on Publons? (At the moment registration is voluntary, so we are unlikely to find any of them there, but they might appear if and when Publons becomes compulsory.)
http://retractionwatch.com/the-retraction-watch-leaderboard/
I think there are many maybes and ifs related to Publons. Maybe Retraction Watch could organize an interview with the Publons team to clarify some of these issues and concerns.
Hi Jaime,
Thanks for your comments. You raise some very interesting and valid points. We’d be happy to speak with you or the folks from Retraction Watch to address these. Feel free to contact me: [email protected].
I’m sorry to be hogging the comments section, but Publons, in the context of Peer Review Week, should be the focus of our attention, because it goes to the heart of peer reviewer compensation and of the actual or false incentives that may have resulted in the corruption of the scientific record. We see many experimental systems in place, but none as powerful or prominent as Publons. That is why we should be asking tough questions, not simply accepting this system as it currently stands. As highlighted by Retraction Watch in this weekend’s reads, Naudet and Fanelli make a quite reasonable, if rather unheard-of, suggestion: to include peer reviewers as co-authors of papers, for their intellectual contributions: “occasionally, peer reviewers do such careful and thoughtful work that they end up contributing to the paper more than some of our listed co-authors.”
Yet, IMHO, Naudet may have omitted an important disclaimer, as suggested by his peer reviewer activity shown on Publons:
https://publons.com/author/922954/florian-naudet#profile
So, already, just in this weekend’s reads, we see how Publons can be used to assess potential COIs.
If you are doing 661 “peer reviews” in less than a year, you are part of the problem, not the solution!
When I review something, it takes me about half a day to read it carefully (sometimes a lot more if it’s confusing and verbose). Then I put it aside for a bit, because my most useful thoughts generally take a day or so to bubble up. Then I spend another hour or two writing out detailed comments.
The minimum number of reviews one owes the world is ~3x the number of manuscripts one submits. I try to do a bit more than that just to be a good citizen. But if I did 2 per day, including weekends, I’d be spewing bad judgement right and left.
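That ~3x figure is just the arithmetic of the system: each manuscript a group submits typically consumes two or three referee reports, so reviewing about three times what you submit keeps the ledger balanced. A worked example, with a hypothetical submission count:

```latex
8\ \text{submissions/year} \times 3\ \text{reports per submission} = 24\ \text{reports consumed}
\;\Rightarrow\; \text{owed} \approx 24\ \text{reviews/year}
```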
I hadn’t heard of this Publons thing and need to check it out. Some tracking and recognition that I’ve done my duty to my field would be nice. However, I share other commenters’ concerns that it shouldn’t become another metric that we’re “incentivized” to pervert.
There are many things in this world that we do just because we should – it’s called adulthood. As for those who don’t do their share: there will always be brats who abuse any system we come up with. Just remember: they do have to look in their own bathroom mirrors every morning, and for some of them, that must be a rather unpleasant experience.
Not necessarily. I don’t think it should take half a day, careful scientist. I use a template similar to this one: https://www.ausmed.com/articles/critique-a-research-article/
If you stick to a template and break the review down, it is not that time-consuming.