Weekend reads: World’s most prolific peer reviewer; replication backlash carries on; controversial PACE study re-analyzed

The week at Retraction Watch featured news of a fine for a doctor who took part in a controversial fake trial, and a likely unprecedented call for retraction by the U.S. FDA commissioner. Here’s what was happening elsewhere:

Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.

17 thoughts on “Weekend reads: World’s most prolific peer reviewer; replication backlash carries on; controversial PACE study re-analyzed”

  1. I am now assessing Publons. I can appreciate that things need to evolve, and Publons is a good way forward. But I sense that there are some problems, starting with the lack of independent verification of what is entered. Has anyone checked the validity and content of the 661 “peer” reviews? Maybe the peer review reports need their own post-publication peer review!

    Trust me, I started to use Publons yesterday to test just how one could achieve so much “fame” for potentially doing so little, or at least, so fast. In half an hour, I had already “scored” 30 “merit points” for adding post-pub comments. This is going to be the next abused metric, like the JIF. Worse, rather than “compensating” scientists for their peer review work, it has the potential to create an unhealthy competitive environment among scientists, especially competitors. I’m all for recognizing reviewers’ efforts, but Publons does not seem the way to go. Referring to scientists as “sentinels”, or trying to lure them with catch phrases like “Show the world you’re a Sentinel of Science”, “the unsung heroes of peer review”, “Top overall contributors to peer review in science and research”, “the highest achievers in peer review across the world’s journals”, “champions of recognition”, etc., is going to stimulate unhealthy competition, further biasing publishing.

    Looking at the following list, I observed some worrying trends:

    a) In the top 5 “reviewing countries”, some of the reviewers do not appear to be local scientists; rather, many seem to be from Asia or SE Asia. This suggests that country-related metrics for peer review may in fact be incorrect or misleading.
    b) The categories “Most contributions as a handling editor” and “Editors most committed to ensuring reviewers are recognised” could induce an unhealthy race to complete as many reviews as possible, which could be dangerous in a compensatory-type pyramid structure as used by Frontiers:

    TL, I was not aware that peer reviews are being remunerated, even badly. Can you indicate the URL where financial remuneration is being offered, please? This could be a game changer (and I don’t mean that in a positive way at all).

    1. The “world’s most prolific peer reviewer” received a “prize” of 250 USD for completing 661 peer reviews in less than a year.

      AFAIK, in the US for-profit corporations can get in trouble for using “volunteers” to work for them without paying a salary when compensation is provided through some other financial means (tax issues). Such a prolific peer-reviewer surely comes very close to actually being an employee of the journal publishers, and should be paid an actual salary to perform such a service.

      1. TL, thank you for this clarification about the payment/remuneration. Your observation is fascinating. However, it is entirely unclear who funds this cash prize. Is it paid by Publons, paid by the publishers, or paid by publishers to Publons, which then draws from a cash pot? Please observe the Publons “About us” page:
        It indicates that Publons “is a limited liability company registered in New Zealand and in the United Kingdom”:
        On that page you can also see the staff, team members, and advisors.

        Finally, your comment “US for-profit corporations can get in trouble for using “volunteers” to work for them without paying a salary when compensation is provided through some other financial means” deserves greater exploration. For example, some publishers like Elsevier and Taylor & Francis publish journals in the US, but use peer reviewers for free (i.e., there is no financial remuneration, or compensation of any sort). So, I am curious as to how the law works for such publishers. Any insight or links to US laws related to the use of “volunteers” by for-profit companies would be very welcome. Scientists need to be more educated about such issues.

        1. Publishers, in my view, have been unjustly enriched at the expense of authors, reviewers and academic institutions. I wonder if they will ever be asked to make restitution to the world: i.e. at least make all published research open access.

        2. The relevant law is called the ‘Fair Labor Standards Act’, there are lots of interesting analyses on the web on what kind of unpaid volunteering is legal under this act.

  2. I use Publons to show my record as a reviewer because it is hard to track it by myself, even if I save the reviews on my hard drive. I am glad I did it: I know exactly how many reviews I have done and how many of those manuscripts have been published. This kind of information is very valuable to me, and to me only. People say it may generate competition and that one could submit fake reviews, but I do not see why anyone would do that with no money to gain, and who cares about that beyond yourself? I would worry more about a meteor falling on my head.

  3. Like JATdS, I’m skeptical about the figures reported by Publons, for one simple reason: reviewing 661 papers in less than a year, working in the morning, evening and on Saturdays is … impossible.
    A key point is that the RW headline mentions “661 reviews” and the STAT article “661 papers”. Both figures would be consistent only in the case of papers reviewed using a single round of queries by the referee. I estimate that only 10-30% of papers fall in this category. The “661 reviewed papers” figure also assumes that none of the papers was rejected or accepted with no comments (I refer to the 5-word reports like: “all is wrong, don’t publish” or: “excellent, publish with no changes”).
    Roughly speaking, I think that nobody is able to review seriously more than 100 papers per year (I mean normal papers: 5-20 pages, including new results, a discussion, figures, references, etc.).

    1. He is only doing statistical reviewing. If the study is fairly simple, that can be done quickly. I find that the journals I review for only ask for a statistical reviewer when the analysis is complex, so it is usually a reasonable amount of work.

      It would become incredibly boring.

  4. Sylvaine, there is another serious issue which is not often discussed (or maybe not discussed at all). When exactly do these “peers” conduct peer review? I observed a chart (available through my experimental profile) at Publons showing that most reviewing is done on Mondays, followed by Tuesdays, tapering down to Friday, with the least work done on weekends. This implies (or suggests) that these scientists are doing review work during working hours (maybe the really honest peers do the work after hours). Unfortunately, the Publons figures do not show the actual time of day, only the day of the week. I can appreciate that the work is mostly (I assume) done for free (I know of very few paid-for models), but I would argue that work done for free for these for-profit publishers in fact has real costs, either to private institutes that pay researchers, or to taxpayers, who support scientists in national institutes. Nothing in this world is free, and if peers are in fact doing “free” peer review during their regular office hours (assuming a 9–5 job), does this fall within their contractual terms?

  5. There are other issues, and in this sense, Publons could prove to be a valuable post-publication tool, to assess who, and why/how, imperfect science was allowed to be published, and to identify those individuals who “peer reviewed” and approved bad science for publication. So, Publons could actually damage the images of some rotten apples as they seek to self-promote. For example, currently, Publons requires voluntary registration (in principle, not unlike ORCID). Yet, at what point will it become compulsory (if at all)? As more scientists join, as more publishers join, and as the numbers increase, there will likely be a tipping point between voluntary and compulsory (ditto ORCID). This has good and bad points, of course. However, when and if it becomes compulsory, several issues arise:
    a) it may induce a very unhealthy state of competition, even though, as suggested by Tian Xia above, some simply want to use Publons to “organize” their peer review efforts (scientists I caution, should not be so naive about such new efforts to globalize “recognition”); recognition, like the Journal Impact Factor, is used and abused by the publishers and by scientists (despite the claims by many in both groups that it is an innocent metric).
    b) what does the openness index used by Publons (see the openness column scores here) actually reflect? Is it equated with transparency? Should those peers who have conducted open peer review be awarded the same score as those who have hidden behind their identities in traditional single- or double-blind peer review?
    c) How many peers are conducting “peer work” simply to advance their knowledge about their competitors? How will the scientific community ever know if the peer reports are hidden from public scrutiny?
    d) In recent times, one often reads about problems in PLOS ONE or Scientific Reports, for example, most likely because they are currently vying for top place in terms of OA publishing volume. Could this volume and output/productivity be related to peer review work? Is such work being conducted superficially? Why are such open access journals charging so much money for article processing fees when their reviewers – who appear to be quite prolific (according to Publons) – receive no money?
    e) As I allude to above, how does voluntary “peer work” relate to editorial remuneration, as observed in Frontiers? See, for example, a Frontiers journal ranked as No. 83:
    f) Publons could be useful for whistle-blowers and critics. For example, what could one say if individuals on the Retraction Watch leaderboard appear on Publons (at the moment, it is voluntary, so likely we might not find any on Publons, but they might appear if and when Publons becomes compulsory):

    I think there are many maybes and ifs related to Publons. Maybe Retraction Watch could organize an interview with the Publons team to clarify some of these issues and concerns.

    1. Hi Jaime,

      Thanks for your comments. You raise some very interesting and valid points. We’d be happy to speak with you, or the folks from Retraction Watch to address these. Feel free to contact me: tom@publons.com.

  6. I’m sorry to be hogging the comments section, but Publons, in the context of Peer Review Week, should be the focus of our attention, because it goes to the heart of peer reviewer compensation and the actual or false incentives in place that may have resulted in the corruption of the scientific record. We see many experimental systems in place, but none as powerful or prominent as Publons. And that is why we should be asking tough questions, not simply accepting this system as it currently stands. As highlighted by Retraction Watch in the weekend reads, Naudet and Fanelli make a quite reasonable, but quite unheard-of, suggestion: to include peer reviewers as co-authors of papers, for their intellectual contributions: “occasionally, peer reviewers do such careful and thoughtful work that they end up contributing to the paper more than some of our listed co-authors.”

    Yet, IMHO, Naudet may have missed making an important disclaimer, as indicated by his peer reviewer activity shown by Publons:

    So, already, just in this weekend reads, we see how Publons can be used to assess potential COIs.

  7. If you are doing 661 “peer reviews” in less than a year you are part of the problem, not the solution!

    When I review something, it takes me about half a day to read it carefully (sometimes a lot more if it’s confusing and verbose). Then I put it aside for a bit, because my most useful thoughts generally take a day or so to bubble up. Then I spend another hour or two writing out detailed comments.

    The minimum number of reviews one owes the world is ~3x the number of manuscripts one submits. I try to do a bit more than that just to be a good citizen. But if I did 2 per day, including weekends, I’d be spewing bad judgement right and left.

    I hadn’t heard of this Publons thing and need to check it out. Some tracking and recognition that I’ve done my duty to my field would be nice. However, I share other commenters’ concerns that it shouldn’t become another metric that we’re “incentivized” to pervert.

    There are many things in this world that we do just because we should – it’s called adulthood. As for those who don’t do their share – there will always be brats who abuse any system we come up with. Just remember: they do have to look in their own bathroom mirrors every morning, and for some of them, that must be a rather unpleasant experience.
