Poll: What to do when peer review feels inadequate?

Image via Raul Pacheco-Vega

How should scientists think about papers that have undergone what appears to be a cursory peer review? Perhaps the papers were reviewed in a day — or less — or simply green-lighted by an editor, without an outside look. That’s a question Dorothy Bishop, an Oxford University autism researcher, asked herself when she noticed some troubling trends in four autism journals.

Recently, Bishop sparked a firestorm when she wrote several blog posts arguing that these four autism journals had a serious problem. For instance, she found that Johnny Matson, then-editor of Research in Developmental Disabilities and Research in Autism Spectrum Disorders, had an unusually high rate of citing his own research – 55% of his citations were to his own papers, according to Bishop. Matson also published a lot in his own journals – 10% of the papers published in Research in Autism Spectrum Disorders since Matson took over in 2007 have been his. Matson’s prodigious self-citation in Research in Autism Spectrum Disorders was initially pointed out by autism researcher Michelle Dawson, as noted in Bishop’s original post.

Peer reviews lasting a day or less were also common. Matson no longer edits the journals, both of which are published by Elsevier.

Bishop noted similar findings at Developmental Neurorehabilitation and Journal of Developmental and Physical Disabilities, whose editors (and Matson) frequently published in each other’s journals, often after short peer reviews: the median time from submission to acceptance for Matson’s papers in Developmental Neurorehabilitation between 2010 and 2014 was a day, and many were accepted the day they were submitted, says Bishop.
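
For readers who want to check such figures themselves, the underlying arithmetic is simple. Below is a minimal sketch in Python – not Bishop’s actual analysis, and with invented dates – of how one might compute submission-to-acceptance lags from the received/accepted dates journals print on each article:

```python
from datetime import date
from statistics import median

# Hypothetical (received, accepted) date pairs; a real check would take these
# from the "Received"/"Accepted" dates in each article's metadata.
papers = [
    (date(2012, 3, 5), date(2012, 3, 5)),    # accepted the day it was submitted
    (date(2013, 7, 1), date(2013, 7, 2)),    # accepted the next day
    (date(2011, 1, 10), date(2011, 4, 20)),  # a more conventional lag
]

# Lag in days between submission and acceptance for each paper
lags = [(accepted - received).days for received, accepted in papers]

print("median submission-to-acceptance lag (days):", median(lags))
print("share accepted within 2 days:", sum(lag <= 2 for lag in lags) / len(lags))
```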

Although this behavior may seem suspect, it wasn’t necessarily against the journals’ editorial policies. This is the peer review policy at Research in Developmental Disabilities (RIDD):

In order to maintain a rapid rate of review all submitted manuscripts are initially reviewed by the Editor in Chief for completeness and appropriateness to the journal’s stated Aims and Scope. Manuscripts that pass the initial review will be handled by the Editor, sent out to reviewers in the field, sent to an associate editor for handling, or some combination thereof, solely at the discretion of the Editor.

Still, that leaves many, many papers that went through a perhaps inadequate peer review – a phenomenon that is not unique to autism journals. What should be done about such papers? We reached out to several experts to find out their opinions, and present a selection of the responses here. We also want to hear yours – check the poll at the bottom, and feel free to leave a comment.

Let’s start with Bishop herself, who was kind enough to chat with us about whether retractions are in order:

A lot of this work is not very good, a lot of it is about clinical populations, and so we have standing in what looks like a peer reviewed record, things that are low quality or controversial or don’t replicate.

The other concern of a lot of the junior people who have contacted me is the people who have been doing this have been richly rewarded – they get jobs, they get prizes, they leapfrog over the honest people in terms of getting promotions and so on…and that has been greatest theme among people who are more junior, they’re very happy that this behavior will now not get rewarded.

I don’t know whether you would need retractions for that to be achieved…I think I’m reasonably satisfied that these two blogs have created an absolute firestorm, and stopped this from happening. But there is this niggling thing that there are all these papers in the scientific record [that have not been adequately peer reviewed].

Virginia Barbour, chair of the Committee on Publication Ethics (COPE), sent us a statement from the organization:

We agree that there is a need to put a note on each paper now that questions have been raised over the review of the paper and that the editors are assessing each paper individually. Then the new editors need to do just that. The question of authors’ complicity in the process will have to be part of that review but shouldn’t be a criteria on its own for retraction which should be about more, particularly whether the findings themselves are unreliable.

COPE has issued guidelines for deciding when to retract a paper.

We didn’t survey all the journals Matson edits, but one researcher – Sue Fletcher-Watson, at the University of Edinburgh – told us one of her papers also appeared to have received an unusually quick review at Springer’s Review Journal of Autism and Developmental Disorders:

I submitted a single-author review paper to RJADD, where Dr Matson is editor-in-chief.  I was pleased not to have to respond to reviews – the paper was published in its original, submitted version – but also unsettled by the experience. I don’t think I should have to retract the paper as I submitted it in good faith and wasn’t clear (and I am still not clear) whether the paper was reviewed or not.  After all, it is technically possible, if vanishingly unlikely, that reviewers did see the paper but had no substantive recommendations to make. I would be comfortable with an author’s note to the effect that this paper was not externally reviewed however – I’ve no desire to dupe fellow academics or any other readers.

Ferric Fang, a University of Washington microbiologist and Retraction Watch board member who has published papers on scientific misconduct, thinks the issue at the autism journals is the tip of the iceberg:

I see this as part of a broader problem involving the proliferation of scientific journals that do not adhere to accepted practices, which in turn is a form of cargo cult science.  It is certainly appropriate to bring attention to journals that fail to engage in a rigorous peer-review process.  Articles published in such venues should not be regarded by the scientific community in the same manner as those that have been vetted and revised in response to informed critiques prior to publication.  So-called ‘post-publication’ peer review models raise some similar issues: http://scholarlyoa.com/2015/01/06/im-following-a-fringe-science-paper-on-f1000research/.

Retraction is not a suitable tool to address this problem, as this would require cooperation from the same journals that are failing to institute appropriate peer-review.  Rather, readers, grant reviewers and institutional promotion committees among others must pay close attention to the journals in which scientists are publishing and ensure that they meet certain standards.  This also reveals a problem with simply relying on impact factor and bibliometrics to assess journal and research quality.  The case you have cited shows that if a journal permits excessive self-citation and publication without peer review, then that journal and its authors may be able to inflate their numbers in a spurious fashion. Scientists who value their reputations should scrupulously avoid such journals.

Michael Osuch, the publishing director for neuroscience and psychology journals at Elsevier, posted several comments on Dorothy Bishop’s blog spelling out how the two Elsevier journals were changing their policies. A representative confirmed the comments were authentic, but declined to add anything else about what would happen to the papers published under Matson that may not have been adequately reviewed. From the comments:

Under Dr Matson’s editorship of both RIDD and RASD all accepted papers were reviewed, and papers on which Dr Matson was an author were handled by one of the former Associate Editors. Dr Matson and his team of Associate Editors stepped down at the end of 2014, and he retains an honorary position of ‘Founding Editor’, which does not include handling manuscripts.

In the meantime, our publishing staff have been working, alongside a freelance editor, at securing at least two suitable expert referees for papers submitted. We are working to avoid a backlog and limit any potential delays for authors of those papers. Editorial decisions themselves will be made by the new editorial team when appointed shortly. As we move towards making those appointments, we will ensure that quality peer review remains our priority.

In a minority of cases, Dr Matson acted as sole referee. Our focus is now on the future by bringing on board new editorial teams whose priority will be to ensure that all accepted articles have a minimum of two referee reports.

Matson himself had few words for us about the papers:

They were all peer reviewed. Look at the post from Elsevier on her blog.

We believe Matson’s quote refers to Osuch’s comment, though he has not responded to a request for clarification. Jeff Sigafoos, an editor of one of the two other autism journals, whose own articles were often published in Matson’s journals, told the Guardian (in reference to 73 papers published within two days of receipt):

The figures you state for 73 papers is routine practice for papers published in RIDD and RASD. A large percentage of all papers published in any given issue of RIDD and RASD appear to have received a rapid rate of review as indicated would happen in the official editorial policy of these journals.

When we contacted Sigafoos, he declined to comment, but said that he felt his comments in the Guardian were taken out of context:

Yes we do have a response that will appear in an appropriate outlet in the near future.

We also made a response to two reporters from the Guardian, but they actually used very little of that and shortened some quotes in a way that I think somewhat undermined the valid points that we were trying to make.

I hope you can appreciate that due to that experience, we would prefer to disseminate our response in a venue that will maintain its integrity.

We’ve reached out to Informa Healthcare, which publishes Developmental Neurorehabilitation; Taylor and Francis, the previous publisher of the journal; and Springer, which publishes Journal of Developmental and Physical Disabilities. We will update if we hear back.

Update 3:40 p.m. EST 4/8/15: The post has been updated to reflect Michelle Dawson’s role.


31 thoughts on “Poll: What to do when peer review feels inadequate?”

  1. In my view, if it turns out that any papers did not receive adequate peer review, the authors should consider requesting that the papers be withdrawn – not because of any fault in the papers, but to protest the fact that the journal did not provide an adequate service. These papers could then be resubmitted for a full peer review, either at the same journal or elsewhere.

    Failing this, I’d suggest that the articles in question be subject to a mass Expression of Concern, outlining the facts, following the precedent set by BioMed Central in the rather similar case of the journal Head and Neck Oncology.

  2. There is an unstated issue with short/cursory peer reviews. This is often a characteristic of the avalanche of predatory journals which have come to the fore of late. These have short or non-existent peer reviews, publication is expensive, and the journals have a very short history. I am not familiar with the journals which are most extensively discussed here.

    1. What makes this case so unusual is that the journals in question were not predatory journals from dodgy publishers. All 4 journals were published by apparently respectable publishers, listed in Web of Science, with detectable, if not stellar, impact factors. The world at large – including many unwitting authors (and presumably Thomson Reuters) – assumed they were peer reviewed. The fact that Elsevier appears to think it unnecessary to take any action about its two journals is of deep concern, as I explain in my latest blogpost.

      1. Unfortunately it’s not as unusual as we may think. I believe Phil Davis exposed what he referred to as a citation or peer review “cartel” a year or two ago on the Scholarly Kitchen. There have been others as well. One of the problems for any publisher is that they kind of cede responsibility and trust that the editors they put in charge of a journal will act ethically. 9 times out of 10 they do, but there are those who ruin it for the rest due to a variety of motivations.

  3. I think this is an excellent set of questions being posed here. In my mind, there is no doubt that there is a subset of the scientific community that literally begs for a “lenient” peer review, and feels chuffed with itself when it gets a breezy peer review. This is precisely the reason why, IMO, there is such an efflux of scientists to the “predatory” OA journals as listed by Beall. Because, very literally, money can buy them “easy” peer reviews (more likely no peer reviews), and thus, in many cases, easy publications.

    The issue becomes more problematic when we are dealing with Impact Factor journals because these continue – unfortunately – to be associated with “quality”.

    I think there are two issues here: a) a personal sense that not enough quality control was imposed on one’s own paper; b) a sense that papers, more widely, within a journal, have not been adequately peer reviewed. In a), one way to circumvent this is by aiming for a higher-IF journal, higher than one would most likely expect, to invoke a challenging peer review. Excessively high-ranking journals would undoubtedly reject the paper upon submission. In the plant sciences, for example, where the field of study can sometimes tend to be extremely narrow, and where the qualified peer pool is equally narrow, there is a very limited choice at present, explaining precisely why there is a boom of predatory OA “plant” and “agricultural” journals, to cover the gap that we are seeing in traditional STM publishing. Solving a) will not be able to rely on the authors themselves.

    b) is a much more complex issue. Here, we are dealing with editorial boards that are always changing (or at least, IMO, should be changing regularly every 3-4 years to avoid stagnation). Thus, papers published in the very same journal 1, 3, 5, 10 and 20 years ago may reflect very different levels of “quality”, depending on who was steering the boat at the time. This is precisely why post-publication peer review is so essential. Because we have no idea how well, or how effectively, peer review took place. Or even if peer review took place at all (in 100% of cases). So, for the scientific community to trust what publishers have put out there in journals is dangerous. Trust has been lost, at least by me, and most certainly in many mid-tier plant science journals, where I am witnessing a veritable wave of errors, maybe not major, but errors nonetheless, that should never have gotten approved.

    The key question here is: who is responsible? Again, IMO, authors are responsible for their data, but in terms of the accuracy of statements, and minor errors, this is most definitely an editorial responsibility. Finally, the publisher is responsible for any paper it has approved. I see an excessive focus on the authors, and not enough of the blame being placed on publishers, who have benefitted primarily in two ways: i) profits and revenues (and in most cases through an inherently exploitative model, i.e., free peer reviewers and free editors); ii) an Impact Factor.

    So, we are truly in a sad situation at the moment, truly. Where trust has been lost, where quality has not been guaranteed, and where it is now incumbent upon authors, editors and publishers to come together to collectively examine the already published literature. We can find multiple new and stimulating ways to improve the peer review system moving forward, but while we do this, we have got to clean up the mess that others have left behind for us. And in plant science, I continue to claim that it’s a big mess. Most likely those who are sitting at the top of the perch aren’t feeling, or sensing, this, while the vast majority simply doesn’t care, or thinks that this is something that is not their responsibility. So, the other great sadness is the silent “impassive” scientist, who sees problems, but does not report them, and sits idle.

    A separate problem is money, an issue I touch upon here:
    http://retractionwatch.com/2015/03/26/biomed-central-retracting-43-papers-for-fake-peer-review/#comment-455907

    So, thank you Dorothy, for your analysis. We need many more like you.

  4. In my experience, I have discovered that some percentage of reviewers never really enjoyed this task, but took it on only as a means to legitimize their careers at early stages. Once established, I have heard many express that they “were relieved” to be liberated from the process. So perhaps at the root of this, journals need to do a better job of screening potential reviewers, and excluding those whose hearts are not in it…

    1. I agree, but faint-hearted reviewers were not the problem here. Instead, we had editors who would accept papers by their friends within a day or two of submission.

      1. That kind of circumstance had not occurred to me. I’ve worked across a broad array of science, and I can recall one field in particular (which I won’t mention in mixed company) where the cronyism was so over the top I found it simply nauseating. Publishing is the one place where this should not exist. Yet it seems to poison the well time and again. It makes me wonder why all journals haven’t adopted double-blind review standards…

  5. Does it REALLY matter, when reviews for other non-rapid-review journals typically sit on a reviewer’s desktop until the day before they’re due anyway?

    I suggest all papers are under active peer-review for about a day, regardless of what journal they are published in. An extended period allowed for review means an extended amount of time peer reviewers can let the manuscript sit before they get to it.

    1. Are you clear on the volunteer nature of review? It takes some time – I spend a minimum of 4 hours on a review. You are suggesting that, when I accept a review request, I should cancel everything else and do the review? There is a serious issue getting qualified reviewers. The modest flexibility afforded by journals makes it possible to review.

      1. *snippy response deleted*… Yes, you spend 4 hrs. Why don’t you spend 8? Maybe an 8 hr review would be better than a 4 hr review… Some people might suggest that taking only 4 hrs to review a paper is pretty slack.

        Which gets to my point… Having 2 weeks, 6 weeks or 3 days to review a manuscript doesn’t mean anything when we take as much (or as little) time as we need to review a manuscript.

    2. Yes it matters. It’s clear that papers cannot be received, reviewed and accepted in a day. It’s simply not possible for an editor to find external reviewers, solicit their willingness to review (or not), send them the paper and have the reviews back all within a day or two. It doesn’t happen – and certainly not as a matter of course.

      Incidentally, I take a week or so to review a paper. This involves reading it, thinking about it, rereading with a more critical appraisal of particular points, maybe doing some checking of stuff, having a think about whether the authors have missed citing relevant literature etc., and writing an appraisal. It might be 6 to 8 hours of work overall, but at least for me reviewing a paper is not something you just sit down and do from start to finish…

      Maybe the papers being discussed here in these journals are of a different nature to normal scientific research articles…

  6. Masked Avenger
    I suggest all papers are under active peer-review for about a day, regardless of what journal they are published in.

    I strongly disagree; I like to decide when I will read the manuscript and write the review (usually not on the same day – things take time). With this kind of rule, I would probably refuse most invitations to review, and editors might have even more problems finding potential reviewers…

  7. Lest anyone think that this is limited to the lower end journals, here’s an example at PNAS last year… http://www.pnas.org/content/110/26/10836.long

    The paper was handled at the journal by the NAS member whom the senior authors trained with, and with whom they’d published over 20 papers, including one just a year before. This despite a PNAS rule (clearly stated in the author guidelines) that papers must not be handled by co-authors from the last 3 years. This indicates not only compromised peer review, but compromised editorial action in not picking up the conflict.

    Because there were a bunch of other issues with the paper, PNAS issued an erratum which includes the simple disclaimer “a conflict of interest statement was omitted”. One can’t help thinking that if the editor had been less conflicted, the peer review would have been somewhat more rigorous, obviating the need for an erratum altogether.

    The episode took several months to resolve, with constant reminders to the journal to keep on it. The end result is a slap on the wrist: rules were broken, move along, nothing to see here. Why even have rules, if you’re going to allow a cop-out when people are discovered blatantly flouting them?

    So yeah, count me firmly in the “re-review” column on this poll.

    1. That’s a much less problematic example, Paul. The Singh et al PNAS paper was received in January and approved in May. There’s no evidence that it didn’t undergo proper peer review, and the months-long received-accepted timeline rather supports a normal peer-review process. Yes, the editor should have declined to deal with this paper, but he didn’t, and that was an error – good for PNAS for pointing that out – the corrections seem rather trivial to me…

      ..what do you think is so problematic about this that makes it worth highlighting in the context of the stuff discussed on this thread?

      1. chris
        ..what do you think is so problematic about this that makes it worth highlighting in the context of the stuff discussed on this thread?

        It highlights the fact that less-than-optimal peer review is not the sole preserve of the predatory, low-end open access journals. It’s not just that the editor should have declined; their immediate superior (Associate Editor?) should have noticed the conflict. What in the system allowed this to go unnoticed? I’m guessing the embedded trust and the level of autonomy afforded NAS members within the journal’s online management system might have something to do with it.

  8. At the risk of using this issue as a shameless plug, let me point out that this was among the reasons we formed PRE and PRE-val. These situations most likely would not have passed our requirements for peer review and, if appropriate, we could have alerted the publisher and assisted them if indeed something unethical was occurring. It’s a shame when (IMO) the large majority of editors and reviewers act responsibly but are tarnished by bad apples. Not saying that’s what happened here, but if it looks like a duck and quacks like a duck…

  9. I’d like to point out that “reviewed in a day” should be something for referees to strive for, not something to put down as a sign of poor quality.

    I am of the opinion that too many faculty and professionals accept review assignments and proceed to simply sit on them for two weeks. I make every effort to turn a review around within two days, and ABSOLUTELY within the first week. If one can’t do it in that timeframe, perhaps he/she should not be accepting new assignments.

    Note: I suspect that this refers to the journal in question rendering a decision within one day, which is obviously nigh impossible if they actually seek outside feedback. However, the phrasing was ambiguous and so I thought I would add my thoughts on the alternate interpretation.

    1. As a referee, I do it the other way round: I never agree to do a review in less than a month, and I usually ask for more time. But then I try to stick to the deadline I promised.

    2. Quite a few years ago, perhaps 20, I read that some economics journals were trying the experiment of paying peer reviewers, with the amount of the honorarium decaying rapidly to zero. I have no idea what became of that.

  10. This is precisely the reason why I flat-out refused to pay the fees for publishing in GM Crops and Food [1]. Because the “peer review” was basically one sentence and instant acceptance. A total farce by what was supposed to be a “respectable” peer reviewed journal. So, personally, I don’t care if the “removal” of my paper is called a “withdrawal” or a “retraction”, but what does concern me is that this Taylor and Francis (at that time Landes Bioscience) journal failed to conduct any peer review at all. Ultimately, what that case study indicates is that when the leadership of a journal fails, the true victims are the authors. Not always, but certainly in this case. I should add that that paper is in review elsewhere, but it is painful, extremely tough and embarrassing to have to submit to a journal, having to declare that the paper (at least a previous version) had to be withdrawn due to issues with payments.
    [1] http://retractionwatch.com/2014/11/20/journal-retracts-paper-when-authors-refuse-to-pay-page-charges/

  11. I just received a rejection e-mail today from 3-Biotech, published by Springer:
    http://www.springer.com/chemistry/biotechnology/journal/13205
    The rejection e-mail is not the problem, although the reasons for the rejection are pretty weak (but correctable) and the wording is shoddy.
    The concerning part of the rejection is the fact that the editor-in-chief, Prof. Futwan A.A. Al-Mohanna, revealed my username and password to my co-author.
    Is this acceptable? Furious with this breach of privacy, I have just issued a formal complaint and copied the three Springer “contacts”, requesting a formal explanation. I don’t know about other scientists, but I am encountering literally at least one problem a day with publishers. And this Springer journal in particular has multiple problems.

  12. As soon as publication becomes an end in itself and not a means to something greater, this is precisely what happens. When reward systems stop really trying to evaluate contribution in favor of just counting, the stage is set. In the Atlanta public schools cheating scandal, what was important was the numbers – the scores on tests – and not what the numbers were supposed to represent. Pure economic rationality brings us to exactly this point.
    If these journals want to retain their prestige, then they have an obligation to clear this cloud. They could do it by organizing a task force and literally plowing through these papers, giving them something like the review they should have received initially. A band of people, working cooperatively, might make a great deal of headway, all the more so for considering multiple papers by the same authors at the same time. This group might at least place papers into classes–no obvious issues, some concerns, major concerns, trash–and set the stage for more detailed critique across the whole corpus. Review papers detailing this work would provide some byline compensation for participating in this effort.
    That said, peer review has always had a dark side. When preparing to submit a paper, many or most authors consider who will oversee the review process, favoring journal outlets where review supervision will be “fair.” “Establishment” authors and friends of the editor get the benefit of the doubt–minor flaws are overlooked–while papers by unknowns from undistinguished institutions get rejected for precisely the same flaws.
    Peer review can also be a vehicle for preserving bad ideas and inhibiting innovation. Looking back on his career building the academic field of forecasting, J. Scott Armstrong credited the editor of the journal Interfaces with letting Armstrong write a regular column, free from peer review. That let Armstrong invest in and introduce ideas from outside the mainstream. Those papers were clearly identified as editorial, however. There was no illusion of peer review.

  13. I must say that as a reviewer I try to do it in one go, or at least over two days on a rainy weekend. It usually takes me about 4 hours, considering I only accept reviews on topics I know about. This means I refuse most invitations to review. I should add that I cold-heartedly rejected most of the papers I reviewed, usually for mainstream, average journals in my field, seemingly invited because authors recommended me or because the editors know me. I feel that after rejecting papers I am not invited anymore by the same people, which on my side I think is for the best, for I strive for sound standards in the published literature and I do not have much time to invest in many reviews. On the other hand, I feel the system favours “cool” reviewers, which I think is quite bad for science itself.
    As an author I usually recommend reviewers (because journals ask me to, though I do not agree with this policy) who are the best in the topic I am writing about. This frequently results in strict reviews and major corrections or rejection, but much to learn and think about. Also, I feel they get to know me (I certainly had my present supervisor as a reviewer). I have received light reviews before, particularly from a low-tier journal which traditionally had no proper peer review, though it is a traditional journal in my field. I took these lightly too, for the papers were indeed fine for their narrow scope – I would have accepted them there. I have had biased reviewers a couple of times, biased against me from a clear conflict of interest. In these cases I appealed to the editor and the papers were accepted with those reviews generally ignored – I gather in such situations I half-bypassed peer review.

    My feelings: I do think secret pre-pub peer review hides a lot of politics, thus I do not agree with it. As a result, I try to be strict as a pre-pub reviewer, which also results in my reviewing fewer papers. I have tried making the pre-pub peer reviews of my papers available on PubPeer, but this was frowned upon by my peers, thus I removed them. For the while, at least. I do not, however, overvalue pre-pub peer review, and rely on post-pub impressions of my papers.

  14. The trouble with “re-reviewing” a paper is that the field moves on, and it is very difficult to judge objectively how valid a paper is after subsequent information is known. Generally, high impact papers are published in high impact journals, and they tend to go to the trouble of soliciting multiple reviews from people with high standing. You might think that these people would be unlikely to give shoddy reviews, and hopefully this means that the “big” papers should get the most scrutiny. However, I recently reviewed a manuscript for Physiologia Plantarum, a low- to middle-impact plant science journal. I spent about 3 hours on the review, given that the paper was pretty straightforward. I was impressed to see that it was reviewed by three people in total, who were all unanimous in their decision – reject. We all had similar (and complementary) concerns. The validity of the work was not in doubt, but the authors could have made much more of it, and missed out important methodological details. Peer review, in this case, worked well. Meanwhile, a recent submission of mine to Plant Cell received only two reviews, one of which was fairly cursory and showed an unwillingness to really try to understand the work. (Outcome was reject with encouragement to resubmit, as is normal for TPC if there is any extra experimental work required). So rigour of review does not even necessarily correlate with journal standing.

  15. Yadda-yadda, and that is precisely why plant science needs PPPR (post-publication peer review) urgently: because the standards of peer review have varied widely over time, with different editors, and because the whole process in low- to mid-IF journals, as you indicate, is so variable, if not biased. The current system is simply not working very effectively, except perhaps for the highest-ranking plant science journals, but even so, recent events at The Plant Cell have shown that there is porosity (or more), even at the top. Some of my thoughts on this:
    http://journal.frontiersin.org/article/10.3389/fpls.2013.00485/full
    http://www.councilscienceeditors.org/wp-content/uploads/v37n2p57_59.pdf
    http://www.tandfonline.com/doi/abs/10.1080/08989621.2014.899909#.VSbbi8uJjIU

  16. STM publishing is a business, driven by financial interests. This fact allows peer review to be conducted in a manner less concerned with science and more concerned with pace: the mantra may be quantity over quality. The large publishers are pushing for content growth, and the administrators (not the editors) managing the peer review process would rather have the often-overworked editors accept or reject based on flimsy reviews than delay the turnaround times – keep that submission line moving! It is short-sighted – rigorous peer review, though perhaps slower, will eventually raise impact factor – but capitalism often is.

  17. I’m one of the science bloggers at the Guardian who co-wrote the article referred to in the above piece. In his comments to RW, Prof. Sigafoos states that he felt his quotes were taken out of context in our article. This is the first that I have heard of this – he has not contacted us directly. We don’t believe that we took anything that he said out of context, and in the interests of clarity, I’ve reproduced his original email to us below (from which we took the quotes). To give his responses context, I have included the questions that we asked in square brackets.

    —-

    Dear Dr. Etchells,

    Thank you for your questions and for the opportunity to respond.

    1. Question #1: [Why is there such a fast acceptance rate for these papers? For example, this paper (http://www.sciencedirect.com/science/article/pii/S0891422214003266) was received on 20th July 2014 and accepted 2 days later. Do these papers go through standard peer review channels?]

    The editorial policy that covers both RIDD and RASD are clearly stated on the journals’ respective websites as follows:

    Peer review policy:

    In order to maintain a rapid rate of review all submitted manuscripts are initially reviewed by the Editor in Chief for completeness and appropriateness to the journal’s stated Aims and Scope. Manuscripts that pass the initial review will be handled by the Editor, sent out to reviewers in the field, sent to an associate editor for handling, or some combination thereof, solely at the discretion of the Editor.

    http://www.elsevier.com/journals/research-in-developmental-disabilities/0891-4222/guide-for-authors#16000

    http://www.elsevier.com/journals/research-in-autism-spectrum-disorders/1750-9467/guide-for-authors#16000

    The times from submission to acceptance are consistent with this policy and reflect standard editorial policy.

    2. Question #2: [Some researchers have suggested that of 73 papers published in RASD and RIDD co-authored by yourselves and third author, Dr Mark O’Reilly between 2010 and 2014, 17 were accepted the same day that they were received, 13 within one day, and 13 within two days. (More information can be found here: http://deevybee.blogspot.co.uk/2015/02/editors-behaving-badly.html and associated data here: https://osf.io/6ctzy/?view_only=96c89a72c4b94da796f442ccc36b19b8 ). How would you respond to this claim?]

    The figures you state for 73 papers is routine practice for papers published in RIDD and RASD. A large percentage of all papers published in any given issue of RIDD and RASD appear to have received a rapid rate of review as indicated would happen in the official editorial policy of these journals. If you review a volume of RIDD for example (say 2010) you will note that approximately 50% of the papers received rapid review (approximately 100 papers) and a very small percentage of those papers were authored by us (3 papers I believe). Of course many of our papers had much longer lags between when they were submitted and when they were accepted. For example, a 2006 paper was received on 27 January 2004, the revision was received on 13 April 2004, and it was accepted on 24 April 2004. Of course, we have also had papers rejected.

    3. Question #3: [The example paper in question 1, as well as this paper (http://www.sciencedirect.com/science/article/pii/S0891422214004806) pertain to Alzheimer’s Disease, however they are published in Research in Developmental Disabilities. do you consider Alzheimer’s Disease to be a developmental disability? The NLM defines a developmental disability as involving impairments that originate before the age of 18 (http://www.nlm.nih.gov/cgi/mesh/2015/MB_cgi?field=uid&term=D002658). If it’s not the case that AD can be classified in this way, why were these papers submitted to (and accepted in) inappropriate journals?]

    While some of our RIDD papers did not focus on what some might see as the traditional types of childhood developmental disabilities, developmental disability could also be viewed from what we would see as a more contemporary life-span perspective. That perspective acknowledges that development occurs throughout the lifespan and thus a range of impairments or diseases can cause a disability that can affect development at any stage of life. This broader lifespan perspective is an emerging and evolving view. In the future it might be the case that we will be seen as contributing in some small way to this emerging and evolving view.

    Thank you for your patience and for the opportunity to answer your questions.

    Sincerely,

    Jeff Sigafoos, Giulio Lancioni
