Anonymous blog comment suggests lack of confidentiality in peer review — and plays role in a new paper

A new paper in Intelligence is offering some, well, intel into the peer review process at one prestigious neuroscience journal.

The new paper is about another paper, “Fractionating Human Intelligence,” published in Neuron by Adam Hampshire and colleagues in December 2012. The Neuron study has been cited 16 times, according to Thomson Scientific’s Web of Knowledge.

Richard Haier and colleagues write in Intelligence that

The main purpose of this report is to invite Hampshire and colleagues to respond to our specific scientific concerns that aim to clarify their work and contribute a constructive discussion about the meaning of their findings.

But it is their discussion of what happened during the peer review process and after the study was published that caught our attention. On July 13, 2012, Haier was asked to write a Preview to the paper, to be published along with the study in Neuron:

RH found many aspects of the paper quite difficult to understand and, more troubling, he worried that some main conclusions could be based on erroneous application and interpretation of factor analysis.

He had been given a short deadline, so he shared the manuscript with four colleagues — now co-authors of the Intelligence paper — “with considerable expertise in brain imaging and psychometrics, especially factor analysis.” They shared his concerns, and wrote the Preview together, submitting it on August 6:

… in a cover letter we informed Neuron that our concerns were so serious, that had any of us been original reviewers, we would not have recommended acceptance without major clarifications.

The journal wrote back the next day, and asked Haier and his colleagues to submit a more detailed critique to which Hampshire and his colleagues could respond. Haier et al. did that on August 16, but on October 31, Neuron

informed us that publication of the Hampshire et al. manuscript would go forward with some minor changes. We were also informed that, after considerable internal discussion, the editorial board had decided that our Preview would not be published; no reason was given. We objected and asked if we could submit a modified Preview based on the modified manuscript (which was not shared with us). Neuron declined. One editor asked to have a confidential phone call with RH and that call took place on December 2nd. RH respects that confidentiality and can only say that he found the editorial process and decision-making hard to understand.

When Haier saw an embargoed copy of the paper on December 18, he and his colleagues

were surprised to see that the final version did not address our concerns in any substantial way.

The paper — which was press-released by Hampshire et al.’s university, the University of Western Ontario — earned some press coverage, including from Neuroskeptic. And it was in the comments on Neuroskeptic’s post where things got interesting:

There were several comments that suggested knowledge of our unpublished Preview. We determined that a graduate student had overheard a relevant discussion and decided to comment on the blog anonymously without our knowledge. One commenter on the blog responded to some of the scientific critiques with a lengthy detailed technical argument (see Appendix D for the full comment). This detailed comment also concluded in part with these sentences: “Finally, a critical comment was submitted to Neuron however, there was no ‘conspiracy’. It was decided, based on feedback from an independent reviewer, that the author of the comment was heavily biased and that the criticisms raised were lacking in substance. Also, the authors of the article demonstrated that they were both willing and able to address all of those criticisms point by point if the journal chose to publish them.”

Obviously someone with inside knowledge of the review process wrote this comment. We sent this comment to Neuron and asked if it were true that our 20 detailed concerns were communicated only to one of the original reviewers who then determined our concerns did not have substance and were biased. We also requested that Neuron provide any written responses to our 20 points made by the original reviewers or the authors. Neuron replied that discussions were all by phone and there were no written responses. Neuron would not confirm that only one original reviewer determined that our concerns were biased or that they had not required a point-by-point response. Finally, we asked Neuron if we could submit comments on the Hampshire et al. paper under the category of “Viewpoint” or “Perspective” and allow the authors to respond. We felt that this would be constructive and educational. Neuron declined.

Worth noting: Both Neuron and Intelligence are published by Elsevier.

The whole situation seems almost like open peer review, only without the transparency.

Hat tip: Neil Martin


39 thoughts on “Anonymous blog comment suggests lack of confidentiality in peer review — and plays role in a new paper”

  1. It would really be malpractice for a journal to publish a “Preview” that rejects the validity of the paper it previews. It is appropriate for a Preview to raise issues, of course, but not to such a degree. If a journal accepted a paper by me and then added a Preview that shoots the paper down in flames, I would consider that I had been outrageously mistreated. Publishing their criticisms in a separate forum is the right thing for Haier and his colleagues to have done.

    1. I sincerely disagree. If there are real issues with anyone’s paper, then by all means that manuscript shouldn’t be published. Peer review is not error-free.

    2. “It would really be malpractice for a journal to publish a “Preview” that rejects the validity of the paper it previews.”

      Why?
      I can understand that two teams of neuroscientists can study the ‘same’ system, perturb it with a common stimulus, get different readings, and come to different conclusions.

      1. Yes, sure. But a Preview is intended to be a short introduction to an article, published in the same journal, in the same issue. Imagine that you had an article accepted by a newspaper, and then you found to your surprise that the editor had placed directly in front of it a paragraph saying, “The following article is full of shit”. Wouldn’t you be pissed off?

        The point is not that there is anything wrong with criticism, just that a “Preview” is the wrong place for it.

  2. What is the purpose of a “preview”? To tell the readers what to think about a paper?

    Seems pointless. Seems like something invented by a person with a degree in marketing. (I know: redundant).

    Solution: do not read previews. Do not write them if asked.

    1. A preview is meant to give context to a study, and make it accessible to a general audience which is not intimately familiar with the subject matter. While the overall tenor of these previews will likely be positive, criticisms or alternative interpretations can certainly be raised, so previews don’t always echo the views made by the authors of the previewed study. If properly written, they can be highly informative.

  3. Are there any ethics issues if Neuron were to write to the reviewers of Hampshire et al.’s manuscript and ask them to rebut the proposed Preview? If the reviewers cannot satisfactorily respond, can Neuron not pull the paper based on poor reviews and, as a result, pull those reviewers out of its database? Wouldn’t this eventually clean up the peer-review system? And maybe journals should stop requesting authors to submit a list of possible reviewers. Why would any author give names of researchers who disagree with their model/method?

  4. This seems much ado about nothing (and again, a highly misleading Retraction Watch headline). Neuron often asks experts in the field to write short previews of manuscripts to be published. It’s understood that these previews should be mostly positive. I agree with BS in that nobody should fault Neuron for their decision not to publish Haier’s piece, or a follow-up viewpoint. It’s within their editorial discretion. It’s also clear that at the moment the preview is requested, peer review of the paper is over, so no confidentiality should be assumed. Is Haier surprised that his comments were relayed to the authors as well as to an original reviewer under his name? Again, that’s standard operating procedure, especially in cases where significant questions are raised about the validity of the paper. As for the anonymous comments, it’s certainly bad form to post lab-internal discussions, but again, this seems to have happened after formal peer review was over and no names were mentioned. I can’t speak to the issues that this paper might have, but other than an overly chatty lab member, everything was played by the book.

  5. I have had a similar experience as reviewer of a paper in a prestigious Elsevier journal. The paper was published without serious consideration of very significant flaws in the argumentation. My comment addresses the general problem that “prestigious” authors are accepted by prestigious journals without serious review, or in contravention of very negative reviews.

    Albert Gjedde MD DSc FRSC FACNP MAE
    Professor and Chair
    Dept of Neuroscience and Pharmacology
    University of Copenhagen

  6. All the trouble was caused by the failure of Neuron’s editors to ask someone with psychometric expertise to review the manuscript at the outset. They asked Haier to take a look at the paper after they had already accepted it for publication, which is pretty strange. The paper’s problems are so glaring (e.g., the circular use of factor analysis, the confounding of within- and between-individual differences, the lack of engagement with previous literature) that had the reviewers been equal to the task, the paper would never have proceeded to the publication stage without major revision.

    1. Translated: another failed traditional peer review. Why is there such a rush to publish? The preview option would actually be, in my opinion, an excellent supplement to traditional peer review. Instead of pumping out 1000 papers a year, for example, each reviewed by only 2-3 peers, would not 500 GOOD, well-reviewed papers, vetted by 5 or 6 peers and professionals, be a better option nowadays, given this uncomfortable situation in another two Elsevier journals and the retraction risks? The mad race to publish, the under-pressure oversight by editors, and the publisher’s inability to provide a new framework that allows post-publication peer review to supplement the traditional model are causing the rapid degradation of science publishing, and the loss of confidence in what has already been published.

      1. “given the retraction risks”

        I think the retraction risks have gone down steeply since the popularisation of this blog. In a couple more years I am sure this could be shown by plotting the number of retractions by year, but I think by then it will be too late to realise it. Publishers did not like the kind of impact generated by retractions, and now tend to quietly mend published failures. I say let us just ignore editorial decisions, including retractions. The scientific community must have its own independent means of rating literature as useful or useless, and I think the best filter is by far PPPR.

        1. CR, elsewhere, I have defended your call for PPPR, and agree fully with this approach as the alternative model. However, this does not relieve the publishers (or scientists) from having their records closely examined. We are all victims of this system and we are all being closely examined, all the time, so although I agree that going forward we need self-publishing, PPPR of the back literature is essential. But PPPR must be fair, in the sense that when examining one aspect of one publisher, the same aspect needs to be examined evenly across the board for all publishers. If this hand-in-hand process does not take place, then moving forward will be like advancing with a limp, or a crawl. There is no doubt that publishers, especially the established ones, are very conscious of the risks that retractions pose to their images, but that does not excuse them from escaping fine-scale scrutiny. Personally, this is a trying time not only for me, but also for many scientists I know and even critique, many of whom are suffering in silence and will refuse to speak publicly about their professional ordeals here at RW or on other popular science blogs. I cannot disagree with your comment “let us just ignore editorial decisions”, because it is these editors who currently hold (and have been holding) the reins of power (and quality control) in the traditional peer review system, and they must be held accountable. They are in fact directly responsible for having let through so much work that, most likely, should not have been published.

          1. “these editors who currently hold (and have been holding) the reins of power (and quality control) in the traditional peer review system”

            That is exactly a vital point. I think we scientists have given them these reins for a traditional peer review system which is clearly ineffective and fallen. I am sure these reins are retracting and not pulling the horses so well anymore, but I think great good (meaning less damage to good science) could be done if cutting the reins were a conscious act.

            Retractions and corrections are clearly not doing a good job of selecting useful scientific literature, and are being used for short-term political goals, with the former decision being taken less and less often. I think they are but noise compared to the true judgement of peers (who can also be wrong), which should be made public for posterity as often as possible, as PPPR, for the sake of good, evolving science.

          2. CR, in theory, we are perfectly aligned. However, let’s take an established journal in the Elsevier, Springer, Taylor and Francis or Wiley-Blackwell fleet, for example. We have a paper published in a journal with a respectable impact factor. There is a respectable editor board and the papers were peer reviewed. This is the classical landscape. But peer review is porous and imperfect, and somewhere down the line a reader picks up some problems: data that doesn’t make sense, methodology that can’t be repeated, or, in a worst-case scenario, a duplication or serious plagiarism. In the latter two cases, the publishers have a fairly good set of criteria and are implementing them now, but for the former cases, the decision as to what is good or bad, effective or not, is still extremely subjective. It depends on the editor’s knowledge, on the depth of understanding by the peer and his/her credentials, and on the feeling and mood of the editor-in-chief when he dishes out the acceptance or rejection e-mail. It is likely that we are not going to be able to change this system so quickly, or that soon. Very few scientists I know who have a position in an institute want radical change. They want change, but not radical change, as you or I are suggesting. However, scientists are very much like sheep in so many ways. And, if there is initial momentum, there can be a reasonable following, but not all. I believe that self-publishing will only be effective in the next few years for marginal topics that cannot find a home in narrow-scope journals or where the ideas are too opinionated (such as an increasing number of my own ideas and papers). Yet the majority will still seek to place their papers in traditional journals that still command respect from the peer pool. At least in plant science, including horticulture, this is the case. It is an extremely conservative bunch.

            The agricultural sciences are more flexible, I feel, but that could be because there is a large contingent of scientists emerging from the developed countries who are willing to break away from the norm and establish their own open access outlets to express their data, but, as evidenced in many of these start-up operations, at the serious expense of quality control. Ideally, the big 4 publishers I list above should run a simultaneous system alongside their traditional one. For example, next to the published PDF, with its independent DOI, each paper could also have a second PDF, with a separate DOI, which represents the PPPR of that paper. Every 3 or 6 months, the PPPR PDF could be updated. And in that PDF, which retains the same DOI, comments that have been verified and moderated would reflect the peer pool’s opinions about that paper, primarily criticisms, but also allowing for confirmation of the methodology. I have already suggested this idea to Elsevier Ltd. and to Springer Science+Business Media. It would be the perfect system, I believe, yet my ideas have been rejected. Why? Because it will over-work the already stressed editor boards? Because it might cost a bit more money to secure the quality of what was already published? So it is not only the peer pool, with its traditional stance, that is responsible for this resistance to moving forward; it also emanates from the publishers’ lack of desire to embrace the concept. I suspect that this could be because the second PPPR PDF (which should be open access) + DOI would incur additional costs, and would not bring revenues. However, the publishers should understand that it would increase confidence in the system and bring back authors and readership that are moving away to alternative competing OA publishers. The scientific community is not interested in destroying such publishers; it is interested in reforming them, based on its inherent needs.

            When scientists get banned because they are critical, because they discover the weak links in the editorial chain and get punished for their opinions, even if the tone is not the most palatable, then we can conclude that there is a crisis.

    2. The points Peter makes are probably the best argument for open peer review here – which I agree entirely should be the way forwards. The two supposedly ‘glaring’ problems were of course queried during the review process and dealt with in the responses – the article could not have been accepted for publication in a journal like Neuron otherwise.
      The former (e.g. the possibility that factors should be otherwise orientated) is easily ruled out by a subsidiary region-of-interest analysis (in the discussion section) that looked at task vs. rest activation levels in the anatomically distinct subregions of MD cortex, as opposed to PCA or ICA components. There are no factor orientations in this analysis, there is no evidence of a higher-order network, and there is a close replication of the results from the ICA. However, this point is not discussed in depth in the article as we considered it to be obvious – clearly it wasn’t obvious enough.
      The latter (‘confounding’ of within- and between-subject differences) is not an error at all; it is quite deliberate, with a reasoned rationale that is discussed in depth in the response that has been submitted to Intelligence and in the response to the prior comment of Ashton et al. Well-established intrinsic functional networks within the brain activate in a dissociable manner across task contexts. The task-behavioural component loadings in the large internet-based cohort reflect this pattern closely when analysed in a completely independent manner. The correlation between the two provides statistical evidence that this conformity is extremely unlikely to have occurred due to chance alone; therefore, the two independently conducted analyses provide quite distinct forms of information regarding what are highly likely to be the same cognitive systems. Notably, behavioural individual differences and group-level neuroimaging data have very different strengths and weaknesses, so these forms of information are complementary and, we argue, allow novel insights to be gleaned by combining them.
      These points should perhaps have been discussed in greater depth, but then there is very limited space in the top-tier journals. Better still, online publication of the reviews and responses would have provided a more transparent window into the process and better discussion of unusual perspectives/novel analysis approaches. Of course, such debate belongs in the published domain, so readers can judge for themselves one way or another based on the full arguments.

  7. I don’t know Hampshire and I don’t know Haier. However, this sounds like good old professional jealousy and rivalry with loyalists on both sides. We all know these situations. I would like more honest discussion of this critical factor.

  8. I’m very confused by this headline.

    As far as I am aware, peer reviewers are generally forbidden from discussing the content of a submitted manuscript (unless the paper is published, in which case comments on its final form are OK, or for the few open-review journals where reviews are published publicly). However, when I have submitted manuscripts, I have never received a guarantee that the editors/journal would not disclose anything about the peer review process. In fact, there seem to be many instances where editors comment broadly on the peer review process for highly contentious articles, as was (possibly) done here.

    This particular instance feels like another case of people deciding that things published in a scientific paper have to be 100% correct or the paper can’t be allowed to exist. The authors here need to give the scientific community some credit. Neuron rejected their paper. Fine; that’s Neuron’s editors’ problem and prerogative. The proper thing to do is to submit somewhere else. If, in fact, the conclusions of the Neuron paper are biased and faulty, then people will begin to think of Neuron as a flashy journal that lacks respectability. I simply do not understand why the entire story needs to be played out over the course of a journal article. The comments on “the [sic] Neuroskeptic” ’s blog (which, I was under the impression, is a pseudonym and should not have a “the,” correction anyone?) are absolutely irrelevant.

    Either A) the science is reasonable as told in the paper or B) it isn’t. If (A), then the editors acted correctly. If (B), then Neuron is a bad journal run by bad editors. Exactly what transpired between submission of this critique and its ultimate rejection is absolutely irrelevant. Imagine if every time an editor or reviewer rejected a paper, people published the whole story as a preface to their paper in the journal that did accept it. Finally, how the press release was handled? Again, who cares. These authors act as though once something has been published, people will treat it as real/fact… I would argue that anyone who takes any piece of science prima facie lacks intelligence entirely, so who cares what those folks are told.

    The existence of a paper proving my divinity does not prove my divinity, and anyone who believes that I am divine because someone told them that I am is a fool, as is anyone who really cares that that paper exists. That doesn’t preclude a non-fool arguing against my divinity. In fact, a non-fool would probably appreciate the existence of said paper because it would give her a simple forum to rebut the finding with logic.

    1. You’re correct. I am Neuroskeptic, with no “the”; but either way is fine by me.

      On the paper, what makes this story unusual is that Neuron requested a Preview from someone and then spiked it. This is not unheard of, but it’s uncommon enough to warrant a comment. I agree that it’s not really a story about peer review per se.

      1. Perhaps I am biased, but as the first author on the original article, I would suggest that what is unusual is the extent to which RH and colleagues have acted inappropriately in this instance. RH was asked to draft a preview, but instead, attempted to subvert the peer review process by angling for the already reviewed and accepted article to be rejected based on a broader uninvited review. This is highly suspect, because the results, if true, do not fit at all well with RH’s approach of seeking the neural correlates of a unitary ‘g’ factor. Indeed, consider, if all articles were subject to broader review, work on this type of controversial topic would never be published at all.
        If they disagreed with the article, they should have submitted a comment, which along with a response, would have instigated an interesting and constructive debate. Instead, they have then gone on to spread rumours on blogs, publish personal email correspondence without permission and attempt to paint some bizarre picture of a conspiracy relating to the transparency of a review process that they were never part of. Publishing in Neuron is extremely difficult, why should the editors spare space for a sub-standard and inappropriate preview article? Clearly, the editors dealt conservatively with what was a difficult and unusual situation.
        Intelligence have chosen to publish RH’s comment ahead of the invited response, which is unintentionally misleading; a response was submitted to Intelligence just weeks after invitation – it addresses all of the comments raised by RH in detail. This was not a difficult job, because most of these comments had already been dealt with during the multi-stage review process, which was conducted by three reviewers who were all clearly experts in this area of research. These queries included why it is informative to compare task-behavioural component loadings with task-brain network activation levels, and how it is possible to justify the choice of ICA as opposed to unrotated PCA components when analysing the brain imaging data. These points, which were touched upon briefly in the original article, are discussed in much greater detail in the response that has been submitted to Intelligence, and I hope that they will lead to a useful and informative debate.
        Science is an evolution of ideas; we should not seek to suppress novel ideas that run counter to our own work (which for RH and colleagues, as supporters of unitary ‘g’, is the case). Question them and debate them in a polite and respectful manner? – Certainly. Is the article going to be retracted? – Not a chance. Replicated, strengthened and extended? – Yes, in preparation.

        1. To be clear, I did not (and could not) comment on the actual quality of your paper.

          You do make a very good point about science being an evolution of ideas. I suspect that if all papers were subjected to open peer review before press, with a single “no” vote from any expert being sufficient to prevent publication, there would be very few articles in press at any given time.

          I am all for publishing things that others might not like, and there is a reason why many journals allow authors to exclude people as reviewers. If work that is low quality (but not fraudulent or stolen) gets published, that’s completely fine… it’s out there and can be refuted.

        2. Here are some facts to consider—we submitted our invited Preview with the expectation that the original paper would be published as scheduled and we wrote the Preview to provide readers with a context to judge the conclusions it presented. It is worth noting the conclusion of our Preview—“… This [Hampshire et al] study is based on an interesting data set and we applaud the effort. For the reasons stated, however, we think the definitive tone of the interpretations and conclusions is not justified.” We thought that the joint publication of the two pieces would be informative to the field. We don’t see how this would “subvert” the review process.
          Subsequently, the Neuron Editors asked for a detailed rationale for our skeptical Preview. It was in response to this request that we sent 20 points—10 of which called for clarification of methods and procedures (e.g. what were the ages and sex of the 16 imaging subjects, and could restriction of range be a problem) and 10 that we characterized as potentially major flaws. It was our stated intention that addressing these points prior to publication would strengthen the paper, and that had we been reviewers we would have insisted on these points being addressed before publication. Readers can judge whether our concerns have merit or, as alleged, are biased or an effort to “subvert” the review process. In any case, our concerns about the review process are with Neuron because they have declined to tell us how they reached their decision about our Preview and whether our 20 points, sent at their explicit invitation, were considered by any process independent of the original reviewers, who, of course, may not have been the most objective.
          Finally, Adam makes some serious charges in this blog—contrary to the charge of bias, we ourselves have challenged the nature of g—in fact, we published a paper in 2009 that raised questions about whether there was a unitary g on the neuro level based on two independent samples and MRI data from 140 subjects (Haier et al, Intelligence 37, 136-144); our 2007 review paper discussed multiple brain networks related to g (Jung & Haier, Behavioral & Brain Sciences 30, 135-154). The conclusions reached by the original paper in Neuron actually are more consistent with our views than Adam recognizes. Nonetheless, in our view, none of the data to-date on this issue are definitive one way or the other. As for spreading rumors on blogs—this is absolutely false, as is the charge that we have published personal emails without permission. We have never advanced any conspiracy theories about the review process in any forum, but we would like to know what the process was with respect to our Preview and 20 concerns. That’s something only the editors at Neuron can clarify and we hope they do so. As for the substantive issues about data analyses and interpretations, we refer readers to the exchange now in progress in Intelligence.

          1. I wonder what percentage of rejected papers at Neuron actually get real, explicit and complete reasons for rejection… probably very few. Rejections that I’ve seen often boil down to: “this material is not of sufficient interest to our readers” rather than objective scientific concerns.

            High impact factor journals will never be able to retain their status if they don’t retain the right to reject papers “Just Because.” Anyone who decides to play the game by submitting to these journals (invited or not) must be willing to accept that explicit, scientific reasons for rejection will rarely be given, because they present the opportunity for refutation. Otherwise, there are plenty of journals without “just because” rejection that you can choose to publish in.

            Further, every journal must set an arbitrary quality threshold a paper must clear to merit publication. No paper is perfect. What bothers me is that you have put editorial procedures and random comments into the scientific literature that have zero bearing whatsoever on the quality of the paper in question, and act as if there is some conspiracy beyond “the editors of Neuron didn’t like what I submitted because it wasn’t in their interest to publish it, so they rejected it.” Which pretty much happens all the time. At least they didn’t sit on your comments in review while someone on the editorial board published a similar critique.

          2. Well… none of my comments were really unfounded given the article in press. E.g., regarding angling to subvert the review process, the letter to the editor at Neuron, which has been published online in Intelligence, concludes “our recommendation would be REJECTION”. This seems like a pretty unambiguous attempt by the “previewer” to get an article that had been accepted based on peer review rejected. In terms of bias, perhaps I am wrong, but the language and general response seem rather strong for someone who is not heavily vested in one side of an argument; it certainly gives the impression of bias, even if this was unintentional (e.g. ‘conceptual confusion’ and ‘we applaud the effort’ – such comments add little to a scientific debate). I rather suspect this sort of inappropriate wording contributed to the preview not getting published in the first place. As for the denial of publishing email correspondence – seeing the text in the section that starts “Over the last year, we have exchanged a series of emails with Dr. Hampshire.” certainly came as a bit of a surprise.
            Publishing work that challenges mainstream views is already exceedingly difficult; allowing other groups to wade in after the review is complete and veto an article would make it pretty much impossible. This wouldn’t be good for scientific debate at all.

          3. “This seems like a pretty unambiguous attempt by the “previewer” to get an article that has been accepted based on peer review rejected.”

            To me, it sounds more like a way of justifying the non-laudatory tone of the preview.

            “the language and general response seems rather strong for someone who is not heavily vested in one side of an argument, it certainly gives the impression of bias…”

            Haier himself published a paper questioning the unitary nature of g? Haier simply thinks that your conclusions are unjustified because of his impression that your methods are seriously flawed (whether they are or not is another issue).

            “I rather suspect this sort of inappropriate wording contributed to the preview not getting published in the first place.”

            I am not sure where I stand regarding the wording. However, I disagree with you; I believe Neuron rejected the invited preview simply because it raised potentially serious issues with your paper, period.

            “With the denial of publishing email correspondence – seeing the text in the section that starts “Over the last year, we have exchanged a series of emails with Dr. Hampshire.” certainly came as a bit of a surprise.”

            Did Haier actually publish the correspondence? With this in mind, I am not sure I see how this section of his published comment disparages you in any way, and I don’t see why you are playing holier-than-thou here.

            “Publishing work that challenges main stream views…”

            Your work does not challenge mainstream views; the non-unitary nature of g is, in neuroscience, the mainstream view. Further, if anything, it is easier to publish work that supports your perspective, because this perspective is the politically correct one.

            My understanding of Haier’s perspective (and I fully acknowledge being out on a limb here) is that g may or may not be unitary. You seem to be claiming that you have finally proved that it is not unitary and, in so doing, have ‘debunked’ IQ. Whether you have or not, I won’t say, but for Haier, you don’t seem to have proven anything. From his perspective, I can see why he responded strongly to your paper.

          4. I would like to clarify the above post.

            The following:

            “the language and general response seems rather strong for someone who is not heavily vested in one side of an argument, it certainly gives the impression of bias…”

            Haier himself published a paper questioning the unitary nature of g? Haier simply thinks that your conclusions are unjustified because of his impression that your methods are seriously flawed (whether they are or not is another issue).

            Should be replaced by:

            “the language and general response seems rather strong for someone who is not heavily vested in one side of an argument, it certainly gives the impression of bias…”

            As you have surely read in Haier’s post, he himself published a paper questioning the unitary nature of g. To me this is not compatible with his being heavily vested in one side. On the contrary, Haier seems to me to be holding a middle-ground position, while you seem heavily vested in one of the poles. Haier simply thinks that your conclusions are unjustified because of his impression that your methods are seriously flawed (whether they are or not is another issue).

          5. So let me understand: none of your accusations were “really unfounded”? Seriously? You accused us of publishing private emails without permission. You now acknowledge this is not true; our actual transgression was alluding to the fact that we had corresponded with you. You don’t see a difference? You charge us with subverting the review process because, when Neuron asked for our detailed concerns, we gave them? You think we used “strong” language when we applauded your efforts? Based on my 25 years of experience with brain imaging data, I do applaud your efforts; this is complex and time-consuming work. It is also my long experience that makes me skeptical of any imaging results that claim to be definitive (would you care to defend the claim you made about how your results debunk IQ once and for all?), especially when the key result is based on only 16 poorly described subjects (i.e., no ages or sexes given). In this case, as David C pointed out in his comment of 4/19, your findings, even if as flawless as you believe, are not new and do not challenge the mainstream view about a neuro-g, something that neither you, your co-authors, nor the three original reviewers seem to understand, although we provided a number of relevant references prior to publication. You are not even the first to report factor analyses of brain imaging data related to the g-factor. Perhaps you would care to rethink your accusations about our alleged bias as defenders of a unitary neuro-g, our motivations, and our integrity (your claims now all debunked with facts), especially as someone who publicly described our work as “substandard and inappropriate”, “angling to subvert” the review process, promoting “bizarre” conspiracy theories, acting “inappropriately”, and attempting to “veto” your paper. Is this the “polite and respectful manner” of debate you had in mind? And the worst example of strong language on our part is that we applauded your efforts? Seriously, Adam?

            Here’s the main issue: does the integrity of the review process demand that journal editors ignore any concerns they acquired through efforts they themselves initiated after a paper is accepted but before it is published? Or does it require that editors use due diligence to investigate the new concerns? Our issue with Neuron is that we have no indication of how our concerns were considered (not even whether all three original reviewers saw them, let alone any independent consultant), or whether they spiked our Preview because the authors insisted that once a paper is accepted no further considerations should be allowed. This is a broad issue about the proper amount of transparency in the review process, and one that we wish Neuron, and readers of this blog, would comment on in general. By the way, we never asked Neuron to reject the Hampshire paper (see the full context of our letter in the online Intelligence Comment; link at the top of this blog), but we did think our Preview added a worthwhile perspective. That’s why we have published an account of our experience in Intelligence and offered to debate the scientific issues in that forum.

            Finally, had the situation been reversed, I can say without hesitation that we would have welcomed the chance to clarify any issues raised by knowledgeable people in the field prior to publication, even if our manuscript had already been accepted. What researcher wouldn’t? Would any journal editor disapprove?

          6. Richard, my earlier post caused you offence, and I apologise unreservedly. I do, of course, welcome the scientific debate. As you know, I had independently suggested that your comments on this article might be published, point by point, alongside responses, and I have subsequently encouraged this at every step.

          7. Isn’t it time the editors at Neuron weighed in on this? If neither of the authors (of the paper and the Preview) is at fault for this mess, there must be a third party, right? The discussion clearly shows something doesn’t add up. Did the editors, or whoever is in charge at Neuron, do something wrong?

          8. Thank you, Adam. My colleagues and I sincerely accept your apology. We are looking forward to exchanging technical views through the forum provided by Intelligence, and to your future work as well.
            When the editor of Intelligence, Doug Detterman, received our Comment more than two months ago, he not only invited you to respond, he also invited the editors at Neuron to offer comments. We appreciate your willingness to do so. As of today, however, Doug tells me that he has not had any response whatsoever from Neuron, not even the courtesy of declining to comment editor to editor. The issues most relevant to the commenters on this blog, I believe, concern the non-transparent editorial process at Neuron surrounding our Preview and how they considered the 20 detailed concerns we provided at their request. We would be pleased and satisfied to learn, for example, that Neuron sought comments from someone other than one or more of the original reviewers. Do you know if this was the case? Can you say how many reviewers or editors, if any, discussed our concerns with you? We would be grateful for any other details you can provide about this aspect of the process, if you are comfortable doing so. Since Neuron apparently will not comment, you are the only reliable source of information that we feel is important for understanding the review process at this important journal.

    2. If, in fact, the conclusions of the Neuron paper are biased and faulty, then people will begin to think of Neuron as a flashy journal that lacks respectability.

      Unfortunately, I don’t think that ever happens. Prestige journals publish flashy rubbish all the time, but it has not made them disreputable — there is no shortage of people who want to get published in such journals.

      1. Alternatively, between 1994 and 2005, most journals saw a slight increase in impact factor. However, at least one dropped from very high (>~30) to low (<~0.5), and quite a few took hits that were not small (Althouse, 2009).

        I’m not suggesting that a few bad apples (or this particular paper) *will* spoil Neuron, but some critical mass in theory could. The more people who stop respecting a journal because of poor scientific quality, the less well regarded it becomes. Additionally, I assume there is some degree of “I actually believe that paper, so I’m more likely to cite it.” (Of course there is the converse; for example, this particular paper is being cited rapidly because people disbelieve it.) Over time, the effects of publishing low-quality work may come to affect reputation.

        Personally, I’d be very curious to know whether anyone has attempted to study the effects of retractions, expressions of concern, corrections, and/or some metric of scientific agreement on journals’ impact factors over time, along with some case studies of what happened to the journals that saw the biggest changes.

  9. What is going on here is a battle between the supporters of the “g” factor, the old school, and the supporters of a position that sees “g” as an artifact at best and conceives of intelligence as a collection of independent components. This latter view has been picking up steam and makes a lot of sense from an evolutionary perspective, so it’s only a matter of time before it wins the war. And it’s not just a war of theories; it’s an ideological war, when you look into it.

    1. The dispute around the Hampshire et al. paper has nothing to do with the fact that it presents a non-g theory of intelligence. Well-formulated challenges to the g model, such as this one, cause no controversy. The problem with Hampshire et al. is that it is riddled with methodological and conceptual errors, and makes wide-ranging claims unsupported by its analysis.

      The “g as artifact” idea is almost as old school as the g theory, and, if anything, the g model has been gaining ground in recent times. I think the main problem with most challenges to the g model is that they don’t even try to account for known empirical facts about g aside from the “positive manifold”. For example, genetic effects on cognitive abilities are largely general rather than specific, just as predicted by the g model.

      1. “Well-formulated challenges to the g model, such as this one, cause no controversy.” Then why didn’t this well-formulated challenge end up in Neuron? Probably the pro-g gang was able to reject it when it was submitted there, despite its being a well-formulated challenge! Academic politics in science is not something the public knows much about, but that is the core of the real corruption in the system, not the occasional fraudsters.

  10. I have to agree with Bill Skaggs. While I’m fully in favor of criticism and open discussion, co-publishing indictments alongside manuscripts would not encourage people to submit their work to that particular journal, and thus would not be a sustainable practice. Most authors would just send their work somewhere else the next time.

    What I find odd here is that most highlights/previews/etc. are written by one of the paper’s original reviewers, so you know what to expect. Why did they ask someone else? Not that I’m against that, but it strikes me as unusual. Then again, I don’t publish in Neuron.

    1. Why do you find this odd? You have answered your own question: based on the confidential review from a particular referee, the editors at Neuron will have a good idea of what to expect from a potential preview, and since very tight deadlines are involved, there is probably not enough time to find someone new who has the time and expertise to thoroughly examine a manuscript and write a balanced, well-articulated piece about it. Also (stating the obvious), finding a paper suitable for publication doesn’t necessarily mean complete agreement on all points, and in my experience Previews at Neuron are rarely dim-witted puff pieces that take up space.
