JPET corrects Janssen antidepressant paper after neuroscience blogger notes errors

Are drug company R&D departments reading blogs?

In a recent paper, “Translational evaluation of JNJ-18038683, a 5-HT7 receptor antagonist, on REM sleep and in major depressive disorder,” researchers with New Jersey-based Janssen Pharmaceuticals Inc. tested whether JNJ-18038683, a candidate drug that binds to a receptor linked to depression, could actually help patients.

Turns out, the drug was a flop. But in the paper, published May 8 in the Journal of Pharmacology and Experimental Therapeutics (JPET), the authors, led by Pascal Bonaventure, made two mistakes, originally noted by U.K.-based blogger Neuroskeptic: The abstract stated that treatment with the drug was “statistically significant” compared with a placebo (it wasn’t), and the group mistakenly referred to citalopram (Celexa) as a positive depression drug control when they really meant escitalopram (Lexapro).

On May 23, the journal quietly corrected those mistakes, as Neuroskeptic noted, but it did not indicate that any changes had been made to the paper. On Wednesday (June 20), following an inquiry from Retraction Watch, the journal published a correction:

The Fast Forward manuscript version of the above article published on May 8, 2012 [Bonaventure P, Dugovic C, Kramer M, De Boer P, Singh J, Wilson S, Bertelsen K, Di J, Shelton J, Aluisio L, Dvorak L, Fraser I, Lord B, Nepomuceno D, Ahnaou A, Drinkenburg W, Chai W, Dvorak C, Carruthers N, Sands S, and Lovenberg T, J Pharmacol Exp Ther, doi:10.1124/jpet.112.193995] was found to contain errors.

For all of the preclinical studies and for the two polysomnograph studies carried out in healthy subjects, JNJ-18038683 was compared to citalopram. For the clinical trial in patients suffering from major depressive disorder, escitalopram was used as an active comparator. Escitalopram was used as an active comparator in order to set the clinical bar as high as possible.

While preparing the manuscript for submission, the authors accidentally converted “escitalopram” into “citalopram,” and the mistake was not caught during the review process.

In addition, “statistically significant” was not deleted from the abstract of the revised version of the manuscript. The post hoc analysis was statistically significant on the HAMD-17 scale but not on the MADRS scale. Since the MADRS scale was used as a primary end point, “statistically significant” should have been deleted from the abstract.

A corrected version of the manuscript was submitted to the Editor, reviewed, and posted on May 23, 2012. The differences between the two versions of the manuscript should have been noted at that time for readers. The authors and ASPET regret these errors and apologize for any confusion and inconvenience this may have caused.

Richard Dodenhoff, journals director with JPET’s publisher, the American Society for Pharmacology and Experimental Therapeutics, tells us:

The corresponding author caught the error and brought it to the editor’s attention. We should have noted the differences between the two versions when the second was posted online. We have done that now.

He followed up:

We have no reason to believe that they were anything other than editing mistakes made while preparing the manuscript for submission. In the days before we published accepted manuscripts online immediately, these types of errors would have been caught at the page proof stage and would never have seen the light of day.

We tried reaching corresponding author Pascal Bonaventure to ask whether Neuroskeptic’s blog post was what alerted the team to the errors, but haven’t heard back. However, Greg Panico, who handles communications for the Janssen Research & Development Neuroscience Therapeutic Area, gave a “Let Me Google That For You”-worthy response:

We learned that the journal posted a revision notice earlier today. There is a link to the notice from the article where it is listed among the “Fast Forward” papers. The link also appears in the middle column of the Web pages showing the abstract and the PDF.

Although the correction may have nothing to do with Neuroskeptic’s post, the timing is suspect. Neuroskeptic — whose blog is among our favorites — tells Retraction Watch:

No-one from Janssen or JPET contacted me & they haven’t since, so no I’m not expecting any follow-up. So it’s possible that they noticed the errors themselves and the correction had nothing to do with me – although the timing fits, and they only corrected the 2 issues I noted (even though there’s lots of other typos!) so I’m 90% sure they did.

I noticed the errors myself. Readers quite often tip me off about such things, but in this case, I spotted it myself, when I was reading the paper. I can’t remember why I decided to read it… I think it was because it had come up on a PubMed search for “antidepressants”.

No I’m not a competitor! I’m just a postdoctoral neuroscientist. I try to keep up with the antidepressant literature although it’s not my field, mainly because I take antidepressants myself, also they tend to make good blog subjects! Lots of controversial research.

But in this case it didn’t take any expertise specific to antidepressants, the citalopram/escitalopram thing was totally random (I happened to mouse over the “invisible” text box!) and the abstract clashing with the results was just me paying attention.

I don’t pay that much attention to most papers, mind you; I was giving this one a close look because they cited the “enrichment” paper, which I really hate.

For Lynn Wecker, a distinguished professor of psychiatry and neurosciences at the University of South Florida College of Medicine who just ended a year as president of the American Society for Pharmacology and Experimental Therapeutics, which publishes JPET, the case may be the tip of the iceberg for the scientific literature. She tells Retraction Watch:

Speaking as a scientist for more than 30 years, and not as the current (for 11 more days) President of ASPET, I believe that the peer-review process today is not what it was years ago. I read many papers in the literature that are missing important information, that have contradictions, that come to results not justified by the data, etc. I could go on and on and on. But science today is not what it was years ago, and unfortunately, therein lies the problem. I think that investigators do not take the time required to thoroughly review manuscripts because they are too busy, often resulting in ‘garbage’ in the literature, perhaps reflecting reviews by graduate students or postdoctorals and not peers. I also think the ‘good old boys’ network is alive and well and that many individuals are in a big rush to publish without attempting to ascertain whether their data is reproducible.

In the past year alone, my lab has been unable to reproduce data published by other groups, and upon speaking with several of these authors, learned that their ‘methods’ were not accurate. Further, what is most troublesome is that they know it and yet continue to publish the same information.

(If JPET sounds familiar, by the way, it may be because it was the subject of yesterday’s post, too. That’s complete coincidence; we just happened to be working on both of these at the same time. And we should note that showing up in Retraction Watch more often could just reflect more of a willingness to correct the record than other journals show — something we applaud.)

9 thoughts on “JPET corrects Janssen antidepressant paper after neuroscience blogger notes errors”

  1. Good comments by Wecker. Very true. Of course, it raises the question: if her group has found a number of errors, what specific steps has she taken to correct the literature on such occasions?

    As for the blogosphere and corrections, it is remarkable how many problematic papers have been identified and rigorously discussed in this ‘unofficial’ forum, and yet how few editors do anything about it. Perhaps the publishers need to employ some individuals to patrol the blogosphere and then act on suspect articles accordingly. Ignorance is no excuse. The responsibility lies with the publishers to ensure their product is of the highest possible quality.

      1. The bloggers should probably be made the editors of most journals. There generally seems to be an inverse correlation between integrity and intelligence and the likelihood of being made a journal editor or a member of the editorial board.

  2. Once again, we come back to the conclusion that good copy editors are worth it. (I work for a medical magazine. We don’t put anything out – print or online – if it hasn’t been through at least one copy edit, precisely to prevent, you know, ERRORS.)

    Yes, scientists should be checking the science of these papers, and it’s definitely a problem that people are not. But a lot of these errors don’t need a PhD to catch – they just need a careful copy editor with a mind for details.

  3. Sure, Lynn Wecker spots THE key problem. However, I disagree with her that “science today is not what it was years ago.” Science doesn’t move, while research (or doing Science) does. Remember: 30 years ago, Ctrl+C and Ctrl+V shortcuts didn’t exist!

  4. Thanks for the link.

    Lynn Wecker may be a little pessimistic. While it’s true that there is a lot of crappy research today, some of the worst abuses of the past have been avoided; e.g., mandatory trial registration has made it much harder for companies to bury research showing that drugs don’t work.

    Personally I’m cautiously optimistic that the many problems with science today can be remedied and will be, sooner rather than later. So long as scientists are out there trying to make science better, anything is possible.

  5. I got a chuckle out of Lynn Wecker’s comment about the good old days thirty years ago. I don’t think peer review has all that much to do with it. I can promise her that very similar problems were common in at least one field thirty-*five* years ago. In fact, it happens every time a major methods breakthrough occurs. It takes time before the methods are standardized and all the variables identified, much less worked out. The more powerful the new method, the longer it seems to take to work out the bugs. In the interval, one always finds articles that don’t mention the controlling effect of some factor that wasn’t appreciated at the time. The labs reporting those results may know about those issues by the time the results are actually published, but by then, everyone has moved on.

    A simple example: archeology papers from as late as the 1980s almost all have bad (sometimes very bad) 14C dates, since they used inappropriate materials (e.g., unfractionated bone), failed to control for radiocarbon plateaus, didn’t apply marine reservoir corrections, etc. Some dates from the 1950s and 1960s didn’t even use the correct half-life of 14C (see the sketch after this thread). The dates are frequently not repeatable. In fact, as often as not, they aren’t reported correctly (e.g., RCYBP and cal BP, both reported as “BP”). Should these all be retracted? That simply isn’t practical.

    This sort of thing is irritating and frustrating. It ought to be avoided where possible. But it isn’t new.

    1. You are right, of course. A lot of the whining is probably coming from anonymous graduate students who don’t have a clue about how science works.
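
A quick sketch of the half-life arithmetic mentioned in the comment above, with a hypothetical reported age of 10,000 years used purely for illustration: a radiocarbon age is proportional to the half-life assumed when computing it,

$$t = \frac{T_{1/2}}{\ln 2}\,\ln\frac{A_0}{A},$$

where $A_0/A$ is the ratio of initial to measured 14C activity. A date computed with the original Libby half-life (5,568 years) therefore rescales to the more accurate Cambridge value (5,730 years) as

$$t_{\mathrm{Cambridge}} = t_{\mathrm{Libby}} \times \frac{5730}{5568} \approx 1.029\,t_{\mathrm{Libby}},$$

so a reported age of 10,000 radiocarbon years becomes roughly 10,290 years, and that is before any of the calibration corrections the commenter mentions.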
