Neuroscience has retracted a 2009 paper by a team of Korean sports researchers for what appear to be figure irregularities. But the journal’s handling of the case is puzzling and unhelpful.
The article, “Treadmill exercise improves cognitive function and facilitates nerve growth factor signaling by activating mitogen-activated protein kinase/extracellular signal-regulated kinase1/2 in the streptozotocin-induced diabetic rat hippocampus,” came out of Korea National Sport University, among others. It seemed to suggest that exercise could make diabetic rats smarter.
According to the retraction notice:
This article has been retracted at the request of the Editors.
The authors acknowledge there are serious errors in the figures. Apologies are offered to the readers of the journal that this was not brought to the Editors’ attention during the peer review process.
But this is insufficient information masquerading as an explanation. Were the errors manipulation? What does it mean that the problems were not “brought to the Editors’ attention during the review process”? — that the authors knew about the bad figures but declined to say anything?
We emailed Neuroscience’s editor, Stephen Lisberger, of Duke and the Howard Hughes Medical Institute, who punted:
I do not know whether the errors were manipulations or honest mistakes. Sorry.
So, the authors gave the journal no information about the nature of the problems — problems, if the notice is to be believed on its face, that evidently prompted the editors, not the authors, to request the retraction? That prompted the Rumsfeldian reply:
I have told you what I know (and don’t know).
Let’s set aside that these responses stretch credulity. After all, Lisberger and his editorial colleagues must have received some notification about the problematic figures, either from the authors themselves or from a concerned reader(s). They really could not, or should not, have retracted the paper without a clear understanding of the nature of the errors. What they ask us to accept, by implication, is that the editors allowed the authors to submit a meaningless retraction notice — or wrote one themselves — without bothering to conduct even a basic inquiry.
Neither alternative should inspire much confidence among the readership of Neuroscience. The study has been cited 12 times, according to Thomson Scientific’s Web of Knowledge.
Here’s the last retraction we covered in the journal, which also left a lot of unanswered questions.
Or perhaps someone informed the authors that (s)he had noticed the figures were manipulated and threatened to tell the editors, and the authors then requested retraction to avoid that happening?
This is a case for AMW, but it does look like the upper panels of Figure 3 B and C are the same.
A lot of the control panels look the same as well, but this may be a feature, not a bug.
Not to be a broken record, but again I find myself wondering what every one of the peer reviewers has to say about this failure! I know peer review is all volunteer, but if they don’t promise to be careful, who are we kidding?
(Sounds snarky but it’s an honest question.)
I can only speak for myself, but as a reviewer I start with the position that the authors have properly described what they did and properly showed what the results are. I do not check whether the figures have been manipulated, I check whether they show what the authors tell me they show. Science, in my view, is based on trust.
Actually, I look for signs of fraud, but it is impossible to know for sure if it is fraud or sloppiness. For example, at times the figure data points do not match up with the tables or text…
Also remember that in some cases it is easier to spot figure manipulation than in others. For example, if the authors present an HPLC calibration curve (which hardly ever happens anymore anyway), I have no way of telling whether they made up any points. If they tell me they used 20 mg/ml of compound A, how can I check this? But if they show me an SDS-PAGE in which bands are duplicated, I may be lucky enough to notice.
I review at least a dozen papers a year. In the last couple of years, I have reported more than half a dozen papers for image fraud, duplicate publication, or plagiarism. You don’t see those.
Oh, wait, in actual fact you often do, because chances are they are merely recycled to a different journal after ‘cleaning’.
The Editors normally send a “naughty, naughty” letter to the senior author. And that’s the end of it. Trust me, it is extremely frustrating to spend hours of my time reviewing a manuscript and detecting image manipulation – and you have to be damn sure it is manipulated, or it’s MY reputation out the window – only to see the manuscript in print a few months later in a different journal with only the manipulated image deleted.
Don’t throw this stuff back at the reviewers. The blame lies squarely – and solely – with the authors, who, by and large, get away with it in the current situation.
There are good authors and bad authors, good reviewers and bad reviewers, good editors and bad editors; and that will never change. But what can be changed are the systems, to provide more oversight and backups and make them more robust. All journals should check the figures in accepted papers before they are published, to make sure the reviewers caught the obvious things. The (anonymized) reviewer comments should be published on the web. Editors and reviewers should not be told the authors’ names or affiliations. Editors should read the papers and the reviewers’ comments. Journals should ask that unannotated scans of the whole blots/gels/images be submitted with the manuscript. When a paper is retracted, journals should look back at what went wrong, and they should explain the reasons for retraction in the notice.
Looking at the article, I am astonished it was published. I have no idea how it slipped past multiple authors (unless they were either blind or complicit), peer review, and the journal editors:
1- Figure 3: Panels B and C are identical.
2- The GAPDH blots in Figures 3, 4, 6 and 7 are the same.
3- Figure 5 is suspicious. The ERK-1,2 bands are very different from the P-ERK1,2 bands. They came from different blots, invalidating the result.
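Exact-duplicate panels like these can in principle be flagged automatically. Here is a minimal, hypothetical sketch (the panel labels and pixel data are invented for illustration, not taken from the paper) that hashes raw grayscale pixel values to catch byte-identical reuse; real screening tools also handle rotation, rescaling, and contrast changes, which this does not:

```python
# Hypothetical sketch: flag figure panels with byte-identical pixel data,
# as in the "Panels B and C are identical" observation above.
import hashlib

def panel_fingerprint(pixels):
    """Hash a panel's raw grayscale pixel values (list of rows of ints 0-255)."""
    flat = bytes(v for row in pixels for v in row)
    return hashlib.sha256(flat).hexdigest()

def find_duplicate_panels(panels):
    """Return pairs of panel labels whose pixel data are identical."""
    seen = {}
    duplicates = []
    for label, pixels in panels.items():
        fp = panel_fingerprint(fp_pixels := pixels)
        if fp in seen:
            duplicates.append((seen[fp], label))
        else:
            seen[fp] = label
    return duplicates

# Toy data: "Fig3B" and "Fig3C" share identical pixels; "Fig4A" differs.
panels = {
    "Fig3B": [[10, 20], [30, 40]],
    "Fig3C": [[10, 20], [30, 40]],
    "Fig4A": [[10, 21], [30, 40]],
}
print(find_duplicate_panels(panels))  # [('Fig3B', 'Fig3C')]
```

Hashing only catches perfect copies, but that is exactly the class of error a journal's pre-publication figure check could screen for at essentially no cost.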
Here is a link to the PDF file
Fantastic! The GAPDH is even reused in a so-far-unretracted Neurochemistry International paper from 2009 (Neurochem Int. 2009 Sep;55(4):208-13. PMID: 19524110). There is another retraction by the same authors, on swimming exercise, in Neuroscience. Interestingly, a paper on which the first author of the retracted Neuroscience paper is not listed also has substantial reuse of an identical GAPDH blot (Neurosci Res. 2011 Feb;69(2):161-73. PMID: 20969897). Admittedly the same conditions, but way too many blots…
Yes, the paper pointed out by Junk Science (Neurosci Res. 2011 Feb;69(2):161-73. PMID: 20969897) is remarkable for its reuse of the same four-lane GAPDH control. In Fig. 3A it even serves as a control for a three-lane blot.