Last October, Anica Klockars, a neuroscience researcher at Uppsala University in Sweden, and a colleague published a controversial comment in the journal Small GTPases, a Landes Bioscience title.
The title of the letter was meant to provoke: “Scientific yellow journalism.”
As the authors wrote:
Today, more than ever before, the importance of conducting responsible research is vital. New mass media technologies, allowing for the rapid distribution of news, enable researchers across the world to publicize their latest discoveries to a vast audience. The problem arises when inconclusive research is disseminated, with results that are exaggerated, misinterpreted or even fabricated. We, as scientists, have a responsibility to be brutally critical toward our own research, as well as that of our colleagues. Unfortunately, due to the system of publishing fast, often and in high-impact factor journals, scientists are under greater pressure to produce quantity, at the expense of research quality.
This problem of exaggerating results is especially evident in the field of environmental toxicology, where reports about chemicals, often incorporated in plastics used for food packaging, beauty products, children’s toys and baby products are broadcast on a daily basis to an audience that is unfamiliar with the actual studies behind these reports and the “traditions in toxicological research” of overdosing animals to extreme levels in order to obtain an effect. We certainly cannot claim that chemicals are not dangerous—many of them are—but scaring the public with continuous press releases based on dubious results is not only irresponsible but, similar to the boy who cried wolf, it can only serve to obstruct the entire field when the public grows weary of the never ending alarms, later rescinded because more responsible research is finally performed.
Therefore, it is critical that responsible research is performed, studies are thoroughly executed using various model systems—with a critical approach and doses that are more representative of environmental exposures—and we are sure of our results before going public.
We, and we’re guessing most of our readers, would agree with this assessment — especially the sentence: “Unfortunately, due to the system of publishing fast, often and in high-impact factor journals, scientists are under greater pressure to produce quantity, at the expense of research quality.”
But, as Klockars found out, introspection was not all that she stimulated. A supervisor (whom she asked us not to identify) objected to the tone of the piece, and insisted that Klockars retract the comment.
The following article from Small GTPases, “Scientific Yellow Journalism” by Anica Klockars and Michael J. Williams, published online on 20 September 2012 (doi: 10.4161/sgtp.22289; http://www.landesbioscience.com/journals/smallgtpases/article/22289/) by Landes Bioscience and subsequently published in print in Small GTPases 2012 3(4):201 has been retracted by agreement between the authors and the journal’s Editor in Chief …
We felt that was fairly ironic, as irony goes. And Klockars agreed:
I think science definitely suffers a lot from the way it is funded. If you can only get grants by publishing as fast as possible, you will most likely not stop and think about what would be great studies – you will just publish something and probably even “adjust” your data in order to publish fast. This is the whole reason why I wrote the comment. I read too many really bad papers, with results that really don’t say anything, and worst of all: newspaper articles are often based on these inconclusive papers, exaggerating the findings in ridiculous ways and spreading alarming news that hasn’t even really been proven scientifically. So I got the opportunity to raise my concern in an editor’s corner.
I’m speechless. Wow.
“A supervisor (whom she asked us not to identify) objected to the tone of the piece, and insisted that Klockars retract the comment” – this is a joke, right? Can the supervisor walk on water? Or maybe he thinks he can? This would explain a lot.
The most ridiculous retraction I’ve ever seen. What is the next step?
She could hardly have formulated the situation more diplomatically. It is a sign of the high work ethic in science that, irrespective of the pressures of the project-science system, most scientists produce good work.
The supervisor should be outed. There is nothing in this piece that warrants a retraction. The supervisor must have bullied and threatened Anica for her to agree to a retraction that has no basis or foundation in fact. It is despicable.
So Anica is a PhD student; this is the sort of attitude we should be encouraging in PhD students, not quashing.
This is the real world. I think that what has happened is wrong. I do not condone what has happened by saying that this is the real world. Change may be possible.
Ah. Hmmm, yes. OK, I’ll have to actually sit down and read this one; this is what I did for my PhD work (well, I did more photophysics than drug-binding studies, but either way, a heck of a lot of difference IR).
Part of what I am trying to do by commenting on public forums like this one is show some people in my, ah, “science club” that it’s OK to bring peer review into the open, and that post-publication review is more important than pre-publication (right now it’s almost a barrier to communication rather than a simple quality-control check before you tell the news to your esteemed audience). I’ve been talking with some physicists about this; us younger folk are starting to think it’s a good idea to publish most of the stuff, since we don’t really need to worry about page limits anymore. Then, have the more established/experienced Editors sort through it, pick out what they like, and put it into monthly surveys of the sub-field. That works better for physics, of course, since the sub-fields are pretty narrow: there’s less to read, and if someone gets the math wrong, someone else publishes a scathing editorial. I’d still want an Editor to give my work a good read-through before presenting my papers to the relevant audience; I make a lot of typos since I tend to type too fast, and my spelling is not the best in the world.
The ACS has definitely gotten a bit on the crowded side these days. BTW, the professor who helped me … think about the language I should use for my PLoS and a grant I put together based on it was Prof. Spiro at UW Chemistry in Seattle; he’s in the Acknowledgments. I had him and a few other USA professors read it to make sure I wasn’t over-selling anything. (I needed an established USA mentor/consultant in my field, protein spectroscopy, to vouch for me a little with USA/UK-run journals.)
Allison, I’m not really convinced about the value of “post-publication peer review” of the sort that one sees on blogs and journal messageboards. Much of it seems like grandstanding and opinionating to me. Of course traditional “post publication peer review” that assesses the value of a paper by the impact it has on further research is straightforward and pretty much one of the mainstays of the progression of science.
In my opinion, by far the most important element of peer review is pre-submission “self-peer-review”: hard thinking about one’s research and its interpretations, discussions with colleagues, presentation of work at group and departmental seminars and conferences, and so on. The negative aspect of “just publish everything and let post-publication peer review sort it out” is that it seems to let the researcher off the difficult task of working hard to produce something truly worthwhile, with a series of experiments addressing a meaningful problem set within a proper context. The vast number of low-quality journals (and even PLoS One, though some good papers are published there) allows the possibility for researchers to spend entire careers churning out “stuff” that doesn’t really add up to very much.
Incidentally, I don’t think it’s the job of an editor to sort out one’s typos/spelling mistakes. That’s part of pre-submission peer review, surely!
@ Chris
Just got … busy. (Stanford may be, might be, interested in me.)
Completely agree with “self review”. This is how I was taught at SBU: double-check with yourself 1st and then a few colleagues, aka my Thesis Committee, 2 or 3 independent profs. (My Chair was Prof. London, who is an Editor at the ACS publication Biochemistry. Whoo boy, could he ask some persnickety questions.) PS: my mom just found a few typos in my PLoS, sigh. This is why we NEED Editors/office folk to double-check us!
Will be back with this blog shortly; must discuss a few things with a few folks. Will try to keep reading however.
In the words of Calvin and Hobbes, “Further updates as events warrant.”
@fernando pessoa One of my college buddies once said about this particular prof that he’s the “master of the back-handed compliment”.
When I was leaving the Oral Exam for his metabolic biochemistry course, he complimented me on “not crying”. All the examinees prior to me were also female.
And I thought he was “being funny” with some of the, ah, more abrasive comments during the exam, heh! But I have an overly robust sense of self-worth and do need to be taken down a peg on occasion. This is what Editors are *for*!
Anyone else wondering why a commentary attacking environmental toxicology was published in a journal called “Small GTPases”?? I wonder if the reviewers of this journal are the appropriate ones to assess statements that seem to condemn an entire field of science.
I’m no proponent of high-dose testing extrapolated to guesstimate the effects of low-dose human exposures. However, I think at least the paragraphs cited above use a broad brushstroke to characterize an entire field of study in an inappropriate way. Further, I wonder what expertise either of the co-authors holds in environmental toxicology. It’s easy to dismiss an entire scientific field from the outside (for example, “wow, neuroscientists have totally over-estimated the value of the brain!”), but in my experience, doing this just makes you look silly.
Adam Marcus wrote:
‘We, and we’re guessing most of our readers, would agree with this assessment — especially the sentence: “Unfortunately, due to the system of publishing fast, often and in high-impact factor journals, scientists are under greater pressure to produce quantity, at the expense of research quality.”’
Actually I do not agree, at least not completely. I am constantly called upon to assess the work of my colleagues. I review NIH grants 3 times a year as a standing member of a CSR study section, and I also review grants annually for three different foundations. I also review the CVs of faculty applicants to several different departments at my institution, and am called upon regularly to write assessments of colleagues for their promotions. In this capacity I assess each individual’s published work, and with grants the preliminary data, in order to make an informed recommendation. I am not oblivious to the total number of papers published, but I pay close attention to the quality of the work, whether it has a meaningful impact on the field, whether it has been corroborated, and (particularly with respect to promotions) whether the work stands the test of time. I hardly think I am alone in this approach.
Most of us (I assume) can discern quality work that leads to meaningful advances in our own fields. Thus, while the concern Adam raises is valid, it is also well known. I am not trying to minimize the problem of crappy science, but it should be kept in perspective. To a significant extent, crap science is background noise that we learn to filter out.