Before we present this week’s Weekend Reads, a question: Do you enjoy our weekly roundup? If so, would you consider a tax-deductible donation of $25, or a recurring donation of an amount of your choosing, to support it?
The week at Retraction Watch featured the retraction of a paper on a “gut makeover,” a retraction following a mass resignation from an editorial board, and the resignation of a management researcher who admitted to misconduct. Here’s what was happening elsewhere:
- The NIH imposed unusual new requirements on Duke researchers following concerns over how the institution handled recent cases of misconduct. Our latest in Science.
- “In the end, science has to stand the test of time, not the test of what journal you happened to get it into during your lifetime as a scientist.” (Sylvia McLain; Girl, Interrupting)
- “Fake Peer Review: What We’ve Learned at Retraction Watch.” A presentation by our Ivan Oransky. And a video in which he talks about preliminary findings from our new retraction database.
- A psychologist has agreed to pay back more than $130,000 to resolve allegations that he submitted false claims to earn grants. (Jonathan Silver, Pittsburgh Post-Gazette)
- Did studies of antipsychotics demonstrate a “flagrant conflict of interest”? Marie-Claude Malboeuf reports. (La Presse, in French)
- Should science lower the accepted p value threshold to .005? John Ioannidis considers the question in JAMA.
- Two plastic surgeons in South Korea are fighting over claims that a textbook was plagiarized. (Song Soo-Youn, Korea Biomedical Review)
- A story about a new lab approach that appeared in the Case Western student newspaper plagiarized a university news release and has been retracted. (The Observer)
- The Grumpy Geophysicist is quite grumpy about preprints in earth science. (Craig Jones)
- A new preprint suggests “that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.” (Open Science Framework)
- A systematic study of India’s University Grants Commission’s (UGC) approved list of journals found “a huge number of dubious or predatory journals which publish substandard papers for a small fee with very little peer-reviewing, if at all.” (R. Prasad, The Hindu)
- “Our reliance on journal articles needs a redefinition, if not a shift,” say Tom Jefferson and Lars Jorgensen. (BMJ Evidence-Based Medicine)
- “If authors giving peer reviewers grief is a thing that happens to plenty of people, should we discuss if contact at all is appropriate?” Hilda Bastian looks at signing peer reviews. (Absolutely Maybe)
- “Last Fall This Scholar Defended Colonialism,” writes Vimal Patel. “Now He’s Defending Himself.” (The Chronicle of Higher Education) Background on this case from our archives.
- Want to know if that aquaculture journal is legit or predatory? A group of researchers has a new rubric. (Frontiers in Marine Science)
- A professor admitted Monday that she “had criminal sexual contact with a disabled man who was unable to speak” — and who had allegedly penned a now-retracted paper. (Thomas Moriarty, NJ.com)
- “As a major clinical trial in cardiology nears completion it has provoked a storm of criticism and controversy.” (Cardiobrief)
- “The National Institutes of Health will examine whether health officials violated federal policy against soliciting donations when they met with alcohol companies to discuss funding a study of the benefits of moderate drinking,” reports Roni Caryn Rabin, who reported earlier this week that some were asking questions about the study. (New York Times)
- “President Donald Trump’s likely pick to lead the Centers for Disease Control and Prevention is facing significant criticism because of a 20-year-old controversy over shoddy HIV research,” reports Marisa Taylor. (Kaiser Health News)
- A retraction earns a correction. (Scientific Reports)
- Scholarship is being damaged all over the world, write Mary Jane Curry and Theresa Lillis, because English is the lingua franca of journals. (Inside Higher Ed)
Like Retraction Watch? You can make a tax-deductible contribution to support our growth, follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up for an email every time there’s a new post (look for the “follow” button at the lower right part of your screen), or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].
“In the end, science has to stand the test of time, not the test of what journal you happened to get it into during your lifetime as a scientist.” This is an extremely well-articulated piece, and I would encourage everyone to read it. I wish a few high-profile labs with nothing to lose decided to publish their findings in low-IF society-type journals for a while. Others would likely follow suit, ultimately forcing everyone to evaluate published findings on their own merits.
High-profile labs have everything to lose: all that grant funding if they don’t make splashy findings. Nobody wants to be the Ivy League professor who can’t get published in a big-name journal and gets pushed out. Even if all the big labs coordinated and agreed to publish only in the Obscure Journal of Biology (OJB), it wouldn’t mean that papers got evaluated on their own merits. OJB would become the new Cell and be used as a proxy for quality.
Having been on study sections, my sense is that continued grant support depends on productivity, measured by the number, quality, and impact of publications, and much less on where the science was published. So a few publications in OJB would be unlikely to make a dent in funding.
I don’t think OJB would turn into the new Cell. OJB does not employ editorial rejections (weeding out smaller labs in favor of big names, independent of the quality of the science), has no space constraints, and does not ask for an endless number of additional experiments. Anecdotally, I know quite a few well-known scientists who have made the conscious decision to publish only in OJB, and their papers get cited just as often as if they had been published in Cell.
Ultimately I don’t have all the answers, but the current system is not sustainable. It puts a small number of editors in charge of making critical decisions about what type of science is worthy of attention and what is “more suitable for a specialized journal”. This has created an absurd arms race, with exploding man-hours and costs per publication and an increased incentive for misconduct.
I would like to point out one more difference between Cell and OJB: archives.
I wanted to find a paper from 1979, published in the Journal of Cyclic Nucleotide Research. After much head-scratching and web-digging I managed to establish that a successor of this journal was merged with another one, and that the merged journal is online (even the issues from before the merger), but the paper I want apparently is not.
A paper from Cell from 1979, by contrast, is just a few clicks away.
As much as I agree with the need to end this arms race and the pursuit of prestige for its own sake, I do see some advantages to famous journals that stay around for a long time.
In my opinion, the strongest pressure to publish in the glossies comes from search committees and fellowship award committees.
So while the PI of a high-profile lab may have nothing to lose, the trainees have a lot to lose if their papers are published in low IF journals.
As a basic scientist I would be happy to lower the significance threshold to p < 0.005.
I would, however, need a corresponding increase in my funding, lab space, animal ethics approvals, and workforce. I would also need some relief from the pressure to publish positive results, and a reduction in the frequency of threatened unemployment from once every 2-4 years to something like once every 10-15 years.
As a basic scientist my take on this is that we should move away from p values and instead report effect sizes and confidence intervals, particularly for discovery-based research.
I agree with you – effect sizes and confidence intervals are more useful than a p value alone. I try to include these types of measures and provide my raw data so others can analyze it however they wish.
But resources are still an issue. My power analysis tells me that I need more data points, and my confidence intervals are always wider than I would like. If we were to change the threshold to p < 0.005 (or a 99.5% CI), I would still need a corresponding increase in statistical power, and that won’t come cheaply.
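To put rough numbers on the resource question, here is a minimal sketch in Python using statsmodels of how the required sample size per group grows when the threshold drops from 0.05 to 0.005. The scenario (a two-sided two-sample t-test at 80% power with a medium effect size, Cohen’s d = 0.5) is an illustrative assumption, not drawn from any study discussed above:

```python
# Illustrative only: how much more data does p < 0.005 demand?
# Assumed scenario (not from the discussion above): two-sided
# two-sample t-test, 80% power, medium effect size (Cohen's d = 0.5).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.005):
    # With the sample size left unspecified, solve_power
    # returns the required number of subjects per group.
    n = analysis.solve_power(effect_size=0.5, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: about {n:.0f} subjects per group")
```

Under these assumptions the answer comes out to roughly 64 subjects per group at alpha = 0.05 versus roughly 107 at alpha = 0.005, i.e. about two-thirds more data just to maintain the same power, which is exactly the cost the comment above is pointing to.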