The week at Retraction Watch featured a look at where retractions for fake peer review come from, and an eyebrow-raising plan that has a journal charging would-be whistleblowers a fee. Here’s what was happening elsewhere:
- Many researchers use sharing fees to keep their data locked up. The latest from our co-founders in STAT.
- Machiavellianism is positively associated with self-reported research misbehavior, but narcissism and psychopathy are not, says a new study in PLOS ONE.
- A new survey finds “that scientists are often reticent or unable to take formal action against many behaviors they perceive as unethical and irresponsible. As a result, they resort to informal gossip to warn colleagues of transgressors.” (Social Problems, sub req’d)
- Amy Cuddy has responded to her co-author’s comments on their 2010 “power posing” study: “By today’s improved methodological standards, the studies in that paper — which was peer-reviewed — were ‘underpowered,’ meaning that they should have included more participants.” (New York Magazine) Her co-author, Dana Carney, has said she does not think the paper should be retracted.
- “Women don’t submit as often, but when they do they have a higher acceptance rate.” American Geophysical Union journals release one of the most detailed breakdowns of gender bias in peer review. (Alexandra Witze, Nature)
- Frontiers compares its rate of retraction to that of other publishers. (Gearoid Faolean, Frontiers blog)
- Why are papers rejected? Jack Grove talks to Hilary Hamnett for the answer. (Times Higher Education)
- OMICS, being sued by the U.S. government for deceptive trade practices, has been quietly buying journals in Canada, The Star and CTV report.
- “For the first few years, it will be free to publish in eLife.” No more: The journal will now charge $2,500. (press release) More from Declan Butler at Nature.
- Want to make sure scientific results can be replicated? Seek out training in study design and data analysis. (Monya Baker, Nature)
- Should we use double-blind peer review? Bob O’Hara takes a look at the evidence. (Methods in Ecology and Evolution blog)
- Ian Cree talks about the benefits of open peer review with Francesca Martin for BioMed Central’s podcast.
- Just how dishonest are we, and can peer review curb the impetus to lie in research? Elsevier’s Darren Sugrue wondered, so he talked to Dan Ariely and Yael Melamede, who made the film “(Dis)Honesty: The Truth About Lies.”
- “The problem is where he accuses me of having published statistically faulty research.” Susan Fiske doubles down on charges of “methodological terrorism.” (Rafi Letzter, Business Insider) Ideas for civil criticism: Uri Simonsohn responds to Fiske’s piece.
- Are the Nobel Prizes good for science? ask Arturo Casadevall and Ferric Fang. (FASEB Journal)
- Industrial and organizational psychology has a “lack of research integrity,” write Sheila List and Michael McDaniel. (Society for Industrial and Organizational Psychology)
- Economics is more controversial and medicine has more high-profile scandals, so why is so much of the replication debate focused on psychology? asks Andrew Gelman.
- “What’s clear is that we need more thoughtful reviews from qualified people.” So how do we motivate more researchers to review? (Shannon Palus, Slate)
- “There are forces swirling in the air that have the potential to undermine [peer review] as a cornerstone of the scientific process,” which is why we must stand up in favor of it, argues Emilie Marcus. (CrossTalk)
- “In our experience, once it is determined that there are valid concerns surrounding a paper, most authors are willing to take the responsible course of action.” The editors of Cell Metabolism respond to the common perception “that journals are reluctant to take action on reported concerns of data manipulation because months and sometimes years go by before anything happens.”
- Which reference software makes the fewest mistakes: EndNote, Mendeley, RefWorks, or Zotero? (The Journal of Academic Librarianship, sub req’d)
- “From where I sit, the peer review process is not unlike the families described by Leo Tolstoy in his masterpiece Anna Karenina: ‘All happy families are alike; each unhappy family is unhappy in its own way.’” Milka Kostic reflects on the relaunch of Cell Chemical Biology.
- Are unique author identifiers enough to prevent confusion in the scholarly literature? Look at the names of these two authors, who should know. (BMJ Innovations)
- A researcher accused of “duplicating his own work on 15 occasions, denied all charges and blamed the jealousy of his colleagues as the reason behind them.” (Munish Pandey, Mumbai Mirror)
- The new euphemism for plagiarism: “Unauthorized collaboration.” (Daily Pennsylvanian)
- Janssen failed to share data on problems with a blood testing device during and after a key clinical trial used by the FDA to approve a drug, Deborah Cohen of The BMJ reports.
- A petition started by Bernard Carroll, John Nardo, and John Noble “asks Congress to mandate coordination of the U.S. Food and Drug Administration (FDA) with the National Institutes of Health (NIH) to guarantee fidelity of clinical trials reporting and of scientific claims for medical products (drugs and medical devices).”
- Phil Davis visualizes citation cartels. (Scholarly Kitchen)
- “Biomedical experts plan to create a scoring system that will help researchers choose reliable antibodies for their experiments,” writes Monya Baker at Nature. “The only problems: figuring out how such a ranking would work — and getting manufacturers to adopt the standard.”
- Does productivity – in numbers of papers published – mean less quality? Anna Azvolinsky explores a recent paper in PLOS ONE that took a look. (The Scientist)
- Sitting around twiddling your thumbs because you haven’t been asked to review a paper recently? PeerJ has a solution for you.
- “[T]here may be serious work to do to get nursing’s (reporting) house in order,” finds an analysis of how often clinical trials published in journals were previously registered. (Journal of Advanced Nursing)
- “Research Integrity Practices in Science Europe Member Organisations:” A new report from Science Europe.
- How do researchers at the French National Research Center (CNRS) feel about open access? (Emerald Insight)
- What’s the relationship between institutional repositories, copyright, and open access? (Science & Technology Libraries, sub req’d)
- “Positive trials of tPA for ischemic stroke are cited approximately three times as often as neutral trials, and nearly 10 times as often as negative trials, indicating the presence of substantial citation bias,” according to a new analysis. (Trials)
- “Scholars and academics who came together for a workshop recently agreed that a social research ethics governing body needed to be established in Nepal.” (Amrita Limbu, The Kathmandu Post)
- “Scientific journal publishing is too complex to be measured by a single metric,” say editors of Mem. Inst. Oswaldo Cruz. It’s “time to review the role of the impact factor!”
- Elizabeth Moylan discusses “how publishers can recognize and reward the work that peer reviewers do.” (BioMed Central blog)
- How common is irreproducibility in hydrogen storage material research? (Energy & Environmental Science, sub req’d)
- Ben Goldacre talks to Australia’s ABC Radio about fighting bad science.
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
I’m surprised at the lack of discussion, in considering double-blind peer review, of the hazard of blocking reviewers from being able to detect unacknowledged conflicts of interest on the authors’ part. Maybe this is more of a concern in medicine than in other fields, but I really don’t fancy having reviewers blinded to, say, the fact that 3 of the authors sit on the board of the company which owns the device they’re testing….
A brief note to mention that our article on “Irreproducibility in hydrogen storage material research” published in Energy & Environmental Science will hopefully be made open access soon. In the meantime, if anyone is interested but doesn’t have access, please just email me for a pdf…
It is clearly topical to be concerned about pre-publication peer review and impact. Journal peer reviewers frequently provide a wonderful service, and the resultant publication is indeed often markedly improved by their efforts. They are also well known to have blocked excellent science, sometimes for trivial reasons or, even worse, out of personal pride and fraud. However, the perceived benefits of “peer review” are so high that even predatory journals feel the need to at least give a nod in its direction. To counteract this, some journals are moving towards more post-publication review – but surely subsequent citation (whether favourable or negative towards the publication) is itself post-publication peer review? In fact, those who bother to actually read the paper are surely its peers, rather than the likely quite small group of scientists vaguely associated with the field of the report. This is particularly evident in previous articles noting the difficulty of publishing cross-disciplinary work.
Impact factor manipulation (and, at a personal level, h-index stacking) only arises because we treat the IF as a measure of a journal’s prestige and, by implication, assume a quality halo sits on every paper published in it. This is patently nonsense, as evidenced by the many retractions and papers we like to snigger at. Equally, predatory journals may occasionally snare a good-quality piece of work, but the dust cloud surrounding these journals automatically settles on any papers in them.
Perhaps a journal that avoids the pejorative epithet “preprint” could resolve a lot of these issues. It would become the responsibility of the authors to seek pre-publication review and decide on the validity of publication, check the validity and accuracy of references, even decide on their preferred page layout and referencing style. If the work is well cited, clearly it was quality work. In many respects, it may also be up to the authors’ institutional authorities to influence whether the work is publishable or not. bioRxiv and arXiv are a good start towards this, as many papers submitted to them are never in fact published elsewhere – but why then muddle the message? A preprint, of course, serves a separate purpose.
With such a scheme, authors would need to actually read and consider the quality of any paper they cite, rather than assume that the mantle of authority implied by an IF is good enough.