Should it be a crime for editors to cite work in their own journal?
Last year, the Journal of Criminal Justice became the top-ranked journal in the field of criminology, but critics say that its meteoric rise is due in part to the editor’s penchant for self-citation.
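For context on why self-citation matters here: journal rankings of this kind typically rest on the impact factor, i.e., citations in a given year to items the journal published in the two preceding years, divided by the citable items from those years. A minimal sketch, with entirely made-up numbers, of how one might compare that figure with and without journal self-citations:

```python
# Journal impact factor for year Y: citations in Y to items the journal
# published in Y-1 and Y-2, divided by citable items from Y-1 and Y-2.
# All numbers below are made up purely for illustration.

citable_items = 150      # items published in the two prior years
total_citations = 900    # citations received in year Y to those items
self_citations = 300     # of those, citations coming from the journal itself

impact_factor = total_citations / citable_items
without_self = (total_citations - self_citations) / citable_items

print(f"Impact factor:            {impact_factor:.2f}")  # 6.00
print(f"Excluding self-citations: {without_self:.2f}")   # 4.00
```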
Today, Science published the first results from a massive reproducibility project in which more than 250 psychology researchers tried to replicate the results of 100 papers published in three psychology journals. Even though the replicating teams worked with the original authors and used the original materials, only 36% of the replications produced statistically significant results, and in more than 80% of cases the original study reported a stronger effect size than the replication did. To the authors, however, this is not a sign of failure; rather, it tells us that science is working as it should:
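Those headline figures are simple summary statistics over the 100 study pairs. A minimal sketch of the arithmetic, assuming a hypothetical table with one row per study; the column names and values are illustrative, not the project's actual data:

```python
import pandas as pd

# Hypothetical replication results, one row per original study.
# Column names and values are illustrative, not the project's real data.
df = pd.DataFrame({
    "replication_p":  [0.03, 0.40, 0.01, 0.22, 0.08],
    "original_es":    [0.50, 0.45, 0.60, 0.30, 0.55],  # original effect sizes
    "replication_es": [0.35, 0.10, 0.55, 0.05, 0.20],  # replication effect sizes
})

# Share of replications reaching conventional significance (p < .05).
significant = (df["replication_p"] < 0.05).mean()

# Share of pairs where the original effect was stronger than the replication's.
original_stronger = (df["original_es"] > df["replication_es"]).mean()

print(f"Significant replications:  {significant:.0%}")
print(f"Original effect stronger:  {original_stronger:.0%}")
```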
Last month, the community was shaken when a major study on gay marriage in Science was retracted following questions about its funding, data, and methodology. The senior author, Donald Green, made it clear he was not privy to many details of the paper, which raised some questions for C. K. Gunsalus, director of the National Center for Professional and Research Ethics, and Drummond Rennie, a former deputy editor at JAMA. We are pleased to present their guest post about how co-authors can carry out their responsibilities to each other and to the community.
Just about everyone understands that even careful and meticulous people can be taken in by a smart, committed liar. What’s harder to understand is when a professional is fooled by lies that would have been prevented or caught by adhering to community norms and honoring one’s role and responsibilities in the scientific ecosystem.
Take the recent, sad controversy surrounding the now-retracted gay marriage study. We were struck by comments in the press by the co-author, Donald P. Green, on why he had not seen the primary data in his collaboration with first author Michael LaCour, nor known anything substantive about its funding. Green is the more senior scholar of the pair, the one with the established name whose participation helped provide credibility to the endeavor.
The New York Times quoted Green on May 25 as saying: “It’s a very delicate situation when a senior scientist makes a move to look at a junior scientist’s data set.”
Grant reviewers at the U.S. National Institutes of Health are doing a pretty good job of spotting the best proposals and ranking them appropriately, according to a new study in Science out today.
Danielle Li at Harvard and Leila Agha at Boston University found that grant proposals that earn good scores lead to research that is cited more often, produces more papers, and appears in high-impact journals. These findings held up even when the authors controlled for notorious confounders, such as the applicant’s institutional quality, gender, funding history, experience, and field.
After taking all of those factors into account, the authors found that grant scores one standard deviation lower (10.17 points in their analysis) corresponded to research that earned 15% fewer citations and produced 7% fewer papers, along with 19% fewer papers in top journals.
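To make the "controlled for confounders" step concrete, here is a minimal sketch using ordinary least squares on simulated data: regress log citations on the review score with applicant characteristics as covariates, then read the score coefficient as an approximate percent change per point. Every variable name and number is invented for illustration and does not reproduce Li and Agha's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_000

# Simulated grant-level data; all names and effect sizes here are made up.
score = rng.normal(0, 10.17, n)       # centered review score (1 SD = 10.17)
experience = rng.integers(0, 30, n)   # applicant's years of experience
inst_quality = rng.normal(0, 1, n)    # institutional-quality index
log_citations = (3.0
                 + 0.015 * score      # ~1.5% per point, so ~15% per SD
                 + 0.02 * experience
                 + 0.30 * inst_quality
                 + rng.normal(0, 0.5, n))

df = pd.DataFrame({
    "log_citations": log_citations,
    "score": score,
    "experience": experience,
    "inst_quality": inst_quality,
})

# OLS of log citations on score, holding the confounders fixed.
fit = smf.ols("log_citations ~ score + experience + inst_quality", data=df).fit()

# On a log outcome, a coefficient of ~0.015 per point implies that a
# 10.17-point (one SD) difference in score goes with roughly 15% of citations.
print(fit.params["score"])
```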
Li tells Retraction Watch that, while some scientists may not be surprised by these findings, previous research has suggested there isn’t much of a correlation between grant scores and outcomes:
One of the complaints about peer review — a widely used but poorly studied process — is that it tends to reward papers that push science forward incrementally, but isn’t very good at identifying paradigm-shifting work. Put another way, peer review rewards mediocrity at the expense of breakthroughs.
The authors propose that the process of manuscript reviewing be evaluated and improved by the scientific publishing community.
We certainly agree in principle, and have suggested a Transparency Index that has some of the same goals. We asked Adam Etkin, who founded Peer Review Evaluation (PRE) “to assist members of the scholarly publishing community who are committed to preserving an ethical, rigorous peer review process,” what he thought. PRE has created PRE-val and PRE-score to “validate what level of peer review was conducted prior to the publication of scholarly works.” Etkin told us:
A new paper in Intelligence is offering some, well, intel on the peer review process at one prestigious neuroscience journal.
The new paper is about another paper, “Fractionating Human Intelligence,” published in Neuron by Adam Hampshire and colleagues in December 2012. The Neuron study has been cited 16 times, according to Thomson Scientific’s Web of Knowledge.