Crime journal’s meteoric rise due to questionable self-citation: analysis

Should it be a crime for editors to cite work in their own journal?

Last year, the Journal of Criminal Justice became the top-ranked journal in the field of criminology, but critics say that its meteoric rise is due in part to the editor’s penchant for self-citation.

As Thomas Baker of the University of Central Florida writes in the September/October issue of The Criminologist, a newsletter of the American Society of Criminology:

Yes, many psychology findings may be “too good to be true” – now what?

Today, Science published the first results from a massive reproducibility project, in which more than 250 psychology researchers tried to replicate the results of 100 papers published in three psychology journals. Despite working with the original authors and using original materials, only 36% of the studies produced statistically significant results, and more than 80% of the studies reported a stronger effect size in the original study than in the replication. To the authors, however, this is not a sign of failure – rather, it tells us that science is working as it should:


Weekend reads: How to publish in Nature; social media circumvents peer review; impatience leads to fakery

The week at Retraction Watch featured a look at why a fraudster’s papers continued to earn citations after he went to prison, and criticism of Science by hundreds of researchers. Here’s what was happening elsewhere:

“If you think it’s rude to ask to look at your co-authors’ data, you’re not doing science”: Guest post

Last month, the community was shaken when a major study on gay marriage in Science was retracted following questions about its funding, data, and methodology. The senior author, Donald Green, made it clear he was not privy to many details of the paper — which raised some questions for C. K. Gunsalus, director of the National Center for Professional and Research Ethics, and Drummond Rennie, a former deputy editor at JAMA. We are pleased to present their guest post about how co-authors can carry out their responsibilities to each other and the community.

C. K. Gunsalus

Just about everyone understands that even careful and meticulous people can be taken in by a smart, committed liar. What’s harder to understand is when a professional is fooled by lies that would have been prevented or caught by adhering to community norms and honoring one’s role and responsibilities in the scientific ecosystem.

Take the recent, sad controversy surrounding the now-retracted gay marriage study. We were struck by comments in the press by the co-author, Donald P. Green, on why he had not seen the primary data in his collaboration with first author Michael LaCour, nor known anything substantive about its funding. Green is the more senior scholar of the pair, the one with the established name whose participation helped provide credibility to the endeavor.

The New York Times quoted Green on May 25 as saying: “It’s a very delicate situation when a senior scientist makes a move to look at a junior scientist’s data set.”

Really?


Does peer review ferret out the best science? New study tries to answer

Grant reviewers at the U.S. National Institutes of Health are doing a pretty good job of spotting the best proposals and ranking them appropriately, according to a new study in Science out today.

Danielle Li at Harvard and Leila Agha at Boston University found that grant proposals that earn good scores lead to research that is cited more often, produces more papers, and appears in high-impact journals. These findings held even when the authors controlled for potentially confounding factors, such as the applicant’s institutional quality, gender, history of funding and experience, and field.

Taking all those factors into account, grant scores that were one standard deviation worse (10.17 points, in the analysis) were associated with research that earned 15% fewer citations, produced 7% fewer papers, and yielded 19% fewer papers in top journals.

Li tells Retraction Watch that, while some scientists may not be surprised by these findings, previous research has suggested there isn’t much of a correlation between grant scores and outcomes:


Peer review isn’t good at “dealing with exceptional or unconventional submissions,” says study

One of the complaints about peer review — a widely used but poorly studied process — is that it tends to reward papers that push science forward incrementally, but isn’t very good at identifying paradigm-shifting work. Put another way, peer review rewards mediocrity at the expense of breakthroughs.

A new paper in the Proceedings of the National Academy of Sciences (PNAS) by Kyle Siler, Kirby Lee, and Lisa Bero provides some support for that idea.

Here’s the abstract:

Is it time for a journal Review Quality Index?

It’s time to review the reviews.

That’s the central message of a new paper in Trends in Ecology & Evolution, “Errors in science: The role of reviewers,” by Tamás Székely, Oliver Krüger, and E. Tobias Krause. The authors

propose that the process of manuscript reviewing needs to be evaluated and improved by the scientific publishing community.

We certainly agree in principle, and have suggested a Transparency Index that has some of the same goals. We asked Adam Etkin, who founded Peer Review Evaluation (PRE) “to assist members of the scholarly publishing community who are committed to preserving an ethical, rigorous peer review process,” what he thought. PRE has created PRE-val and PRE-score to “validate what level of peer review was conducted prior to the publication of scholarly works.” Etkin told us:

Weekend reads: Self-plagiarism and moral panic; sexism in science; peer review under scrutiny

Another busy week at Retraction Watch, which kicked off with our announcement that we’re hiring a paid intern. Here’s what was happening elsewhere around the web:

Anonymous blog comment suggests lack of confidentiality in peer review — and plays role in a new paper

A new paper in Intelligence is offering some, well, intel into the peer review process at one prestigious neuroscience journal.

The new paper is about another paper: “Fractionating Human Intelligence,” published in Neuron in December 2012 by Adam Hampshire and colleagues. The Neuron study has been cited 16 times, according to Thomson Scientific’s Web of Knowledge.

Richard Haier and colleagues write in Intelligence: