Retraction Watch

Tracking retractions as a window into the scientific process

Archive for the ‘studies about peer review’ Category

From annoying to bitter, here are the six types of peer reviewers

with 13 comments


After two decades of submitting papers to journals, and more than 10 years of serving on an editorial board or editing journals, geography researcher Kevin Ward knows a thing or two about peer review.

Recently, as the editor of Urban Geography, he received a particularly “grumpy” and “obnoxious” review in his inbox, which got him thinking. Although the review raised “professionally appropriate issues,” he says, it went well beyond the widely accepted norms of content and tone. Ward therefore decided to reflect on his two decades of experience and decipher the different types of reviewers and their characteristics.

In all, Ward — from the University of Manchester in the UK — says he’s encountered six types of referees.

Here’s the first, according to his recent editorial published in Urban Geography: 

Read the rest of this entry »

Written by Dalmeet Singh Chawla

July 25th, 2016 at 9:30 am

Do publishers add value? Maybe little, suggests preprint study of preprints

with 18 comments


Academic publishers argue they add value to manuscripts by coordinating the peer-review process and editing manuscripts — but a new preliminary study suggests otherwise.

The study — which has yet to be peer reviewed — found that papers published in traditional journals don’t change much from their preprint versions, suggesting publishers aren’t having as much influence as they claim. However, two experts who reviewed the paper for us said they have some doubts about the methods: the study uses “crude” metrics to compare preprints to final manuscripts, and some preprints are updated over time to incorporate changes from peer reviewers and the journal.

The paper, posted recently on ArXiv, compared the text of more than 12,000 preprints published on ArXiv from February 2015 with their corresponding papers published in journals after peer review.

The authors report in their paper, “Comparing published scientific journal articles to their pre-print versions:” Read the rest of this entry »

Written by Dalmeet Singh Chawla

June 24th, 2016 at 8:30 am

Do interventions to reduce misconduct actually work? Maybe not, says new report

with 14 comments


Can we teach good behavior in the lab? That’s the premise behind a number of interventions aimed at improving research integrity, which universities across the world, and even private companies, have invested in. Trouble is, a new review from the Cochrane Library finds little good evidence that these interventions work. We spoke with authors Elizabeth Wager (on the board of directors of our parent organization) and Ana Marusic, of the University of Split School of Medicine in Croatia.

Retraction Watch: Let’s start by talking about what you found – looking at 31 studies (including 15 randomized controlled trials) that included more than 9,500 participants, you saw there was some evidence that training in research integrity had some effects on participants’ attitudes, but “minimal (or short-lived) effects on their knowledge.” Can you talk more about that, including why the interventions had little impact on knowledge? Read the rest of this entry »

Written by Alison McCook

April 12th, 2016 at 2:00 pm

What if we tried to replicate papers before they’re published?

with 12 comments


We all know replicability is a problem – many papers across various fields consistently fail to replicate when put to the test. But instead of testing findings after they’ve gone through the rigorous and laborious process of publication, why not verify them beforehand, so that only replicable findings make their way into the literature? That is the principle behind a recent initiative called The Pipeline Project (covered in The Atlantic today), in which 25 labs checked 10 unpublished studies from the lab of one researcher in social psychology. We spoke with that researcher, Eric Uhlmann (also last author on the paper), and first author Martin Schweinsberg, both based at INSEAD.

Retraction Watch: What made you decide to embark upon this project? Read the rest of this entry »

Written by Alison McCook

March 31st, 2016 at 2:00 pm

“Evidence-based medicine has been hijacked:” A confession from John Ioannidis

with 22 comments


John Ioannidis is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” One of the most highly cited researchers in the world, Ioannidis, a professor at Stanford, has built a career in the field of meta-research. Earlier this month, he published a heartfelt and provocative essay in the Journal of Clinical Epidemiology titled “Evidence-Based Medicine Has Been Hijacked: A Report to David Sackett.” In it, he carries on a conversation begun in 2004 with Sackett, who died last May and was widely considered the father of evidence-based medicine. We asked Ioannidis to expand on his comments in the essay, including why he believes he is a “failure.”

Retraction Watch: You write that as evidence-based medicine “became more influential, it was also hijacked to serve agendas different from what it originally aimed for.” Can you elaborate? Read the rest of this entry »

Written by Ivan Oransky

March 16th, 2016 at 2:00 pm

Papers with simpler abstracts are cited more, study suggests

with 2 comments

Research papers containing abstracts that are shorter and consist of more commonly used words accumulate citations more successfully, according to a recent study published in the Journal of Informetrics.

After analyzing more than 200,000 academic papers published between 1999 and 2008, the authors found that longer abstracts were slightly less likely to be cited than those that were half as long. Keeping it simple also mattered — abstracts that were heavy on familiar words such as “higher,” “increased” and “time” earned slightly more citations than others. Even adding a five-letter word to an abstract reduced citation counts by 0.02%.

According to Mike Thelwall, an information scientist at the University of Wolverhampton, UK, who was not a co-author on the paper: Read the rest of this entry »

Written by Dalmeet Singh Chawla

March 10th, 2016 at 11:30 am

Researchers’ productivity hasn’t increased in a century, study suggests

with 18 comments

Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.

Despite concerns that “salami slicing” of research papers is on the rise, in line with the “publish or perish” culture of academic publishing, the study found that individual early career researchers’ productivity has not increased in the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged in Thomson Reuters Web of Science.

The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century:”

Read the rest of this entry »

Written by Dalmeet Singh Chawla

March 9th, 2016 at 2:00 pm

Fast-tracked PNAS papers are cited less often — but gap is shrinking

with one comment

An analysis of more than 50,000 papers submitted to Proceedings of the National Academy of Sciences (PNAS) shows that those published using its “contributed track” — in which academy members can fast-track their own papers by coordinating the peer-review process themselves — have been cited less often than regular submissions, but that gap is shrinking.

Although the overall average difference in citations between contributed and regular submissions was 9%, the yearly difference has declined from 13.6% in 2005 to 2.2% in 2014, according to the new study, posted before peer review on the preprint server bioRxiv by Phil Davis, an independent researcher and publishing consultant based in New York.

The contributed track is a long-standing editorial practice of PNAS, which has triggered concerns from some academics who say Read the rest of this entry »

Written by Dalmeet Singh Chawla

January 15th, 2016 at 12:45 pm

Why retraction shouldn’t always be the end of the story

with 2 comments

When researchers raised concerns about a 2009 Science paper regarding a new way to screen for enzymatic activity, the lead author’s institution launched an investigation. The paper was ultimately retracted in 2010, citing “errors and omissions.”

It would seem from this example that the publishing process worked, and science’s ability to self-correct cleaned up the record. Not so, say researchers Ferric Fang and Arturo Casadevall.

Fang, of the University of Washington, Seattle, and Casadevall, of Johns Hopkins — who have made names for themselves by studying retractions — note today in an article for Chemistry World that

Read the rest of this entry »

Do science findings feel more novel, robust? They are — at least, in language

with 5 comments


Do you think the write-up of scientific results has gotten more rosy over time? If so, you’re right — the use of positive language in science abstracts has increased by 880% since 1974, according to new findings reported in the British Medical Journal.

Researchers led by Christiaan H Vinkers at the University Medical Center Utrecht in The Netherlands found that, among PubMed abstracts: Read the rest of this entry »

Written by Alison McCook

December 15th, 2015 at 11:30 am