Archive for the ‘studies about peer review’ Category
As if peer reviewers weren’t overburdened enough, imagine if journals also asked them to independently replicate the experiments they were reviewing. True, replication is a big problem — and always has been. At the SpotOn conference in London in November 2016, historian Noah Moxham of the University of St Andrews in Scotland mentioned that, in the past, some peer reviewers did replicate experiments. We asked him to expand on the phenomenon here.
Retraction Watch: During what periods in history did peer reviewers repeat experiments? And how common was the practice?
When a former Stanford psychology researcher lost her fifth paper last year due to unreliable results, one researcher took particular notice: Martha Alibali at the University of Wisconsin-Madison. Why? She had reviewed the 2006 paper, and took to social media to express her dismay at what the retraction meant for the time and effort she had put into the review. We spoke with Alibali further about her reactions to the news.
Retraction Watch: You reviewed the paper more than 10 years ago. Can you recall what you thought about it? In retrospect, were there any red flags or doubts you had about the findings that you wish you’d caught?
The scientific literature has seen its share of child prodigies – such as a nine-year-old who published a study in JAMA, and a group of eight-year-olds who reported on bumblebees in Biology Letters. But Alexandre Martin of the University of Kentucky sought to help his seven-year-old son get published in a non-traditional way – by submitting his school report to a journal on Jeffrey Beall’s predatory list, the (now-defunct) International Journal of Comprehensive Research in Biological Sciences. They recount the story in a recent paper in Learned Publishing, giving young Martin his first taste of academic publishing, and helping his father expose its flaws.
Retraction Watch: As part of your experiment, you reformatted a booklet written by your seven-year-old about bats. In an excerpt in your paper, one line says “Bats are really cool animals!” The entire paper was only 153 words, according to Times Higher Education. Did you think the paper would be accepted by the journal?
Although previous research has suggested peer reviewers are not influenced by knowing the authors’ identity and affiliation, a new Research Letter published today in JAMA suggests otherwise. In “Single-blind vs Double-blind Peer Review in the Setting of Author Prestige,” Kanu Okike at Kaiser Moanalua Medical Center in Hawaii and his colleagues created a fake manuscript, describing a prospective study of communication and safety during surgery and seeded with five “subtle errors,” and submitted it to Clinical Orthopaedics and Related Research (CORR). Sixty-two experts reviewed the paper under the typical “single-blind” system, in which they are told the authors’ identities and affiliations but remain anonymous to the authors. Fifty-seven reviewers vetted the same paper under the “double-blind” system, in which they did not know who co-authored the research. We spoke with Okike about some of his unexpected results.
Retraction Watch: You found that reviewers were more likely to accept papers when they could see they were written by well-known scientists at prestigious institutions. But the difference was relatively small. Did anything about this surprise you?
Would sharing all referees’ reports on a given paper with every other referee, before a decision is made to accept or reject the manuscript, make peer review fairer and quicker? This idea — called “cross-referee commenting” — is being implemented by the journal Development as part of its attempt to improve the peer-review process. Katherine Brown, executive editor of Development, based in Cambridge, UK, who co-authored a recent editorial about the approach, spoke to us about the move.
Retraction Watch: Many journals share the reviews of a particular paper with those who’ve reviewed it. What is cross-referee commenting in peer review and how is it different from current reviewing processes?
Chris Hartgerink of Tilburg University in the Netherlands has embarked on a massive project to flag statistical errors in published psychology papers. The detection process uses the algorithm “statcheck” — which we’ve covered previously in a guest post by one of its co-developers — to scan just under 700,000 results from a large sample of psychology studies. Although the trends in Hartgerink’s present data have yet to be explored, his previous research suggests that around half of psychology papers contain at least one statistical error, and one in eight contain mistakes that affect their statistical conclusions. In the current effort, the results of the checks are posted to PubPeer regardless of whether any mistakes are found, and the authors are alerted by email.
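To make the approach concrete: a reported test statistic and its degrees of freedom are enough to recompute a p-value and compare it with the one the authors stated. Below is a minimal sketch of that logic in Python. statcheck itself is an R package, and the regular expression, function name, and rounding tolerance here are illustrative assumptions of ours, not Hartgerink’s actual pipeline.

```python
# Minimal statcheck-style consistency check (illustrative sketch only).
# It parses APA-style t-test reports such as "t(28) = 2.20, p = .036",
# recomputes the two-tailed p-value from the t statistic and degrees of
# freedom, and flags reports whose stated p-value is inconsistent.
import re
from scipy import stats

APA_T = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+(?:\.\d+)?)\s*,\s*"
    r"p\s*(?P<op>[<>=])\s*(?P<p>0?\.\d+)"
)

def check_t_tests(text, tolerance=0.0005):
    """Return (reported result, recomputed p) for each inconsistent report."""
    flags = []
    for m in APA_T.finditer(text):
        df, t = int(m.group("df")), float(m.group("t"))
        op, p_reported = m.group("op"), float(m.group("p"))
        p_computed = 2 * stats.t.sf(abs(t), df)  # two-tailed p from |t| and df
        if op == "=":
            consistent = abs(p_computed - p_reported) < tolerance
        elif op == "<":
            consistent = p_computed < p_reported
        else:  # ">"
            consistent = p_computed > p_reported
        if not consistent:
            flags.append((m.group(0), round(p_computed, 4)))
    return flags

# The recomputed p-value for t(28) = 2.20 is about .036, so this is flagged:
print(check_t_tests("The groups differed, t(28) = 2.20, p = .010."))
```

The real statcheck handles more test types (F, r, χ², z), allows for rounding of the reported statistic, and distinguishes mere inconsistencies from “decision errors” that cross the .05 threshold.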
To date, the initiative is one of the largest post-publication peer review efforts of its kind. Some researchers are, however, concerned about its current process of detecting potential mistakes, particularly the fact that potentially stigmatizing entries are created even if no errors are found.
We’ve seen computer-generated fake papers get published. Now we have computer-generated fake peer reviews.
Retraction Watch readers may recall that in 2014, publishers Springer and IEEE were forced to retract more than 120 conference papers because they were all fakes, written by the devilishly clever SCIgen program and somehow published after peer review. So perhaps it was inevitable that fake computer-generated peer reviews were next.
In a chapter called “Your Paper has been Accepted, Rejected, or Whatever: Automatic Generation of Scientific Paper Reviews,” a group of researchers at the University of Trieste “investigate the feasibility of a tool capable of generating fake reviews for a given scientific paper automatically.” And 30% of the time, people couldn’t tell the difference. “While a tool of this kind cannot possibly deceive any rigorous editorial procedure,” the authors conclude, “it could nevertheless find a role in several questionable scenarios and magnify the scale of scholarly frauds.”
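The excerpt doesn’t describe how the Trieste tool works, but a deliberately naive sketch of our own shows why vague, plausible-sounding referee reports are cheap to mass-produce. The template-filling approach and the boilerplate below are invented for illustration; they are not the chapter authors’ method.

```python
# A toy fake-review generator: fill generic reviewer boilerplate with terms
# taken from the target paper. This is NOT the Trieste group's method; it
# only illustrates why content-free reviews are easy to produce at scale.
import random

TEMPLATES = [
    "The paper addresses {topic}, which is timely, but the evaluation of "
    "{method} needs more detail.",
    "The authors propose {method} for {topic}. The related-work section "
    "should be expanded, and minor typos fixed.",
    "This study of {topic} is interesting; however, I am not convinced "
    "that {method} outperforms simpler baselines.",
]

def fake_review(topic, method, n_sentences=2):
    """Assemble a vague but plausible-sounding review from boilerplate."""
    picks = random.sample(TEMPLATES, n_sentences)
    return " ".join(t.format(topic=topic, method=method) for t in picks)

print(fake_review("automatic review generation", "a template-based model"))
```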
We spoke to one of the chapter’s authors, Eric Medvet, by email.
Retraction Watch: In the paper, you test the feasibility of computer-generated fake peer reviews. Why?
Authors of systematic review articles sometimes overlook misconduct and conflicts of interest present in the research they are analyzing, according to a recent study published in BMJ Open.
During the study, researchers reviewed 118 systematic reviews published in 2013 in four high-profile medical journals — Annals of Internal Medicine, the British Medical Journal, The Journal of the American Medical Association and The Lancet. The authors also contacted the review authors with follow-up questions; 80 (69%) responded. The analysis examined whether the review authors had followed certain procedures to ensure the integrity of the data they were compiling, such as checking for duplicate publications and assessing whether conflicts of interest may have affected the findings.
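To give one concrete example of such a procedure: duplicate publications can sometimes be flagged simply by comparing the titles of included studies for near-identical wording. The sketch below is an assumption of ours about how such a check might look, not a procedure described in the BMJ Open study.

```python
# Illustrative duplicate-publication screen: flag pairs of included studies
# whose normalized titles are nearly identical. The similarity measure and
# the 0.9 threshold are arbitrary choices made for this sketch.
from difflib import SequenceMatcher
from itertools import combinations

def likely_duplicates(titles, threshold=0.9):
    """Return pairs of titles whose similarity ratio meets the threshold."""
    norm = [t.lower().strip() for t in titles]
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(norm), 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            pairs.append((titles[i], titles[j]))
    return pairs

titles = [
    "Effect of drug X on blood pressure: a randomized trial",
    "The effect of drug X on blood pressure: randomized trial",
    "Dietary salt and hypertension in adults",
]
print(likely_duplicates(titles))  # flags the first two titles
```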
Carrying out a systematic review involves collecting and critically analyzing multiple studies in the same area. It’s especially useful for accumulating and weighing conflicting or supporting evidence from multiple research groups. A byproduct of the process is that it can also help spot odd practices such as duplication of publications.
The problem of publication bias — giving higher marks to a paper that reports positive results rather than judging it on its design or methods — plagues the scientific literature. So if reviewers are too focused on the results of a paper, would stripping a paper of its findings solve the problem? That was the question explored in a recent experiment by guest editors of Comparative Political Studies. Mike Findley, an associate professor at the University of Texas at Austin and one of the guest editors of the journal, talked to us about a new paper explaining what they learned.
Retraction Watch: Can you explain what a “results-free” paper looks and reads like?
Ever read a review where the editor or reviewer seems to be looking specifically for reasons to reject a paper? Neil Herndon of the South China University of Technology in Guangzhou, editor-in-chief of the Journal of Marketing Channels, has. In a recent editorial, Herndon calls this type of review “gotcha” peer reviewing, and presents an alternative.
Retraction Watch: What is “gotcha” reviewing? What is its purpose and who is practicing it for the most part?