In more faked peer review news…10 papers pulled by Hindawi

Guess what? We’ve got more cases of fraudulent peer review to report — our second post of the day on the subject, in fact. In the latest news, Hindawi Publishing Corporation has retracted 10 papers for “fraudulent review reports,” after an investigation of more than 30 papers that had been flagged this summer.

The investigation found that author Jason Jung, a computer engineer at Yeungnam University in Korea, “was involved in submitting the fraudulent review reports” for four of the retracted papers, according to the publisher’s CEO. In the case of the other six, the authors didn’t appear to be involved.

Hindawi Publishing Corporation, which publishes over 400 journals, doesn’t ask authors for potential review suggestions — making a common route to fake peer review more difficult.  In July, when Hindawi announced it was investigating the papers, it posted a statement saying that they suspected the editors had created fake reviewer accounts.

The retraction note on Jung’s papers — identical except for the title at the beginning — explains that each paper has


Investigation into CT scan paper reveals plagiarism

A paper on the quality of computed tomography (CT) images of the human body didn’t stand up to a close examination. It’s been retracted after an investigation found that it plagiarized work from two publications and a poster by another researcher.

The text in the Journal of the Korean Physical Society paper was taken from work by Kenneth Weiss, a radiologist at the University of Miami, and Jane Weiss, CFO of the couple’s medical imaging company. According to emails that Jane Weiss forwarded to us, Kenneth Weiss brought the plagiarism to light after a PhD student pointed out the similarities between the JKPS paper and one of Weiss’s in the American Journal of Roentgenology. Weiss notified the AJR in January. They started an investigation into the matter, and alerted the JKPS.

The retraction note for “Measurement of image quality in CT images reconstructed with different kernels” provides more details about the investigation:


3-D printing paper accidentally includes secrets


A paper on 3-D printing has been pulled because it “inadvertently” included some sensitive material.

We’re not sure which parts of the paper were the specific problem. But the sensitive material may have something to do with how to improve the surfaces of 3-D printed products, which is the subject of “Feasibility of using Copper(II)Oxide for additive manufacturing.”

Here’s what the paper, published in the International Journal of Precision Engineering and Manufacturing, contains, according to the abstract:

Additive manufacturing, in spite of its ever wider application range, is still plagued by issues ranging from accuracy to surface finish. In this study, to address the latter issue, the feasibility of using Copper(II)Oxide powder with a polymer binder deposited through a Fused Deposition Modeling (FDM) 3D printing technique is explored.

Here’s the retraction note:


Can journals get hijacked? Apparently, yes

Did you recently log onto your favorite journal’s website and see this? (For anyone who doesn’t want to bother clicking, it’s the video from Rick Astley’s “Never Gonna Give You Up.”) If so, your favorite journal was hijacked.

In today’s issue of Science, John Bohannon (who recently published a bogus study about the benefits of chocolate) explains how easy it is to take over a journal’s website — so easy, in fact, that he did it himself. And he’s not the only one, he reports:

Making error detection easier – and more automated: A guest post from the co-developer of “statcheck”

Michèle B. Nuijten

We’re pleased to present a guest post from Michèle B. Nuijten, a PhD student at Tilburg University who helped develop a program called “statcheck,” which automatically spots statistical mistakes in psychology papers, making it significantly easier to find flaws. Nuijten writes about how such a program came about, and its implications for other fields.

Readers of Retraction Watch know that the literature contains way too many errors – to a great extent, as some research suggests, in my field of psychology. And there is evidence that the problem is only likely to get worse.

To reliably investigate these claims, we wanted to study reporting inconsistencies at a large scale. However, extracting statistical results from papers and recalculating the p-values is not only very tedious, it also takes a LOT of time.

So we created a program known as “statcheck” to do the checking for us, by automatically extracting statistics from papers and recalculating p-values. Unfortunately, we recently found that our suspicions were correct: Half of the papers in psychology contain at least one statistical reporting inconsistency, and one in eight papers contain an inconsistency that might have affected the statistical conclusion.
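statcheck itself is an R package that parses t, F, r, χ², and z statistics from APA-formatted text. As a minimal illustration of the recomputation step only, here is a hypothetical Python sketch that checks whether a reported two-tailed p-value is consistent with a reported z statistic (the function names and regex are my own, not statcheck’s, and real reports need far more careful parsing):

```python
import math
import re

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value for a z statistic, via the standard normal CDF."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # P(Z <= |z|)
    return 2.0 * (1.0 - phi)

def is_consistent(z: float, reported_p: float, decimals: int = 2) -> bool:
    """True if the reported p matches the recomputed p after rounding."""
    return round(two_tailed_p(z), decimals) == round(reported_p, decimals)

def check_report(text: str) -> bool:
    """Parse a string like 'z = 2.20, p = .03' and check consistency."""
    m = re.search(r"z\s*=\s*([\d.]+),\s*p\s*=\s*([\d.]+)", text)
    if not m:
        raise ValueError("no z/p pair found")
    z, p = float(m.group(1)), float(m.group(2))
    return is_consistent(z, p)
```

With this sketch, `check_report("z = 2.20, p = .03")` passes, while a garbled value such as `p = .003` would be flagged as an inconsistency. The real statcheck handles more cases, including one-tailed tests and inequality reporting like p < .05.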

The origins of statcheck began in 2011.

Plagiarism detected in two papers on improving detection of cancer by mammograms


A group of computer scientists has a pair of retractions for duplicating “substantial parts” of other articles written by different authors. Both papers, published in Neural Computing and Applications, are on ways to screen for breast cancer more effectively.

According to the abstract of “An improved data mining technique for classification and detection of breast cancer from mammograms,” computers make the process of identifying cancer in lesions detected by mammograms faster and more accurate:

Although general rules for the differentiation between benign and malignant breast lesion exist, only 15–30% of masses referred for surgical biopsy are actually malignant. Physician experience of detecting breast cancer can be assisted by using some computerized feature extraction and classification algorithms. Computer-aided classification system was used to help in diagnosing abnormalities faster than traditional screening program without the drawback attribute to human factors.

The article has been cited four times, according to Thomson Scientific’s Web of Knowledge. The retraction note reveals where “substantial parts” of the article came from:


BMC retracts paper by scientist who banned use of his software by immigrant-friendly countries

A BioMed Central journal has pulled the paper of a scientist who decided to prohibit countries that are friendly to immigrants from using his software.

Recently, German scientist Gangolf Jobb declared that starting on October 1st scientists working in countries that are, in his opinion, too welcoming to immigrants — including Great Britain, France and Germany — could no longer use his Treefinder software, which creates trees showing potential evolutionary relationships between species. He’d already banned its use by U.S. scientists in February, citing the country’s “imperialism.” Last week, BMC Evolutionary Biology pulled the paper describing the software, noting it now “breaches the journal’s editorial policy on software availability.”

Many scientists have used Jobb’s software: The BMC paper that describes it, “TREEFINDER: a powerful graphical analysis environment for molecular phylogenetics,” has been cited 745 times since it was published in 2004, according to Thomson Scientific’s Web of Knowledge.

Jobb told Retraction Watch that the software is still available to any scientist willing to travel to non-banned countries, and that he does not care about the retraction:

JAMA issues mega-correction for data breach letter due to “wording and data errors”

A JAMA letter published in April on data breaches accidentally included some data that shouldn’t have been published, either — specifically, “wording and data errors” that affected five sentences and more than 10 entries in a table. One result — a reported increase in breaches over time — also went from statistically significant to “borderline” significant, according to the first author. (So yeah, this post earns our “mega correction” category.)

According to an author, an “older version” of a table made it into the letter, “Data Breaches of Protected Health Information in the United States,” which was corrected in the journal’s June 23/30 issue.

The letter and table in question detail 949 breaches of “unencrypted protected health information.” The letter says the number of breaches increased from 2010 to 2013; the original article claimed that the P value on that increase was <.001, but the correction says it’s really 0.07. The original says 29.1 million personal records were affected in those breaches; the real number is 29.0 million. And so on.


“You don’t retract a paper, you retract the results within:” Why one scientist still displays one of his mistakes

Lance Fortnow

And now, one from the archives.

In 1989, then MIT grad student Lance Fortnow (he’s now chair of the computer science department at Georgia Tech) wrote a mathematical proof and published it in conference proceedings. He later went on to publish the proof in a journal.

But he then discovered “unexpected technical challenges” and published a retraction in 1997. Both are still available on his personal website.

Not everyone would be that transparent. We reached out to ask why he left them up for people to see. He gave us his rationale:

The camel doesn’t have two humps: Programming “aptitude test” canned for overzealous conclusion

Photo via Benutzerin:BS Thurner Hof

From Larry Summers to James Watson, certain scientists have a long and questionable tradition of using “data” to make claims about intelligence and aptitude.

So it’s no surprise that, when well-known computer scientist Richard Bornat claimed his PhD student had created a test to separate people who would succeed at programming from those who wouldn’t, people happily embraced it. After all, it’s much easier to say there’s a large population that will just never get it than to re-examine your teaching methods.

The paper, called “The camel has two humps,” suggested that instead of a bell curve, programming success rates look more like a two-humped ungulate: the kids who get it, and the kids who never will.

Though the paper was never formally published, it made the rounds pretty extensively. Now, Bornat has published a retraction, stating that he wrote the article during an antidepressant-driven mania that also earned him a suspension from his university. Here’s the meat of the notice: