“I am really sorry:” Peer reviewer stole text for own paper


We’re sharing a relatively old retraction notice with you today, because it’s of a nature we don’t often see: A chemist apparently stole text from a manuscript he was reviewing.

In the spring of 2009, Yi-Chou Tsai, a chemist at National Tsing Hua University in Taiwan, was reviewing a paper for Nature Chemistry. At the time, he’d asked a colleague to write a review article with him, so he forwarded the colleague the Nature Chemistry manuscript for reference. But some of that text ended up in their review paper, “Recent Progress in the Chemistry of Quintuple Bonds,” published in Chemistry Letters.

Both papers were published in 2009; Chemistry Letters retracted the review the next year.

The retraction includes a statement from Tsai, who puts the blame on his co-author, Chih-Chieh Chang, also listed as affiliated with NTHU (we couldn’t find a webpage for him):

Continue reading “I am really sorry:” Peer reviewer stole text for own paper

Desalination journal let a plagiarized paper — from the same journal — through its filter


The editor of Desalination has retracted a paper that plagiarized from another article published in the same journal six years earlier. The papers describe desalination systems, of course.

This retraction happened on a relatively quick timeline: The paper, “An integrated optimization model and application of MEE-TVC desalination system,” was published online in June, and pulled in January.

Here’s the retraction note:

Continue reading Desalination journal let a plagiarized paper — from the same journal — through its filter

What did retractions look like in the 17th century?

Alex Csiszar

We always like to get a historical perspective on how scientists have tried to correct the record, such as this attempt in 1756 to retract a published opinion about some of the work of Benjamin Franklin. Although that 18th century note used the word “retract,” it wasn’t a retraction like what we see today, in which an entire piece of writing is pulled from the record. These modern-day retractions are a relatively recent phenomenon, which only took off within the last few decades, according to science historian Alex Csiszar at Harvard University. He spoke to us about the history of retractions – and why an organization like Retraction Watch couldn’t have existed 100 years ago.

Retraction Watch: First of all, let’s start with something you found that appears to break our previous record for the earliest retraction – a “retractation” by William Molyneux of some assertions about the properties of a stone, published in 1684. Could this be the earliest English-language retraction? Continue reading What did retractions look like in the 17th century?

Weekend reads: Science reporter fired; crappiest fraud ever; are journals necessary?

This week at Retraction Watch featured a big new study of retractions, another that looked at scientist productivity over time, and a new statement on how to use p values properly. Here’s what was happening elsewhere: Continue reading Weekend reads: Science reporter fired; crappiest fraud ever; are journals necessary?

Algorithm paper retracted for “significant overlap” with another

A paper on a hybrid algorithm turned out to be a hybrid itself — some original data, plus some from a paper that the authors had published earlier.

According to the retraction note, the overlap was significant enough to pull it from the scientific record.

The retracted paper describes an algorithm that combines a “genetic algorithm” and a “cultural algorithm” – which, as their names sort of suggest, focus on evolving a population of solutions and on keeping a record of which kinds of solutions work, respectively. According to the abstract, solutions to optimization problems found with the hybrid algorithm are “more accurate and the fast convergence is obvious.”
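For readers who haven’t run into those terms, here’s a rough sketch of how such a hybrid can work in practice. To be clear, this is our own illustrative Python example (it minimizes a simple test function), not the method described in the retracted paper:

```python
# Illustrative sketch of a genetic algorithm guided by a "cultural" belief
# space -- a generic example of the two ideas, not the retracted paper's method.
import random

DIM, POP_SIZE, GENERATIONS = 5, 40, 100
LOW, HIGH = -5.0, 5.0

def fitness(x):
    # Sphere function: smaller is better.
    return sum(v * v for v in x)

def random_individual():
    return [random.uniform(LOW, HIGH) for _ in range(DIM)]

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, belief):
    # "Cultural" influence: mutation draws from the belief space's current
    # per-dimension ranges rather than from the whole search space.
    child = []
    for i, v in enumerate(x):
        if random.random() < 0.2:
            lo, hi = belief[i]
            v = random.uniform(lo, hi)
        child.append(v)
    return child

def update_belief(belief, elites):
    # Normative knowledge: shrink each dimension's range to span the elites.
    for i in range(DIM):
        values = [e[i] for e in elites]
        belief[i] = (min(values), max(values))

population = [random_individual() for _ in range(POP_SIZE)]
belief = [(LOW, HIGH)] * DIM  # belief space: accepted range per dimension

for gen in range(GENERATIONS):
    population.sort(key=fitness)
    elites = population[: POP_SIZE // 5]
    update_belief(belief, elites)          # cultural step
    children = []
    while len(children) < POP_SIZE - len(elites):
        a, b = random.sample(elites, 2)    # parent selection from the elites
        children.append(mutate(crossover(a, b), belief))
    population = elites + children         # genetic step

best = min(population, key=fitness)
print("best fitness:", fitness(best))
```

The “cultural” part is the belief space, which records the ranges the best solutions currently occupy and nudges mutation toward them; the rest is a plain genetic algorithm.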

The retraction note provides a few details about the nature of the duplication:

Continue reading Algorithm paper retracted for “significant overlap” with another

Journal retracts bioelectronics paper for lack of credit to collaborators

The list of co-authors on a paper about a “bioelectronic composite” was apparently too sparse.

According to its retraction note — posted at the request of the editor-in-chief and the corresponding author — the paper failed to include some of the collaborators.

The Biosensors & Bioelectronics paper looks at a protein complex that could function as part of a “bio-hybrid” device, like a sensor or a solar cell. It has been cited only by its retraction, according to Thomson Reuters Web of Science.

What went wrong in allotting credit for the work is pretty straightforward, according to the note for “Monolayers of pigment–protein complexes on a bare gold electrode: Orientation controlled deposition and comparison of electron transfer rate for two configurations.” Here it is in full:

Continue reading Journal retracts bioelectronics paper for lack of credit to collaborators

Poll: If authors don’t address mistakes, is that misconduct?

In an interesting letter printed in today’s Nature, biologists Sophien Kamoun and Cyril Zipfel suggest that “failure by authors to correct their mistakes should be classified as scientific misconduct.”

They note that this policy is already in place at their institute, The Sainsbury Laboratory (TSL).

We contacted Kamoun to ask what constituted a mistake, given that numerous papers have received queries (on sites such as PubPeer, for instance) where it’s not clear whether those point to legitimate mistakes. He told us: Continue reading Poll: If authors don’t address mistakes, is that misconduct?

Papers with simpler abstracts are cited more, study suggests

Research papers containing abstracts that are shorter and consist of more commonly used words accumulate citations more successfully, according to a recent study published in the Journal of Informetrics.

After analyzing more than 200,000 academic papers published between 1999 and 2008, the authors found that longer abstracts were slightly less likely to be cited than those half as long. Keeping it simple also mattered: abstracts heavy on familiar words such as “higher,” “increased” and “time” earned slightly more citations than others. Even adding a five-letter word to an abstract reduced citation counts by 0.02%.

According to Mike Thelwall, an information scientist at the University of Wolverhampton, UK, who was not a co-author on the paper: Continue reading Papers with simpler abstracts are cited more, study suggests

More retractions bring total to 7 for neuroscience pair, 2 more pending

Authors have retracted two papers about visual perception and working memory from the Journal of Cognitive Neuroscience, after the first author admitted to falsifying or fabricating data in four other papers.

The authors have requested another two retractions, as well, which will bring the total for Edward Awh and his former graduate student David Anderson to nine retractions. (Earlier in 2015, they lost a paper due to an error in the analytic code, which Awh told us was unrelated to the misconduct.)

The retraction notice attached to both articles cites a 2015 settlement agreement between the Office of Research Integrity and first author Anderson (the “respondent”), who admitted to misconduct while working as a graduate student in Awh’s lab at the University of Oregon in Eugene. Since then, “additional problems” have been discovered in the newly retracted articles, such as removed data points.

Awh, who has since moved to the University of Chicago, sent us a lengthy statement, explaining the concerns about each article: Continue reading More retractions bring total to 7 for neuroscience pair, 2 more pending

Researchers’ productivity hasn’t increased in a century, study suggests

Are individual scientists now more productive early in their careers than 100 years ago? No, according to a large analysis of publication records released by PLOS ONE today.

Despite concerns about rising “salami slicing” of research papers under the “publish or perish” culture of academic publishing, the study found that the productivity of individual early-career researchers has not increased in the last century. The authors analyzed more than 760,000 papers across all disciplines, published by 41,427 authors between 1900 and 2013 and cataloged by Thomson Reuters Web of Science.

The authors summarize their conclusions in “Researchers’ individual publication rate has not increased in a century”:

Continue reading Researchers’ productivity hasn’t increased in a century, study suggests