The week at Retraction Watch featured the tale of a scientist whose explanations for misconduct kept changing, and revelations in a big legal case involving Duke University. Here’s what was happening elsewhere:
- “So I confess. I do look at Impact Factors. I look at citation metrics. I even count papers.” Merlin Crossley disagrees with the advice of Nobel laureates. (The Conversation) Those Nobel Prize winners say: “The research counts, not the journal!” (YouTube)
- “The sheer quantity of scientific research published must mean the quality has gone down,” says David Spiegelhalter, who criticizes journals for not providing a forum to “call out” flawed research. “There should just be less published.” (Hannah Devlin, The Guardian)
- This seems challenging: The incoming principal of a Montana school resigns after plagiarizing the majority of an email introducing himself. (Dillon Tabish, Flathead Beacon) But principals across the U.S. are apparently plagiarizing their welcome letters, an investigation finds. (David Winter, ABC FOX Montana)
- “Can cancer researchers accurately judge whether preclinical reports will reproduce?” Meh, says a new study. (PLOS Biology)
- “The unquestioning acceptance of peer review as final validation in the field of medicine emphasises not only the responsibility held by medical journals to ensure peer review is done well but also the need to raise awareness amongst the medical community of the limitations of the current peer review process.” (BioMed Central)
- Introducing a prize for young researchers dedicated to good scientific writing. (Cambridge Core)
- Shifting to open access “is evolutionary and revolutionary—too slow for some, but too radical for others,” says Stephen Curry. (ACS Omega)
- What are the most important retractions in cancer research? Our co-founders Ivan Oransky and Adam Marcus weigh in. (CollabRx)
- “Today’s saturated digital content landscape means the biggest obstacle will always be to get people’s attention. But research behind a paywall creates yet another barrier to views and wider attention.” (Sierra Williams, PeerJ blog)
- Some journals check every image in every paper, while others do only random spot checks, or nothing at all. Perhaps it would be better to stop image manipulation at the source, says Nature.
- “Picking off fraudsters can be satisfying, but it does not solve the systemic problems of research.” (Matt Hodgkinson, Hindawi blog)
- “Violations of Benford-Newcomb’s law about the frequencies of the leading digits cannot serve as proof of falsification but they may provide a basis for deeper discussions between the editor and author about a submitted work.” (Der Anaesthesist) And find our coverage of a similar potential solution for weeding out fabricated data here. (A minimal code sketch of such a leading-digit check appears just after this list.)
- “It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free, and asked the government to foot the bill.” Stephen Buranyi on scientific publishing. (The Guardian)
- Three CNN reporters resign in the wake of a retracted story that reported on an investigation into a Russian investment fund. (Brian Stelter, CNN)
- “What are the most important recent developments within the world of metrics?” Tim Gillett asks several experts in the field. (Research Information)
- The latest attempts by a high-profile project to replicate important cancer research findings are promising, but there are still plenty of concerns. (Jocelyn Kaiser, Science)
- A Canadian biotech company hopes to expand to San Diego, even as its chief executive officer is embroiled in allegations of scientific misconduct. (Bradley J. Fikes, The San Diego Union-Tribune)
- A researcher wins $22 million from a lab owner who caused the lab’s destruction. (Priyanka Dayal McCluskey, Boston Globe)
- Spoof papers have been with us for decades, such as articles from the 1970s on salt passage. (Current Psychology)
- “Our study shows that although 73 percent of the researchers surveyed say that having access to other researchers’ data helps them in their own work, 34 percent indicated that they encounter obstacles when making their own data available.” (Sacha Boucherie, Helena Cousjin, and Federica Rosetta, Elsevier Connect)
- “When do psychological phenomena exist?” A look at reproducibility in psychological science. (Frontiers in Psychology)
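For the curious, here is a minimal sketch in Python of the kind of leading-digit screen the Der Anaesthesist piece describes. The function names, sample data, and chi-square cutoff are our own illustrative assumptions, not the authors’ method, and, as the quote stresses, a deviation is a conversation starter, not proof of misconduct.

```python
import math
from collections import Counter

# Newcomb-Benford law: P(d) = log10(1 + 1/d) for leading digit d = 1..9
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x: float) -> int:
    """Return the first significant digit of a nonzero number."""
    s = f"{abs(x):.15e}"  # scientific notation, e.g. '1.324...e+02'
    return int(s[0])

def benford_chi_square(values) -> float:
    """Chi-square statistic comparing observed leading-digit
    frequencies against the Benford-Newcomb expectation."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    observed = Counter(digits)
    return sum(
        (observed.get(d, 0) - n * p) ** 2 / (n * p)
        for d, p in BENFORD.items()
    )

# Hypothetical column of reported measurements. With 8 degrees of
# freedom, a statistic above roughly 15.5 (the 5% critical value)
# suggests the leading digits deviate from Benford's law, which is
# grounds for a conversation with the author, not proof of fraud.
data = [132.4, 18.9, 2.07, 341.0, 56.2, 1.9, 27.3, 88.1, 14.6, 203.5]
print(f"chi-square = {benford_chi_square(data):.2f}")
```

Note that such a screen is only informative with many values per manuscript; small samples, and variables that naturally span a narrow range, violate the assumptions behind Benford’s law.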
“This seems challenging: The incoming principal of an Idahoan school resigns …”
The school is in Montana, not Idaho — the principal was moving to Montana from a position in Idaho…
Here is something to potentially add: Alexandra Elbakyan has posted “Some facts on Sci-Hub that Wikipedia gets wrong.”
https://engineuring.wordpress.com/2017/07/02/some-facts-on-sci-hub-that-wikipedia-gets-wrong/
Crossley (“I confess, I do look at impact factors”) writes as a manager of people whose areas of specialization he doesn’t understand, which is why he needs numbers, even if the numbers aren’t very good. That could be read as an argument that university decision making has become too centralized and managerial, but he does not seem to have intended it that way.
Everybody must make decisions based on imperfect indicators, but it is a real problem when the indicators produce behavior that degrades the very thing they are meant to measure. As Crossley must know, his comparison with the heights of basketball players ignores the simple fact that athletes don’t choose their heights, while academics’ publishing strategies are chosen, and are shaped by incentives. That these high-powered incentives create perverse outcomes is a central claim of the critics of their use. Crossley concedes this in passing (“…can be gamed in various ways…”), but in the end he gives no weight to the problem and leaves us with a picture of metrics as measurements that do not affect what they measure: “simply indicators or messengers… hard, cold numbers.”
Input = output! It is well established that the number of papers published rises with the amount of money spent on academic research. The notion that quality is simply diluted by quantity misses the forest for the trees. Meanwhile, the politics of academic budgeting encourages grant income but decries spending on the results.