The week at Retraction Watch featured the retraction of a 35-year-old paper written by a cat, and the retraction of a study about a controversial gene editing technique. Here’s what was happening elsewhere:
- A BS detector for science? “Science is still the best way of knowing stuff. Darpa just wants to know what stuff science is really sure about, and how it knows it. And how it knows it knows it.” (Adam Rogers, Wired)
- Scholarly publishing’s 1%: An alternative to the Impact Factor, the Impact Quotient measures what percentage of a journal’s papers reach the top 1% most-cited papers in that research area. (Phil Davis, The Scholarly Kitchen)
- Is it time for an s-index, to monitor self-citations? (Publications)
- A “recent study funded by members of the International Association of Color Manufacturers (IACM) and written by IACM staff, members, and consultants touting the safety of food dyes is so riddled with inaccuracies and misleading statements that it should be retracted and disregarded,” says the Center for Science in the Public Interest.
- “The University of Washington just fired a tenured professor for the first time,” reports Azeen Ghorayshi. (BuzzFeed) And a Caltech professor who sexually harassed two students has resigned.
- To fix peer review, “a hundred thousand journals need to die,” say Alex Welte and Eduard Grebe. (GroundUp)
- “We manually monitored the retractions appearing on ‘Retraction Watch’ for six months, which led us to the assumption that most undetected image manipulation could be avoided if publishers/editors implemented a routine check for the described manipulation.” (Science and Engineering Ethics)
- “The rise of unproven stem cell therapies turned this obscure scientist into an industry watchdog.” Science’s Kelly Servick profiles Paul Knoepfler.
- “Yes, Your Manuscript Was Due 30 Years Ago. No, the University Press Still Wants It.” (Chris Quintana, Chronicle of Higher Education)
- Between 2005 and 2014, more than a quarter of the research at three universities in South Africa ended up in bogus journals, reports Dave Chambers. (BusinessDay)
- “Targets and incentives, carrots and sticks are all very well, but if they are not realistically achievable, they are merely a source of angst – and, potentially, a driver of research misconduct.” (Martin Surya Mulyadi, Times Higher Education)
- Björn Brembs offers “Seven functionalities the scholarly literature should have.” (LSE Impact Blog)
- “While men make up the majority of invited speakers at four major virology conferences, recent trends demonstrate a greater inclusion of women.” (Aggie Mika, The Scientist)
- “Do our measures of academic success hurt science?” ask Rinze Benedictus and Frank Miedema. (Inside Higher Ed)
- 11 academics at North-West University are under investigation for plagiarism, reports Msindisi Fengu. (News24/City Press)
- Andrew Gelman wants to “start a new feature, Letters to the Editor of Perspectives on Psychological Science, which will feature corrections that this journal refuses to print.”
- In open public peer review, the “recommendations are extreme, and the outcome is likely to be random when the compromise is the median of the reviewers’ recommendations.” (Scientometrics, sub req’d)
- David Matthews has the story of “how Romania’s plagiarism hunter took on the prime minister” over his PhD. (Times Higher Education)
- Randomized controlled trials are the gold standard in clinical research, but they’re also expensive, time-consuming, and a challenge to conduct ethically. Time for change, says former U.S. CDC director Tom Frieden. (STAT)
- When it comes to misconduct, why is the call always for more research when other potential remedies are routinely ignored? ask Donald S. Kornfeld and Sandra L. Titus. (Nature)
- The Salk Institute defends itself against allegations of sexism … by using a sexist metric. (Ian Graber-Stiehl, Slate)
- A preprint posted to bioRxiv without a methods section prompts discussion about what to do with preprints that don’t meet scientists’ standards. (Diana Kwon, The Scientist)
- “Tweeting about journal articles: Engagement, marketing or just gibberish?” (arXiv)
- Elsevier acquires bepress in a move that will position Elsevier as an increasingly dominant player in preprints, says Roger C. Schonfeld. (The Scholarly Kitchen)
- A student newspaper in South Africa retracts a story on one-hit wonders after Twitter users pointed out that every artist on the list had enjoyed multiple hit songs. (Shandukani Mulaudzi, HuffPost South Africa)
- The University of Tokyo finds two of its researchers committed misconduct across five papers. (Mizuho Aoki, The Japan Times)
- Can PhD students write review papers? asks Neuroskeptic. (Discover)
- Protesting Springer’s high prices, all four editors-in-chief of a math journal intend to launch a rival, open access journal. (Lindsay McKenzie, Inside Higher Ed)
- “Research funders are making a strong statement that there will be no more excuses on why some clinical trials remain unreported long after they have completed.” (Till Bruckner, Inside Philanthropy)
- Anthony Scaramucci’s White House career may be dead, but he certainly isn’t — despite the fact that the Harvard Law School alumni directory listed him that way. (Collin Binkley, Associated Press, via Western Mass News)
- “Important to note is that to avoid the problems of ‘significosis’ (i.e., only publishing significant results) and ‘arigorium’ (i.e., a lack of rigor, particularly in design and estimation, see Antonakis, 2017), this special issue will use a registered research approach only.” A call for papers. (The Leadership Quarterly)
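The Impact Quotient described in the Scholarly Kitchen item above — the percentage of a journal’s papers that reach the top 1% most-cited papers in their field — can be illustrated with a short sketch. This is a minimal, assumed reading of that definition, not Phil Davis’s actual methodology; the function name and all citation counts are invented for illustration.

```python
def impact_quotient(journal_citations, field_citations):
    """Fraction of a journal's papers whose citation counts reach the
    field-wide top-1% threshold (one plausible reading of the metric)."""
    if not journal_citations or not field_citations:
        return 0.0
    ranked = sorted(field_citations, reverse=True)
    # Citation count needed to sit in the field's top 1% (at least one paper).
    cutoff_index = max(1, round(len(ranked) * 0.01)) - 1
    threshold = ranked[cutoff_index]
    hits = sum(1 for c in journal_citations if c >= threshold)
    return hits / len(journal_citations)

# Toy example: a field of 1,000 papers with citation counts 0..999,
# and a journal whose four papers include two above the top-1% cutoff.
field = list(range(1000))
journal = [5, 990, 995, 40]
print(impact_quotient(journal, field))  # → 0.5
```

Under this reading, the metric rewards journals whose output is concentrated at the very top of the citation distribution, rather than averaging over all papers as the Impact Factor does.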
Like Retraction Watch? Consider making a tax-deductible contribution to support our growth. You can also follow us on Twitter, like us on Facebook, add us to your RSS reader, sign up on our homepage for an email every time there’s a new post, or subscribe to our daily digest. Click here to review our Comments Policy. For a sneak peek at what we’re working on, click here.
My lab head is the author of one of the top 100 most cited papers in history. The paper introduced a basic statistical method in phylogenetics, and everyone who uses that method therefore cites it, which adds up to a lot of citations. But it is not his best or most influential paper; it’s just something that a great many people routinely need to cite. (Probably the vast majority of those who cite it haven’t read it.) A large proportion of the top 100 are papers of this kind.
In other words, I think this is just as bad a metric as the impact factor, if not worse.
About the system of incentives in Indonesia (https://www.timeshighereducation.com/opinion/carrots-and-sticks-are-not-enough): did I understand it right that the incentives to publish in Scopus-indexed journals in Indonesia led to fewer citations? The Times Higher Education article is about something else, but it seems to me that this fall in citations would be the really striking point…
Kornfeld and Titus are correct, and it takes little insight to see it; it has been said in the RW comments before: make the punishment fit the offense. If someone uses fraudulent means to obtain public money, they should, indeed must, face criminal jeopardy! Civil penalties may also be appropriate when misconduct has been perpetrated.